Dataset columns:
entry_id: string (length 33)
published: string (length 14)
title: string (length 15–199)
authors: list
primary_category: string (length 5–18)
categories: list
text: string (length 1–461k)
http://arxiv.org/abs/2307.00215v1
20230701034446
A Constructive Approach to Function Realization by Neural Stochastic Differential Equations
[ "Tanya Veeravalli", "Maxim Raginsky" ]
math.OC
[ "math.OC", "cs.LG" ]
A Constructive Approach to Function Realization by Neural Stochastic Differential Equations Tanya Veeravalli   Maxim Raginsky ======================================================================================================== The problem of function approximation by neural dynamical systems has typically been approached in a top-down manner: any continuous function can be approximated to arbitrary accuracy by a sufficiently complex model with a given architecture. This can lead to high-complexity controls which are impractical in applications. In this paper, we take the opposite, constructive approach: we impose various structural restrictions on the system dynamics and then characterize the class of functions that can be realized by such a system. The systems are implemented as a cascade interconnection of a neural stochastic differential equation (Neural SDE), a deterministic dynamical system, and a readout map. Both probabilistic and geometric (Lie-theoretic) methods are used to characterize the classes of functions realized by such systems. neural dynamical systems; stochastic differential equations; geometric control theory § INTRODUCTION There is an extensive literature on function approximation by neural nets. For instance, a classical result due to Leshno et al. <cit.> states the following: Let σ : ℝ→ℝ be a continuous function which is not a polynomial. Then for any continuous f : ℝ^n →ℝ, any compact K ⊂ℝ^n, and any ε > 0, there exist N tuples (c_i,a_i,b_i) ∈ℝ×ℝ^n ×ℝ such that sup_x ∈ K|f(x) - ∑^N_i=1 c_i σ(a_i^⊤ x + b_i)| ≤ε (in fact, the universal approximation property holds only if σ is not a polynomial). The approximating function in (<ref>) is an instance of a neural net with one hidden layer, N hidden units (or neurons), and activation function σ. There are also universal approximation results for multilayer neural nets <cit.>, as well as depth separation results <cit.> which show (constructively) that there exist functions that admit efficient representation by multilayer neural nets, yet require exponentially more hidden units for representation by “shallower” nets. Recently, motivated in large part by the widespread use of deep learning, there has been a lot of interest in continuous-time abstractions of neural nets <cit.>, aptly termed neural ODEs. These are modeled by dynamical systems of the form ż(t) = f(z(t),w(t)), z(0) = α(x), y = β(z(1)), where the q-dimensional input x is mapped to an initial n-dimensional state z(0) by a read-in map α : ℝ^q →ℝ^n, the p-dimensional output y is produced from the state z(1) at time t=1 by a read-out map β : ℝ^n →ℝ^p, and f(·,w) is a family of vector fields on ℝ^n parametrized by w ∈ℝ^m. For example, we could have w = (C,A,b) ∈ℝ^n × k×ℝ^k × n×ℝ^k and f(x,w) = Cσ(Ax + b), where σ is the diagonal map σ(z) := (σ(z_1),…,σ(z_k))^⊤, z ∈ℝ^k. The problem of universal approximation is then to establish the conditions under which any continuous function f : ℝ^q →ℝ^p can be approximated, to any given accuracy ε, on a given compact set K ⊂ℝ^q by a system of the form (<ref>) for a suitable choice of the control law w : [0,1] →ℝ^m. This problem has been addressed in a number of recent works <cit.>. While the methods and techniques differ, the overall philosophy of the results is top-down: under suitable structural assumptions on α, β, and f(·,w), any continuous function can be approximated to any desired accuracy by choosing a suitable control w(·). 
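As a concrete illustration of the neural ODE map x ↦ β(z(1)), the sketch below integrates ż = Cσ(Az + b) with forward Euler under a piecewise-constant control; all dimensions and parameter values are toy choices of ours, not a construction from the works cited above:

```python
import numpy as np

def neural_ode_map(x, controls, alpha, beta, sigma=np.tanh, steps_per_piece=20):
    """Integrate z' = C(t) sigma(A(t) z + b(t)) on [0, 1] with forward Euler.

    `controls` is a list of (C, A, b) triples, interpreted as a piecewise-constant
    control law w(t) on equal subintervals of [0, 1].
    """
    z = alpha(x)                      # read-in map: initial state z(0)
    dt = 1.0 / (len(controls) * steps_per_piece)
    for C, A, b in controls:          # one piece of the control law
        for _ in range(steps_per_piece):
            z = z + dt * C @ sigma(A @ z + b)
    return beta(z)                    # read-out map applied to z(1)

# Toy usage: q = 2 inputs, n = 4 states, k = 3 hidden units, p = 1 output, two control pieces.
rng = np.random.default_rng(0)
controls = [(rng.normal(size=(4, 3)), rng.normal(size=(3, 4)), rng.normal(size=3))
            for _ in range(2)]
alpha = lambda x: np.concatenate([x, np.zeros(2)])   # embed the input into the state space
beta = lambda z: z[:1]                               # project onto the first coordinate
print(neural_ode_map(np.array([0.5, -1.0]), controls, alpha, beta))
```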
The resulting controls, however, tend to exhibit fairly high complexity unlikely to be tolerated in practical applications. For example, they may be piecewise constant, but the number of pieces will grow like (1/)^d, where d depends polynomially on q <cit.>; other constructions make use of high-gain, high-frequency controls <cit.>, with similar complexity issues. §.§ A constructive approach In this paper, we approach the problem of function realization by neural dynamical systems from a constructive, bottom-up perspective. That is, instead of fixing a priori the class of functions to be approximated, we impose various structural restrictions on the system dynamics and then characterize the class of functions that can be realized by the system. To fix ideas, let us consider a cascade interconnection <cit.> of two dynamical systems of the following form: ẇ(t) = g(w(t),θ) ż(t) = f(z(t),w(t)), where g(·,θ) is a family of vector fields on ^m parametrized by θ∈^k. This system differs from (<ref>) in one key respect: The trajectory w(t) is not prescribed externally, but is instead generated internally according to (<ref>). Given the read-in map α and the read-out map β, as in (<ref>), we can realize different functions of x ∈^q by tuning the parameter θ [and, possibly, the initial condition w(0)]. Neural ODE models of this type have been considered in the literature. For example, Choromanski et al. <cit.> analyzed the case when w takes values in the orthogonal group O(n), the vector fields g(·,θ) are of the form g(w,θ) = wg̃(w,θ) for some finitely parametrized map g̃(·,θ) : ^n × n→ Skew(n), where Skew(n) is the set of all n × n skew-symmetric matrices with real entries, and f(z,w) = σ(wz), where σ is the coordinatewise application of some scalar nonlinearity σ : →, as in (<ref>). While Choromanski et al. did not explicitly characterize the class of functions that can be realized by this model, they showed that they enjoy certain stability properties that are advantageous in applications. We will use cascade models like (<ref>) as a starting point, but will replace the deterministic dynamics (<ref>) with a stochastic differential equation (SDE) driven by a multidimensional Brownian motion. The rationale for using such neural SDEs <cit.> is twofold: First, by using Brownian motion processes as generalized inputs <cit.>, we will be able to generate all the internal complexity we need while only varying the global parameters of the SDE. Second, it provides us with a sampling-based mechanism for constructing finite approximations <cit.>. Indeed, if F(x;ω) is a random function of x, then the expected value f(x) _ω[F(x;ω)] can be closely approximated by finite sums of the form f̂_N(x) = 1/N∑^N_i=1F(x;ω_i), where ω_1,…,ω_N are independent copies of ω, with the L^2 approximation error [(f(x)-f̂_N(x))^2] scaling as O(1/N) under suitable assumptions on the second moment of F(x;ω). (While finite approximation is not the focus of this paper, our earlier work <cit.> contains a discussion of sampling in the context of neural SDEs.) §.§ Previous results In an earlier work <cit.>, we have obtained some constructive realizability results for a certain class of neural SDEs. Briefly, we have considered n-dimensional Itô SDEs X_t = a(X_t; θ) t + b(X_t; θ) V_t, X_0 = x with finitely parametrized drift and diffusion coefficients satisfying the uniform ellipticity condition—the eigenvalues of the matrix b(x;θ)b(x;θ)^ are uniformly bounded away from 0. Let v be a fixed (nonrandom) vector in ^n. 
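Expectations of the form 𝔼[v^⊤ X_1] over solutions of such an SDE can be estimated by plain Monte Carlo; the Euler–Maruyama sketch below is our own illustration, with an arbitrary drift and a diagonal diffusion for simplicity:

```python
import numpy as np

def realize_f(x, v, a, b, theta, n_paths=2000, n_steps=200, seed=0):
    """Monte Carlo estimate of E[v^T X_1] for dX = a(X; theta) dt + b(X; theta) dV, X_0 = x."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    n = x.shape[0]
    X = np.tile(x, (n_paths, 1))                     # all sample paths start at X_0 = x
    for _ in range(n_steps):
        dV = rng.normal(scale=np.sqrt(dt), size=(n_paths, n))
        X = X + a(X, theta) * dt + b(X, theta) * dV  # Euler–Maruyama step (diagonal diffusion)
    return (X @ v).mean()

# Toy example: Ornstein–Uhlenbeck-type drift a(x) = -theta * x, unit diffusion.
a = lambda X, theta: -theta * X
b = lambda X, theta: 1.0
x0 = np.array([1.0, -0.5])
v = np.array([1.0, 1.0])
print(realize_f(x0, v, a, b, theta=0.8))
```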
We say that, for each choice of θ, the above system realizes the function f(x;θ) [Y] = [v^ X_1] of the initial condition x. We have shown that f(x;θ) can be approximated by f(x;θ) ≈ c_1∫_^n v^ z ·exp(-c_2 I(x,z;θ)) z, where c_1,c_2 are some constants and where I(x,z; θ) is the minimum of the control energy 1/2∫^1_0 |v(t)|^2 t among all sufficiently regular (e.g., L^2) controls v : [0,1] →^n that transfer the state of the deterministic control-affine system ẋ(t) = a(x(t);θ) + b(x(t);θ)v(t) from x(0) = x to x(1) = z. Thus, the problem of characterizing the class of functions realized in this way reduces to analyzing deterministic minimum-energy control problems, and some upper and lower bounds on I(x,z;θ) are given in <cit.>. However, the assumption of uniform ellipticity is rather restrictive and, in particular, rules out the interesting cases when the diffusion process in (<ref>) is degenerate, yet the deterministic system (<ref>) is nevertheless completely controllable <cit.>. § THE BASIC MODEL AND ITS PROPERTIES We start by presenting a relatively simple model which will allow for a clean analysis. It has the following form: W_t B = a(W_t;θ) t + b(W_t;θ) ∘ V_t Z_t = h(W_t,t)Z_t t Y = Z_1 σ(W_1^ x) Here, (<ref>) is a Stratonovich SDE for an n-dimensional diffusion process (W_t) with a deterministic initial condition W_0 = w_0. It is driven by an m-dimensional Brownian motion (V_t), and its drift and diffusion coefficients are parametrized by a k-dimensional vector of parameters θ. We assume throughout that the drift and the diffusion coefficients are sufficiently well-behaved (e.g., Lipschitz) to guarantee existence and uniqueness of strong solutions of the SDE. The process (Z_t) is scalar, initialized at Z_0 = 1, and h : ^n × [0,1] → is a continuous function. The real-valued output Y is obtained by multiplying Z_1 by σ(W_1^ x). By adding an equation X_t = 0 initialized at X_0 = x, we can view the above system as a stochastic variant of (<ref>) with internal state (W_t,Z_t,X_t), read-in map α(x) = (w_0,1,x), and read-out map β(w,z,x) = zσ(w^ x). The output Y of (<ref>) is a random function of x. We take its expected value as the function realized by this system: f(x;θ) [Z_1σ(W^_1x)], where in the left-hand side we have explicitly indicated the dependence of this function on the parameters θ of the SDE (<ref>). The class of functions realized in this way can be characterized as follows: The function (<ref>) realized by the system (<ref>) can be expressed as f(x;θ) = [exp(∫^1_0 h(W_t,t) t)σ(W^_1 x)]. Moreover, let ^θ be the second-order linear differential operator ^θφ(w) ã(w;θ)^∇φ(w) + 1/2 tr{ b(w;θ)b(w;θ)^∇^2 φ(w)} where ã(w;θ) = a(w;θ) + 1/2∑^m_i=1∂ b_i/∂ w(w; θ) b_i(w; θ) is the Itô-corrected drift, and where b_i(w;θ) is the ith column of b(w;θ). Consider the PDE ∂ u/∂ w(w,t;x,θ) = ^θ u(w,t;x,θ) + h(w,t)u(w,t;x,θ) with the initial condition u(w,0; x,θ) = σ(w^ x). If the function w ↦σ(w^ x) is in the domain of ^θ, then f(x;θ) = u(w_0,1; x,θ). The equation (<ref>) for Z_t can be solved for each trajectory (W_t): Z_t = Z_0 exp(∫^t_0 h(W_s,s) s ) = exp(∫^t_0 h(W_s,s) s ). Substituting this into the expression for Y and taking expectations, we obtain (<ref>). The operator ^θ is the infinitesimal generator of the diffusion process (W_t) governed by the SDE (<ref>). 
Since w ↦σ(w^ x) is in the domain of ^θ, the function u(w,t;x,θ) = [exp(∫^t_0 h(W_s,s) s)σ(W^_tx)|W_0 = w] is a solution of (<ref>) subject to the initial condition u(w,0;x, θ) = σ(w^ x) by the (converse of) the Feynman–Kac theorem <cit.>. A more elaborate set-up, in the spirit of <cit.>, is as follows: W_t = a(W_t;θ) t + b(W_t;θ) ∘ V_t Z_t = σ(W_t Z_t) t Y = v^ Z_1. The processes (W_t) and (Z_t) in Eqs. (<ref>) and (<ref>) are now evolving in ^n × n and ^n, respectively, and v in (<ref>) is a fixed vector in ^n. With the initial condition W_0 = w_0 fixed, we introduce the read-in map α(x) = (w_0,x) and the read-out map β(w,z) = v^ z. The stochastic dynamics in (<ref>) generates the matrices of weights W_t, which are used to control the dynamics of internal activations Z_t in (<ref>). The cascade form of the system allows us to first generate the random trajectory (W_t) and then, conditionally on that, solve the ODE (<ref>). As before, we say that the model (<ref>) realizes the function f(x;θ) = [Y] = [v^ Z_1]. In contrast to (<ref>), where we were able to characterize the class of realized functions in a relatively clean manner, the determination of (Z_t), given a realization (W_t), involves solving a time-inhomogeneous ODE Z_t/ t = g_W_t(Z_t), Z_0 = x where we have defined g_W_t(z) σ(W_tz). We can compactly express the function realized by (<ref>) using the chronological exponential notation <cit.> as f(x;θ) = [v^exp(∫^1_0 g_W_t t)x ], which is just shorthand for the dependence of the solution of (<ref>) on the initial condition x and on the trajectory (W_t), but any further analysis would involve asymptotic expansions in terms of iterated integrals and Lie derivatives. §.§ The role of Lie theory The above construction is quite general as it does not specify the form of the drift and the diffusion coefficients a(w;θ) and b(w;θ) in (<ref>) and in (<ref>). A priori we would expect that, in order to make the model (<ref>) sufficiently expressive (i.e., to realize a sufficiently rich class of functions), we would need a and b to depend nonlinearly on both w and θ. This intuition, however, is misleading since we can let the functions h and σ do all the “nonlinear work” when the Lie algebra generated by the vector fields a(·;θ), b_1(·;θ), …, b_m(·;θ) is finite-dimensional. Thus, let be a finite-dimensional Lie algebra of vector fields on ^n and let g_1,…,g_d be a basis of , d =. For θ = (θ_ij)_i = 0,…,m, j = 1,… d, take a(w;θ) = ∑^d_j=1θ_0j g_j(w) and b_i(w;θ) = ∑^d_j=1θ_ij g_j(w), i = 1, …, m. Since is finite-dimensional, it is isomorphic to a subalgebra of gl(ℓ,) (the Lie algebra of ℓ×ℓ real matrices with the commutator bracket [A,B] = AB - BA) for some finite ℓ by Ado's theorem <cit.>. Thus, without loss of generality (and by changing n if necessary) we may take g_i(w) = G_i w for some G_i ∈^n × n, so (<ref>) will take the form W_t = A(θ) W_t t + ∑^m_i=1 B_i(θ)W_t ∘ V^i_t, where V^1_t,…,V^m_t are m independent scalar Brownian motion processes, and where A(θ) = ∑^d_j=1θ_0j G_j, B_i(θ) = ∑^d_j=1θ_ij G_j, i = 1, …, m. As an illustration of the expressive capabilities of this type of linear parametrization, we can consider Brockett's construction of a diffusion process on the sphere <cit.>: Let A,B_1,…,B_m be n × n skew-symmetric matrices and consider the n-dimensional Stratonovich SDE W_t = AW_t t + ∑^m_i=1 B_i W_t ∘ V^i_t. or the equivalent Itô SDE W_t = (A + 1/2∑^m_i=1B^2_i)W_t t + ∑^m_i=1 B_i W_t V^i_t. 
Applying Itô's rule gives (W^_t W_t) = W^_t (A+A^ + ∑^m_i=1B^2_i) W_ t + ∑^m_i=1 W^_t (B_i + B^_i) W_t V^i_t + ∑^m_i=1 B^_i B_i t = 0 so the Euclidean norm |W_t| stays constant for all t. Thus, if we choose w_0 ∈ S^n-1, then the random trajectory (W_t)_t ≥ 0 will be confined to S^n-1. The algebra Skew(n) of n × n skew-symmetric matrices has dimension d = n(n-1)/2, so we can express (<ref>) in the form (<ref>) with a vector θ of k = (m+1)n(n-1)/2 parameters. Similar considerations apply to the matrix-valued case. For example, if A,B_1,…,B_m are n × n skew-symmetric matrices and the matrix Stratonovich SDE W_t = AW_t t + ∑^m_i=1B_i W_t ∘ V^i_t, is initialized with W_0 = w_0 ∈ O(n), then all W_t will take values in O(n) as well <cit.>. With regards to the role of σ, let us consider the family of the vector fields g_W(z), defined in (<ref>). With typical choices of the nonlinearity σ : → (e.g., the hyperbolic tangent σ(r) = tanh r), the Lie algebra generated by will be infinite-dimensional. To see this, suppose that σ is C^∞. Let g,g' denote the vector fields g_W,g_W' for two matrices W,W' ∈^n. Then, using the formula ∂/∂ zσ(Wz) = σ^(1)(Wz)W for the Jacobian of g_W, where σ^(1)(z) diag(σ'(z_1),…,σ'(z_n)), a straightforward but tedious computation of the iterated Lie brackets ad^k+1_g g' = ad_g ( ad^k_g g'), k = 0, 1, … where ad^0_g g' g' and ad_g g' [g,g'], yields vector fields whose coordinates involve derivatives of σ of arbitrary orders. As an example, consider the case σ(r) = tanh r. Since the derivative of σ satisfies the relation σ'(r) = 1-σ^2(r), the kth iterated Lie bracket ad^k_g g' will involve polynomials of degree k+1 in the entries of σ(Wz) and σ(W'z). Hence, using the linear independence of the monomials 1,r,r^2,…, it is easy to see that the Lie algebra generated by will be infinite-dimensional. Moreover, the Lie algebra generated by may be infinite-dimensional even if σ is a polynomial. Consider, for example, σ(r) = 1 + r^3; then even for n=1 the Lie algebra generated by will contain polynomial vector fields of arbitrary degree (and thus any continuous function can be approximated arbitrarily well on any given compact set by some element of this Lie algebra <cit.>). This is in sharp contrast with the result of Leshno et al. <cit.> on the necessity of nonpolynomial continuous activation functions for universal approximation of continuous functions by neural nets with one hidden layer. §.§ Elaborations and extensions The basic model in (<ref>) can be extended in various ways. For instance, we can consider matrix-valued processes W_t and replace the output equation (<ref>) with Y = Z_1 v^σ(W_1 x), where W_t takes values in ^p × n and v ∈^p is a fixed vector. Another possibility is to let W_t = (U_t,W̃_t) take values in ^p ×^p × n and let Y = Z_1 U_1^σ(W̃_1 x). If h ≡ 0 in (<ref>), then the class of functions realized in this way is the closed convex hull of neural nets with one hidden layer and with arbitrarily many hidden units <cit.>. To see this, let (U^ν,W̃^ν), ν = 1,…,N, be independent copies of (U_1,W̃_1). (In other words, we just run N independent copies of the stochastic dynamics (<ref>) in parallel.) Then, assuming U_t evolves on the unit sphere S^n-1, W̃_t evolves on the orthogonal group O(n), and the nonlinearity σ : → is bounded, the function f̂_N(x) 1/N∑^N_ν = 1 (U^ν)^σ(W^ν x) will be a good approximation of f(x) = [U^_1 σ(W_1x)], for N sufficiently large, by the law of large numbers. Fig. 
<ref> shows a simple illustration of the ability of such models to realize neuron-like functions of the form f(x) = σ(w^ x). The role of the function h in (<ref>) is to bias or regularize the trajectories (W_t) in some manner, as far as their contribution to the value of f(x;θ) goes. For example, if some reference trajectory (ξ_t)_0 ≤ t ≤ 1 is given, we could take h(w,t) = - |w - ξ_t|^2. In that case, we have f(x;θ) = [σ(W^_1 x)exp(-∫^1_0 |W_t - ξ_t|^2 t )] which has the effect of penalizing those trajectories (W_t) that differ too much from the reference. The class of functions realized by the model (<ref>) will be in general richer than that realized by (<ref>). By way of illustration, consider the SDE (<ref>) for a diffusion process on the orthogonal group O(n). Suppose that the skew-symmetric matrices A,B_1,…,B_m are such that the deterministic system ẇ(t) = (A + ∑^m_i=1v_i(t)B_i) w(t) is controllable on O(n) <cit.>, i.e., any two matrices w_0,w_1 ∈ O(n) can be joined by a curve lying along the trajectory w(t) of (<ref>) generated by some piecewise constant controls v_1(t),…,v_m(t). (For this, it suffices to consider the case w_0 = I_n <cit.>.) Now, the probability law of the random path (W_t)_t ≥ 0 in O(n) starting at W_0 = w_0 is concentrated on the closure of the set of all trajectories of (<ref>) with w(0) = w_0 generated by piecewise constant controls <cit.>, which by controllability covers the entire O(n). This, in turn, will allow us to sample high-complexity trajectories for controlling (<ref>). Finally, some comments on the generality of the considered models are in order. Both (<ref>) and (<ref>) have a cascade structure, where the stochastic dynamics of the weights W_t is autonomous (for each choice of the parameters θ), and then these weights are used as generalized inputs <cit.> to the controlled dynamics of the internal activations Z_t. Moreover, the parameters θ explicitly enter only the dynamics of W_t. However, we can consider other models of neural SDEs discussed in the literature, for example a stochastically excited continuous-time recurrent neural net X_t = (- X_t + σ(X_t)) t + ∑^m_i=1 B_i X_t ∘ V^i_t parametrized by m matrices B_1,…,B_m. In models of this type, there is no explicit separation of the internal state into weights and activations. However, we can use the results of Freedman and Willems <cit.> and Krener and Lobry <cit.> to show that such neural SDEs can be simulated, in a certain sense, by systems of the cascade type. To that end, consider the n-dimensional stochastic system X_t = f(X_t) t + g(X_t; θ) ∘ V_t Y_t = v^ X_t with initial condition X_0 = x, where the columns of the n × m matrix g(x;θ) are smooth vector fields on ^n with a smooth dependence on a k-dimensional parameter vector θ. Then we have the following result: Suppose that the Lie algebra generated by g_1(·;θ),…,g_m(·;θ), θ∈^k, has finite dimension d. Then there exist smooth vector fields b_1,…,b_d on ^d, smooth functions β_ij : ^k →, 1 ≤ i ≤ m, 1 ≤ j ≤ d, h : ^n ×^d →^n, φ : ^d ×^n →^n, and an almost surely positive stopping time τ, such that the cascade system W_t = ∑^m_i=1(∑^d_j=1β_ij(θ)b_j(W_t)) ∘ V^i_t Z_t = h(Z_t,W_t) t with W_0 = 0, Z_0 = x simulates (<ref>) up to time τ, i.e., Y_t = v^φ(W_t,Z_t), 0 ≤ t < τ. 
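For reference, the stochastically excited recurrent net above can be simulated directly with a stochastic Heun scheme, which is consistent with the Stratonovich interpretation; the sketch below is our own illustration with arbitrary matrices and step sizes:

```python
import numpy as np

def simulate_recurrent_net(x0, Bs, T=1.0, n_steps=500, seed=0):
    """Stochastic Heun scheme for dX = (-X + tanh(X)) dt + sum_i B_i X o dV_i (Stratonovich)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps

    def increment(x, dV):
        drift = -x + np.tanh(x)                              # recurrent-net drift
        noise = sum(dv * (B @ x) for B, dv in zip(Bs, dV))   # multiplicative excitation
        return drift * dt + noise

    X = x0.copy()
    for _ in range(n_steps):
        dV = rng.normal(scale=np.sqrt(dt), size=len(Bs))
        pred = X + increment(X, dV)                              # Euler predictor
        X = X + 0.5 * (increment(X, dV) + increment(pred, dV))   # Heun corrector
    return X

rng = np.random.default_rng(1)
n, m = 4, 2
Bs = [0.3 * rng.normal(size=(n, n)) for _ in range(m)]
print(simulate_recurrent_net(np.ones(n), Bs))
```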
The above theorem shows that we can simulate a system like (<ref>), at least locally, by a cascade system like (<ref>) by means of a smooth reparametrization θ↦ (β_ij(θ)) and a smooth “decoding map” φ(w,z) = x that converts the internal weights w ∈^d and internal activations z ∈^n into the overall state x ∈^n. Observe that the transformed parameters β_ij(θ) enter into (<ref>) linearly and that the vector fields b_1,…,b_d and the maps h and φ do not have any dependence θ. Moreover, since we have not imposed any restrictions on the drift vector field f (apart from the regularity conditions needed to ensure existence and uniqueness of Stratonovich solutions), the Lie algebra generated by the vector fields h(·,w) may be infinite-dimensional. We closely follow the proof of Theorem 1 in <cit.> (the special case of Abelian was considered in an earlier paper by Freedman and Willems <cit.>). Let g̃_1,…,g̃_d be a basis of , so that g_i(x;θ) = ∑^d_j=1β_ij(θ) g̃_j(x), i = 1,…,m for some smooth maps β_ij : ^k →. Let φ(w,z) e^w_1 g̃_1∘ e^w_2 g̃_2∘…∘ e^w_d g̃_dz, where e^t g̃_i denotes the flow map of g̃_i, i.e., e^tg̃_iz is the solution, at time t, of the ODE ż(t) = g̃_i(z(t)), z(0) = z. We have φ(0,x) = x. Using the methods of <cit.>, we can show that there exist smooth vector fields b_1,…,b_d on ^d and a neighborhood of (0,x), such that ∂φ/∂ w(w,z) b_i(w) = g̃_i(φ(w,z)), i = 1,…,m for all (w,z) ∈. Consequently, ∂φ/∂ w(w,z) (∑^d_j=1β_ij(θ)b_i(w)) = g_i(φ(w,z); θ) for all 1 ≤ i ≤ d and all θ∈^k. Now, φ(w,z) is equal to the solution, at t = d, of the time-inhomogeneous ODE ξ̇(t) = v(ξ(t),t), z(0) = z with v(z,t) w_d-i+1g̃_d-i+1(z), i-1 ≤ t < i and the Jacobian ∂φ/∂ z(w,z) is equal to the solution, at t=d, of the variational equation Λ̇(t) = ∂ v/∂ z(ξ(t),t)Λ(t), Λ(0) = I_n. Hence, it is invertible, so we take h(z,w) (∂φ/∂ z(w,z))^-1 f(φ(w,z)) for all (w,z) ∈. Let (W_t,Z_t) be the solution of (<ref>) with (W_0,Z_0) = (0,x), and let τ be the first exit time from : τinf{ t > 0 : (W_t,Z_t) ∉}. Then, since Stratonovich differentials obey the rules of ordinary calculus, for 0 ≤ t < τ we have X_t = φ(W_t,Z_t) = ∂φ/∂ w(W_t,Z_t) ∘ W_t + ∂φ/∂ z(W_t,Z_t) ∘ Z_t = f(φ(W_t,Z_t)) t + ∑^m_i=1 g_i(φ(W_t,Z_t); θ) ∘ V^i_t = f(X_t) t + ∑^m_i=1 g_i(X_t; θ) ∘ V^i_t and v^φ(W_t,Z_t) = v^ X_t = Y_t. This gives the desired simulation property. § ACKNOWLEDGMENTS T. Veeravalli would like to thank A.J. Havens for a PyTorch crash course. M. Raginsky would like to thank A. Belabbas for useful suggestions in the early stages of this work.
http://arxiv.org/abs/2307.01900v1
20230704195754
Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
[ "Isar Nejadgholi", "Svetlana Kiritchenko", "Kathleen C. Fraser", "Esma Balkır" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers Isar Nejadgholi   Svetlana Kiritchenko   Kathleen C. Fraser   Esma Balkır ========================================================================================================== Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy. It is imperative to have methods in place that can compare different models and identify over-reliances on specific concepts. We consider three well-known abusive language classifiers trained on large English datasets and focus on the concept of negative emotions, which is an important signal but should not be learned as a sufficient feature for the label of abuse. Motivated by the definition of global sufficiency, we first examine the unwanted dependencies learned by the classifiers by assessing their accuracy on a challenge set across all decision thresholds. Further, recognizing that a challenge set might not always be available, we introduce concept-based explanation metrics to assess the influence of the concept on the labels. These explanations allow us to compare classifiers regarding the degree of false global sufficiency they have learned between a concept and a label. Content Warning: This paper presents examples that may be offensive or upsetting. § INTRODUCTION In various natural language classification tasks, particularly in abusive language detection, certain concepts are known to be strong signals for the label of interest. These concepts are often over-represented in the respective class of the training set, making them susceptible to being learned as potential causes for the label. Consequently, the classifier over-relies on these concepts and ignores the broader context, leading to reduced generalizability <cit.>. Hence, to ensure models are robust and reliable, it is crucial to develop methods that can detect these over-reliances in various natural language classification tasks. In the context of abusive language detection, we consider the concept of negative emotions. The presence of an expression associated with negative emotion is an important signal for detecting abusive language and has been used in feature-based systems before <cit.>. Crucially, in some examples, negative emotion words might be the cause of the abusive label, i.e., the sentence might not be abusive if the negative emotion word is replaced with other words (e.g., I know these people. They are disgusting). However, at the global level, the relationship between negative emotion and abusive language is a strong correlation, not causation, as it is neither globally necessary nor globally sufficient for the label of abuse.[Phenomenon P is globally sufficient for phenomenon Q if whenever P happens, Q happens too. P is globally necessary for Q if whenever Q happens, P happens, too <cit.>.] Negative emotions are not globally necessary for the label of abuse because there are abusive sentences that do not contain any negative emotion words (e.g., offensive jokes, stereotyping and microaggressions). Also, words evoking negative emotions are not globally sufficient for a sentence to be abusive when interpreted in a broader context (e.g., We should admit that in our society, they are oppressed.). But an end-to-end model might learn that negative emotion in a sentence is globally sufficient for that sentence to be abusive. 
Such a classifier will struggle in classifying non-abusive sentences that contain negative emotion words leading to a lack of generalizability. An example of such a case is shown in Figure <ref>. Specifically, classifiers' over-reliance on the negative emotion signal can inadvertently discriminate against marginalized groups since their communications (e.g., discussing their experiences of discrimination and marginalization) can contain negative emotion words and, therefore can be wrongly considered abusive. We explore a scenario where a user, aware of the importance of negative emotions for their use case, wants to evaluate and compare a set of trained models. Their goal is to identify and eliminate those models that are more prone to generating inaccurate results due to an overemphasis on negative emotions as primary indicators. For that, we use concept-based explanations to test if a model has learned a false global causal relationship between a user-identified concept and a label where the true relationship is a correlation. Note that global causal relationships explain the model's output across an entire dataset, as opposed to local causal explanations, which concern the dependency of an individual prediction on a specific input feature. Concept-based explanations are a class of explainability methods that provide global explanations at the level of human-understandable concepts <cit.>. While local explanations help the users understand the model's reasoning for an individual decision with respect to the input features, global explanations are critical in comparing the processes learned by models and selecting the one that best suits the needs of a use case <cit.>. Global explanations might be obtained at the level of input features through aggregating local explanations <cit.>. Alternatively, global-by-design methods (e.g., probing classifiers <cit.>) can be used to gain insights at higher levels of abstractions, such as linguistic properties or human-defined concepts.[Here, we use the term “feature” to refer to the latent representations of a semantic concept learned by a classifier.] Similar to most feature importance explainability methods (e.g, <cit.>), concept-based explanations are originally designed to measure the importance of a concept. The intuitive meaning of importance usually refers to correlation, and it can be interpreted differently based on two notions of causality: necessity and sufficiency <cit.>. Local explainability methods usually focus on features that are of high local necessity or high local sufficiency for the label <cit.>, thus considered important by human users. However, at the global level, all features must be interpreted in a larger context for accurate decision-making. We aim to determine if concept-based explanations can be utilized to evaluate whether a trained binary classifier for abusive language detection has learned a false global sufficiency relationship between the label and the concept of negative emotion. Our code and data are available at <https://github.com/IsarNejad/Global-Sufficiency/tree/main>. Our main contributions are: * We formalize the issue of over-reliance on a concept as falsely learned global sufficiency. For the task of an abusive language classifier, we consider concepts related to negative emotion as being important but not globally sufficient for the label of abuse. We discuss how learning these concepts as globally sufficient results in compromised classification accuracies. 
* Based on our formalization of false global sufficiency, as a baseline method, we measure the over-reliance of models on a human-defined concept using an unseen challenge set that contains the concept in both classes. Recognizing that various classifiers may have a distinct range of optimal decision thresholds, we assess the over-reliance on a concept across all possible decision thresholds and show that one of the classifiers over-relies on emotion-related concepts significantly more than the other two classifiers. * Taking the challenge set approach as a baseline for comparison, we propose novel concept-based explanation metrics, demonstrating that similar conclusions about the degree of false global sufficiency can be drawn using these metrics. Building on previous work, we modify the TCAV procedure to measure not only the feature's importance but also the extent of its impact on the label. We conclude that a concept-based method is preferable as it eliminates the need for manual data curation. § CONCEPT-BASED EXPLANATIONS Concept-based explanations evaluate the model’s decision-making mechanism at the level of a human-defined concept expected to be important for the task <cit.>. Specifically, we use the Testing Concept Activation Vectors (TCAV) method to measure the influence of a human-defined concept on the model's predictions <cit.>. The idea of TCAV is based on the observation that human-understandable concepts can be encoded as meaningful and insightful information in the linear vector space of trained neural networks <cit.>. A Concept Activation Vector (CAV), which represents the concept in the embedding space, is a vector normal to a hyperplane that separates concept and non-concept examples. Such a hyperplane is obtained by training a linear binary classifier to separate the representations of concept and non-concept examples in the embedding space. Although TCAV can be applied to all neural network classifiers, for simplicity we limit our experiments to binary RoBERTa-based abusive language classifiers. We choose the RoBerta-based models for their superior performance in processing social media data compared to other base language models <cit.>. The concept, C, is defined by N_C concept examples. Also, N_R random examples are used to define non-concept examples. The RoBERTa representations for all these examples are calculated using f_emb, which maps an input text to its [CLS] token representation. Then, P number of CAVs, υ_C^p, are generated, each through training a linear classifier that separates a sub-sample (with size N_c) of concept examples from a sub-sample of random examples (with size N_r) in the RoBERTa embedding space. The conceptual sensitivity of a label to the CAV, υ_C^p, at input x can be computed as the directional derivative S_C,p(x): S_C,p(x) = lim_ϵ→ 0h(f_emb(x)+ϵυ_C^p) -h(f_emb(x))/ϵ = ▽ h(f_emb(x)).υ_C^p where h maps the RoBERTa representation to the logit value of the class of interest. In this work, we use two metrics to specify the influence of the concept on the model's prediction. First, we calculate TCAV_dir, the fraction of inputs in a set of input examples X, for which the directional derivative S_C,p(x) is positive, i.e.: TCAV_dir^C,p = | x ∈ X:S_C,p(x)>0|/|X| TCAV_dir indicates the fraction of input examples for which the prediction scores of the model increase if the input representation is infinitesimally moved towards the concept representation. 
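In code, a CAV is simply the unit normal of a linear separator fit on embeddings, and TCAV_dir follows from the sign of a (finite-difference) directional derivative; the sketch below uses synthetic embeddings and a linear stand-in for the model's logit head, not the RoBERTa pipeline used in the experiments:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cav(concept_embs, random_embs):
    """Fit a linear separator and return its unit normal as the CAV."""
    X = np.vstack([concept_embs, random_embs])
    y = np.array([1] * len(concept_embs) + [0] * len(random_embs))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    w = clf.coef_.ravel()
    return w / np.linalg.norm(w)

def tcav_dir(inputs, f_emb, h, cav, eps=1e-3):
    """Fraction of inputs whose logit increases when the embedding is nudged towards the concept."""
    count = 0
    for x in inputs:
        e = f_emb(x)
        s = (h(e + eps * cav) - h(e)) / eps   # finite-difference directional derivative
        count += s > 0
    return count / len(inputs)

# Toy usage with synthetic 8-dimensional "embeddings" and a linear "logit head".
rng = np.random.default_rng(0)
f_emb = lambda x: x                          # stand-in for the [CLS] embedding map
w_head = rng.normal(size=8)
h = lambda e: float(w_head @ e)              # stand-in for the logit of the class of interest
concept = rng.normal(loc=1.0, size=(50, 8))  # concept examples (N_c = 50)
random_ = rng.normal(loc=0.0, size=(200, 8)) # random examples (N_r = 200)
cav = train_cav(concept, random_)
print(tcav_dir(rng.normal(size=(100, 8)), f_emb, h, cav))
```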
This metric has been widely used to identify if the label has learned the concept as an important signal for the label <cit.>. Besides the widely used metric of TCAV_dir (referred to as TCAV score in previous work), we introduce a new metric, TCAV_mag, which considers the size of the directional derivatives, and measures the magnitude of the influence of the concept on the label for the positive directional derivatives: TCAV_mag^C,p = ∑_x∈ X,S_C,p(x)>0^S_C,p(x)/|X| We demonstrate in our results that TCAV_mag can be an indicator of the over-reliance of the label on the concept. When calculated for all CAVs, Equations <ref> and <ref> generate two distributions of scores with size P for the concept C. Using a t-test, these distributions are compared with the distributions of TCAV_dir and TCAV_mag calculated for random examples to check for statistical significance <cit.>. § FALSE GLOBAL SUFFICIENCY Phenomenon P is considered globally sufficient for phenomenon Q (P⇒ Q) if, whenever P occurs, Q also occurs <cit.>. In other words, global sufficiency refers to the extent to which a concept can explain the model's output across all instances in a held-out dataset, as opposed to the more studied topic of local sufficiency, which concerns the stability of an individual prediction for a given feature in perturbed contexts <cit.>. In a real-world setting, it is very unlikely that any single concept is truly sufficient for the label at a global level. In a binary classifier, a concept C is falsely learned as sufficient for the positive label if all inputs containing C are classified as positive by the classifier, regardless of context. This undesired dependency of the label on the concept suggests that the model has failed to learn how the concept interacts with context to influence the label. While this issue is closely related to spurious correlation, we use the term false global sufficiency because spurious correlation typically implies that the feature is irrelevant to the label, and a correlation is learned due to a confounding factor. In contrast, we consider the cases where the feature is relevant and important but not globally sufficient. To make this clearer, consider the case of abusive language detection and the concept of negative emotions; if the mere presence of negative emotions in a sentence always guarantees the prediction of the positive label (abuse), then the model has learned a false sufficiency relation between the concept and the label. It over-relies on this feature and ignores the context. To quantify falsely learned global sufficiency, we consider two scenarios: 1) where a balanced challenge set is available, which contains C in all of its examples (both classes), and 2) where no challenge set is available. For the first scenario we use the traditional approach of assessing accuracy of the classifier on a held-out test set. This approach provides a baseline in our evaluations. For the second scenario, we propose concept-based explanation metrics and compare them with the baselines obtained with the challenge sets. §.§ Quantifying the Falsely Learned Global Sufficiency with a Challenge Set Based on our definition of global sufficiency, one way to assess a model's over-reliance on a concept is to evaluate its performance on a held-out challenge set, 𝔽, containing both positive and negative examples of the concept of interest <cit.>. For simplicity, we assume that this challenge set consists of equal numbers of positive and negative examples. 
If a model learns a high global sufficiency between the concept C and the label of abuse, all examples in both positive and negative classes of a challenge set 𝔽 will be labeled as abusive. However, if the model interprets the concept in context, only the positive examples of 𝔽 will receive the abusive label. This indicates that in cases where the decision threshold of the classifier clearly separates the probability distributions of the two classes, the model has learned a low global sufficiency between the concept and the label. However, when comparing different classifiers, it is important to note that a reliable classifier should perform well (high precision and high recall) over a broad range of decision thresholds. This is because different applications may require different thresholds depending on the desired trade-off between precision and recall. For example, a classifier used to moderate social media content may need to prioritize precision over recall, which could mean using a high threshold to avoid false positives. On the other hand, a classifier used to detect all instances of abusive language may need to prioritize recall over precision, which would mean using a lower threshold to catch as many instances of abuse as possible, even if it means tolerating more false positives. Therefore, a classifier that is reliable over a wide range of decision thresholds can be more effective in different use cases, making it more practical and adaptable. Figure <ref> demonstrates two hypothetical cases for the distribution of probabilities that the classifiers might generate for the challenge set 𝔽. A classifier that learned low global sufficiency between C and the positive label generates easily separable distributions of probabilities for the positive and negative examples of 𝔽. In other words, for a large range of decision thresholds, the two classes of 𝔽 are separable, and high accuracy is achieved. Conversely, the classifier that has learned high global sufficiency between C and the positive label assigns a similar distribution of probabilities to both negative and positive examples. The two classes of 𝔽 are hardly separable, and for a wide range of thresholds, the accuracy is low. Note that in order for this classifier to be accurate, it requires a careful adjustment of the decision threshold with a labeled dataset. However, this process can be very costly. Based on this discussion, we argue that AUC_Challenge, the area under the curve of accuracy vs threshold, is a quantitative indicator of the separability of two classes of 𝔽 for all decision thresholds. According to our definition above, global sufficiency is negatively correlated with the separability of these classes. Therefore, False_Suff, described in Equation <ref>, is a quantitative metric that can be used to compare the degree of sufficiency learned by the classifiers based on 𝔽: False_Suff = 1-AUC_Challenge §.§ Quantifying the Falsely Learned Global Sufficiency with Concept-Based Explanations The practical application of the method detailed in Section <ref> can be limited due to the necessity of creating a custom challenge set. In this section, we use concept-based explanation to measure the falsely learned global sufficiency in a scenario where a challenge set is not available, but a lexicon representing the concept of interest exists. Following the approach of <cit.>, we employ short templates and the concept lexicon to generate unlabeled concept examples. 
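Generating concept examples from a short template and a lexicon is mechanical; a minimal sketch (the tiny word list is a placeholder for the filtered NRC-EIL entries described later):

```python
# Hypothetical illustration: fill a short template with lexicon words to create
# unlabeled concept examples for CAV training.
protected_groups = ["women", "trans people", "gay people", "black people",
                    "disabled people", "Muslims", "immigrants"]
emotion_words = ["disgusting", "horrible", "vile"]   # placeholder lexicon entries

concept_examples = [f"{group} are {word}."
                    for group in protected_groups
                    for word in emotion_words]
print(len(concept_examples), concept_examples[:3])
```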
Then, we utilize the method described in Section <ref> to compute two metrics: TCAV_dir and TCAV_mag. If the TCAV_dir value for the concept significantly deviates from that of random concepts, it indicates that the classifier has learned an association between the label and the concept. A significant difference in TCAV_mag compared to random concepts suggests a strong influence of the concept on the label, potentially causing the classifier to disregard the context when the concept is present. While the absolute values of these metrics might not be definitive, we show that they can be used to compare various classifiers in terms of the degree of global sufficiency they have learned for a concept. § SUFFICIENCY OF THE CONCEPT OF DESCRIBING PROTECTED GROUPS WITH NEGATIVE EMOTION In this section, we evaluate the metrics introduced in Section <ref> in explaining the extent of the falsely learned sufficiency between a human-defined concept and the positive label of the classifiers. We specifically consider the concept of describing a protected group with negative emotion words and refer to it as DesNegEm for brevity. We chose this concept because it is tightly related to hate speech and is expected to be important for more general definitions of harmful language, such as toxic, abusive or offensive. Still, it is not a sufficient concept for these labels and has to be interpreted in the broader context (as shown by examples in Table <ref>). We consider three RoBERTa-based binary classifiers, publically available and trained with large English datasets. The models are trained for general definitions of abusive language, toxicity or offensive language. We refer to these classifiers by their training datasets: Jigsaw, Civil Comments (or Civil for brevity) and TweetEval. These models are described in detail in Appendix <ref> Quantifying Sufficiency with a Challenge Set: To calculate the metric described in Section <ref>, we first use the HateCheck <cit.> test cases to build a challenge set for the concept of DesNegEm. For that, we use the F2 and F21 functionalities of HateCheck, i.e., the hateful and non-hateful examples that include this concept (Table <ref>). Figure <ref> shows the distribution of probabilities that the three classifiers generate for this challenge set. We observe that, for a large range of decision thresholds, all three classifiers label the majority of the examples of both classes of the challenge set with a positive label. In other words, all three classifiers have learned a high sufficiency between DesNegEm and the label of abuse. However, the extent of the learned sufficiency is different among the classifiers. The TweetEval classifier makes the least differentiation between the two classes and generates similar distributions of probabilities for negative and positive examples with the DesNegEm concept. Because of this overlap between probability distributions of positive and negative classes, the accuracy of this classifier is low over all ranges of thresholds, as shown in Figure <ref>. The false sufficiency learned by the Jigsaw and the Civil Comments classifiers is less extreme, and Jigsaw makes the most differentiation between the two classes. This observation can be quantified with the False_Suff metric (Equation <ref>) using the area under the curves in Figure <ref>. We obtain False_Suff of 0.41, 0.40, and 0.50 for Civil, Jigsaw and TweetEval, respectively. 
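These scores reduce to sweeping the decision threshold over the challenge-set probabilities and integrating the resulting accuracy curve; a hedged sketch of that computation, with synthetic probabilities standing in for the classifiers' outputs:

```python
import numpy as np

def false_sufficiency(probs_pos, probs_neg, thresholds=None):
    """False_Suff = 1 - area under the accuracy-vs-threshold curve on a balanced challenge set."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    accuracies = []
    for t in thresholds:
        tp = np.mean(probs_pos >= t)          # abusive examples labeled abusive
        tn = np.mean(probs_neg < t)           # non-abusive examples labeled non-abusive
        accuracies.append(0.5 * (tp + tn))    # balanced accuracy at this threshold
    auc = np.trapz(accuracies, thresholds)    # area under accuracy vs threshold
    return 1.0 - auc

# Toy usage: a classifier that barely separates the two classes of the challenge set.
rng = np.random.default_rng(0)
pos = rng.beta(8, 2, size=40)   # probabilities assigned to abusive examples
neg = rng.beta(6, 2, size=40)   # probabilities assigned to non-abusive examples
print(false_sufficiency(pos, neg))
```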
This metric shows a higher falsely learned sufficiency score for TweetEval than the two other classifiers, as expected. Based on these observations, we expect TCAV metrics to show lower scores for Civil and Jigsaw than the TweetEval classifier. Global Sufficiency with Concept-Based Explanations: Here, we use the results obtained with the challenge set as a baseline to evaluate the TCAV-based metrics. Concept examples are generated using the template `<protected_group> are <emotion_word>.', where <protected_group> is one of the protected groups women, trans people, gay people, black people, disabled people, Muslims and immigrants as identified by <cit.>. For <emotion_word>, we use the disgust and anger categories of the NRC Emotion Intensity Lexicon (NRC-EIL) <cit.>. We use the NLTK package[<https://www.nltk.org/>] to filter out words other than adjectives, past tense verbs and past participles, and also remove the words with emotion intensity lower than 0.5. After these steps, we are left with 368 concept words. We calculate the TCAV_dir and TCAV_mag scores for the concept of DesNegEm and compare those to the metrics calculated for random concepts with t-test for statistical significance. For random concepts, the concept examples are random tweets collected with stop words. In our implementation of the TCAV procedure, N_R = 1000, N_c= 50, N_r= 200 and N_C = 386 (number of filtered lexicon words). For input examples, X, we use 2000 tweets collected with stop words. As presented in Table <ref>, for the Civil classifier, TCAV_dir is not significantly different from the random concept, indicating that the concept information might not always be encoded as a coherent concept in the embedding space of this classifier. However, TCAV_mag is significantly higher than random, indicating that when the information is encoded well, the presence of this concept has a significant influence on the label of abuse. The other two classifiers have learned a strong association between the concept and the label, i.e., when the concept is added to a neutral context, the likelihood of the positive label increases. However, only in the case of the TweetEval classifier, TCAV_mag is significantly different from the random concepts, indicating a strong influence of the concept on the label, which might override the context. Therefore, for TweetEval the distribution of generated probabilities is mostly determined by the concept, not the context (similar distributions are obtained for the positive and negative examples of the challenge set). The other two classifiers consider the context to some extent and generate relatively different distributions of probabilities for the two classes. Discussion: For all classifiers, the presence of the concept describing a protected group with negative emotion words is a strong signal for the label of abuse. All classifiers struggle in considering the broader contexts in sentences such as `It is not acceptable to say <protected_group> are disgusting.' Among the three classifiers, TweetEval has learned a higher degree of sufficiency, leading to its worse performance on a challenge set containing this concept. The TCAV metrics can be used to compare the classifiers regarding the false sufficiency relationships they have learned. These metrics provide similar insights to what is learned from assessing global sufficiency with a challenge set. 
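The statistical test used to compare these score distributions against random concepts is a standard two-sample t-test; a minimal sketch with synthetic scores (the values below are placeholders, not measured results):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# P TCAV scores per concept, one score per trained CAV (synthetic values for illustration).
scores_concept = rng.normal(loc=0.08, scale=0.02, size=30)   # e.g. TCAV_mag for the concept
scores_random = rng.normal(loc=0.03, scale=0.02, size=30)    # TCAV_mag for random concepts

res = ttest_ind(scores_concept, scores_random)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}",
      "-> significant" if res.pvalue < 0.05 else "-> not significant")
```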
§ GLOBAL SUFFICIENCY OF FINE-GRAINED NEGATIVE EMOTIONS CONCEPTS In the previous section, we considered the concept of describing protected groups with negative emotions, which is tightly related to hate speech, and thus prone to be mistakenly learned as sufficient for the label of abuse. In this section, we test our proposed method for a less obvious case by disentangling the concept of emotions and hate speech. We focus on the concept of describing a (non-protected) group of people with negative emotions, which differs from the previous section in 1) removing the protected groups and replacing them with unprotected groups and 2) breaking down the emotion concept to more fine-grained levels. For fine-grained emotion concepts, we first develop a compact challenge set, examples of which are presented in Table <ref>. Since we consider non-protected groups in this challenge set, the examples are labeled as abusive/non-abusive as opposed to hateful/non-hateful in HateCheck (shown in Table <ref>). We assess the sufficiency of these concepts with the challenge set first and then compare the results to those of the proposed concept-based explanation metrics. Our goal is to investigate if the findings for the broad concept of describing protected groups with negative emotions can also be replicated at a more nuanced level of emotional granularity. We analyze the models for fine-grained categories of negative emotions, identified by <cit.>, namely disgust, anger, sadness, and fear. Similar pre-processing steps to what was described in Section <ref> were performed to filter the lexicon in each category of emotions. For the challenge set, we write five abusive and five non-abusive example templates for each emotion. Then we generate 40 abusive and 40 non-abusive examples by replacing <group> with one of the terms Canadians, Chinese people, doctors, teachers, school children, football players, my neighbours, and men to represent non-protected groups.[Though nationality may be considered a protected characteristic in some contexts, we include “Canadian” and “Chinese” here since nationality was not included in HateCheck and therefore not covered in the previous section.] Full list of examples of this challenge set is available in our GitHub repository mentioned in Section <ref>. Equivalently, for the TCAV procedure for concept templates, we use `They are <emotion_word>', instead of `<protected_group> are <emotion word>', which we used in Section <ref>. §.§ Results We first compare the three classifiers in handling negative emotions by investigating the results they produce for the challenge set. The False_Suff scores in Table <ref> show that TweetEval has learned the highest sufficiency between these concepts and the label of abuse and therefore achieves the lowest separability between the positive and negative classes of the challenge set. To further clarify this we show the accuracy vs threshold curve for the disgust category of the challenge set in Figure <ref>. We observe that TweetEval only reaches high accuracies for a small range of thresholds, i.e, it generates a similar distribution of probabilities for the positive and negative classes that contain the emotion of disgust. On the other hand, Jigsaw has learned the least global sufficiency and reaches high accuracy over a wide range of thresholds. Then we turn to the TCAV scores shown in Table <ref>. 
First, TCAV_dir shows that the Civil Comments classifier is not significantly sensitive to negative emotions, i.e., the feature of negative emotions is not fully learned as a coherent feature by this classifier. TweetEval, on the other hand, shows significant TCAV_dir and TCAV_mag scores, indicating that this classifier is not only sensitive to these concepts but the influence of the concept on the label is also significantly high. Jigsaw is the classifier that has learned the dependency between negative emotions and the label of abuse and therefore is sensitive to it (as indicated by TCAV_dir), but the magnitude of the influence of concept on the label is not significantly high, and the concept is interpreted in the larger context. Interestingly, the magnitude of the influence of disgust and anger is higher than fear and sadness for all classifiers, stating a higher association of disgust and anger with abusive language. These results are in line with conclusions drawn from assessing global sufficiency with a challenge set. § RELATED WORKS Most of the explainability works in NLP focus on feature importance methods to measure the importance of an input feature for the prediction at the local level <cit.>. However, recent works highlight that models should be assessed beyond feature importance criteria and that the reasoning behind the model's decisions should be investigated through explainability methods. Some examples of such explainability methods include counterfactual reasoning <cit.> or necessity and sufficiency metrics <cit.>. Also, there is a need to compare various classifiers at the global level. Although local explanations can be aggregated to generate global explanations, they are usually obtained through costly interventions and are not practical to be applied on a large scale. For global explanations, a popular approach is to train probing classifiers <cit.>. However, probes only identify whether a classifier has learned a feature but stay silent about whether the feature is used in predictions <cit.>. Amnesic probing is an extension of probing classifiers that identifies whether removing a feature influences the model's predictions, which relates to the notion of the global necessity of a human-understandable concept for a prediction <cit.>. Our work, on the other hand, focuses on the global sufficiency of concepts. While probing classifiers are applied to linguistic properties such as POS tagging, which are necessary for accurate language processing, we focus on human-defined semantic concepts that are known to be important for the label and test if they have been falsely learned as a sufficient cause for the label. Concept-based explanations have been introduced in computer vision and are mostly used to explain image classification models <cit.>. In NLP, concept-based explanations were used to measure the sensitivity of an abusive language classifier to the emerging concept of COVID-related anti-Asian hate speech <cit.>, to assess the fairness of abusive language classifiers in using the concept of sentiment <cit.>, and to explain a text classifier with reference to the concepts identified through topic modelling <cit.>. To the best of our knowledge, our work is the first that uses concept-based explanations to assess the sufficiency of human-defined concepts in text classification. § CONCLUSION Concept-based explanations can assess the influence of a concept on a model's predictions. 
We used two metrics based on the TCAV method: the TCAV direction score identifies whether the classifier has learned an association between a concept and a label, and the TCAV magnitude score measures the extent of the influence of the concept on the label. We showed that the best-performing abusive language classifiers learned that negative emotion is associated with abuse (positive direction) but did not over-rely on this concept (low magnitude); that is, they did not overestimate the global sufficiency of that concept. Our method can potentially be used for other NLP classification tasks. This approach is suitable for tasks where certain concepts are closely related to the label, but not enough to make a definitive determination. For example, in sentiment analysis, the price of products may have a strong connection to negative sentiment, but is insufficient to determine it. Further research should explore how concept-based explanations can help identify cases where certain concepts are relied upon too heavily in abusive language detection or other NLP classification tasks. § LIMITATIONS Our work has limitations. First, we use the TCAV framework, which assumes that concepts are encoded in the linear space of semantic representations. However, recent works show that in some cases, linear discriminants are not enough to define the semantic representations of concepts in the embedding spaces <cit.>. Future work should consider nonlinear discriminants to accurately represent concepts in the hidden layers of NLP neural networks. In this study, we used simple challenge sets to obtain a baseline for assessing the effectiveness of concept-based explanations in measuring false global sufficiency. Future work should focus on curating challenge sets by annotating user-generated data for the label and the concepts, in order to achieve a stronger baseline. Our work is limited to pre-defined concepts and requires human input to define the concepts with examples. However, defining concepts in TCAV is less restrictive than pre-defining features in other explainability methods, in that concepts are abstract ideas that can be defined without requiring in-depth knowledge of the model's inner workings or the specific features it is using. This allows for a more flexible approach where users can test the model regarding their concept of interest. Our method can only be applied to concepts that are known to be important for the classifier and are prone to being over-represented in training sets. It's important to check this condition independently before using our metrics. In cases where this condition does not hold true, the metrics we use in our work may be interpreted differently and may not be reliable indicators of global sufficiency. Also, we only considered two variations of emotion-related concepts. Other variations such as expression of negative emotions by the writer of the post should be investigated in future work. Further, our metrics are limited to cases where different classifiers are being compared since the most important information is in the relative value of the metrics. Our metrics should not be used as absolute scores for testing a classifier. Testing a classifier for false causal relationships is most valuable for detecting the potential flaws of the models. If our metrics do not reveal a false relationship between the concept and the label, that should not be interpreted as an indicator of a flawless model. 
§ ETHICAL STATEMENT As with most AI technology, this approach can be used adversely to exploit the system's vulnerabilities and produce toxic texts that would be undetectable by the studied classifier. Specifically, for methods that require access to the model's inner layers, care should be taken so that only trusted parties could gain such access. The obtained knowledge should only be used for model transparency purposes, and the security concerns should be adequately addressed. Regarding environmental concerns, contemporary NLP systems based on pre-trained large language models, such as RoBERTa, require significant computational resources to train and fine-tune. Larger training datasets, used for fine-tuning, usually result in better classification performance but also an even higher computational cost. To lower the cost of this study and its negative impact on the environment, we chose to use existing, publicly available classification models. acl_natbib § MODELS We include the following publicly available abusive language classification models in this study: * Jigsaw[<https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier/tree/main>]: a RoBERTa-based binary toxicity classifier fine-tuned on the combination of two datasets created by Jigsaw and used in Kaggle competitions on toxicity prediction in 2018-2020. The first dataset, Wikipedia Toxic Comments <cit.>, includes 160K comments from Wikipedia talk pages. The second dataset, Civil Comments <cit.>, comprises over 1.8M online comments from news websites. Both datasets are annotated for toxicity (and its subtypes) by crowd-sourcing. The model creators report the AUC of 0.98 and F1-score of 0.76 on the Wikipedia Toxic Comments test set. The model is released under CC BY-NC-SA 4.0. * Civil Comments[<https://huggingface.co/unitary/unbiased-toxic-roberta>] <cit.>: a multi-class RoBERTa-based model fine-tuned on the Civil Comments dataset to predict toxicity and six toxicity subtypes (severe toxicity, obscene, threat, insult, identity attack, and sexual explicit). A part of the dataset is annotated for identity groups targeted in toxic comments. The prediction model is trained to optimize the outcome fairness for the groups in addition to the overall accuracy. This is achieved through the loss function that combines the weighted loss functions for two tasks, toxicity prediction and identity prediction <cit.>. * TweetEval[<https://huggingface.co/cardiffnlp/twitter-roberta-base-offensive>] <cit.>: a RoBERTa-based binary classifier to detect offensive language, released as part of the TweetEval evaluation benchmark. The model was trained on 58M tweets and then fine-tuned on the Offensive Language Identification Dataset (OLID) <cit.>. The OLID training set comprises about 12K tweets. The model achieved the macro-averaged F1-score of 77.1 on the OLID test set.
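All three classifiers are publicly available on the Hugging Face hub under the model identifiers given in the footnotes above. As a usage sketch (assuming the transformers text-classification pipeline; label names, score normalization, and multi-label handling differ between the models, so the post-processing below is only indicative):

from transformers import pipeline

models = {
    "Jigsaw": "SkolkovoInstitute/roberta_toxicity_classifier",
    "Civil Comments": "unitary/unbiased-toxic-roberta",
    "TweetEval": "cardiffnlp/twitter-roberta-base-offensive",
}
text = "I can't believe how disgusting this is."

for name, model_id in models.items():
    clf = pipeline("text-classification", model=model_id, top_k=None)
    scores = clf(text)[0]                        # list of {"label": ..., "score": ...}
    top = sorted(scores, key=lambda s: -s["score"])[:2]
    print(name, [(s["label"], round(s["score"], 3)) for s in top])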
http://arxiv.org/abs/2307.02988v1
20230706134221
UAV Swarms for Joint Data Ferrying and Dynamic Cell Coverage via Optimal Transport Descent and Quadratic Assignment
[ "Kai Cui", "Lars Baumgärtner", "Burak Yilmaz", "Mengguang Li", "Christian Fabian", "Benjamin Becker", "Lin Xiang", "Maximilian Bauer", "Heinz Koeppl" ]
cs.NI
[ "cs.NI", "eess.SP" ]
UAV Swarms for Joint Data Ferrying and Dynamic Cell Coverage via Optimal Transport Descent and Quadratic Assignment Kai Cui, Lars Baumgärtner, Burak Yilmaz, Mengguang Li, Christian Fabian, Benjamin Becker, Lin Xiang, Maximilian Bauer, Heinz Koeppl This work has been co-funded by the LOEWE initiative (Hesse, Germany) within the emergenCITY center, the State of Hesse and HOLM as part of the "Innovations in Logistics and Mobility" programme of the Hessian Ministry of Economics, Energy, Transport and Housing (HA project no.: 1010/21-12), and the Hessian Ministry of Science and the Arts (HMWK) within the projects "The Third Wave of Artificial Intelligence - 3AI" and hessian.AI. The authors are with the Departments of Electrical Engineering and Information Technology, Computer Science, and Mechanical Engineering, Technische Universität Darmstadt, 64287 Darmstadt, Germany. (e-mail: {heinz.koeppl}@tu-darmstadt.de). ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Both data ferrying with disruption-tolerant networking (DTN) and mobile cellular base stations constitute important techniques for UAV-aided communication in situations of crises where standard communication infrastructure is unavailable. For optimal use of a limited number of UAVs, we propose providing both DTN and a cellular base station on each UAV. Here, DTN is used for large amounts of low-priority data, while capacity-constrained cell coverage remains reserved for emergency calls or command and control. We optimize cell coverage via a novel optimal transport-based formulation using alternating minimization, while for data ferrying we periodically deliver data between dynamic clusters by solving quadratic assignment problems. In our evaluation, we consider different scenarios with varying mobility models and a wide range of flight patterns. Overall, we tractably achieve optimal cell coverage under quality-of-service costs with DTN-based data ferrying, enabling large-scale deployment of UAV swarms for crisis communication. UAV swarms, data ferrying, cell coverage, alternating minimization, quadratic assignment problem § INTRODUCTION In recent years, Unmanned Aerial Vehicles (UAV) have become essential tools for professional first responders thanks to their mobility and versatility. Today they are mainly used for taking aerial pictures of disaster areas and sensor readings autonomously. In our highly connected world, communication is vital for coordination of civilians, professional responders and even (partially) autonomous systems or IoT. Thus, when communication infrastructure is disrupted, alternative means of communication must be quickly established. 
For such challenging conditions where connectivity is intermittent or unavailable for long periods, Delay-Tolerant Networking (DTN) provides a commonly used way to enable resilient communication without stable end-to-end connectivity. In this store-carry-and-forward approach, data is stored at intermediate nodes and forwarded to the next node until connectivity is available. Thus, message delivery depends on the mobility of the participating nodes, which act as "data mules" to physically transfer the data. UAVs can be rapidly deployed to affected areas during emergencies such as natural disasters. By carrying wireless communication equipment, UAVs can provide temporary communication coverage to DTNs and data ferrying in remote or disaster-stricken areas. UAVs can also fly to areas difficult to access, such as mountainous or forested areas, or survey areas and collect data useful for disaster response. Even though researchers have proposed UAVs as data ferries in the past <cit.>, there remain many open challenges for practical deployments. Finding optimal coverage of the nodes on the ground is essential. If the locations of ground nodes are not known, flight patterns must either be optimized for area coverage, or a sweep of the area must be performed to detect nodes before providing coverage. Furthermore, nodes on the ground might also be moving, and different radio link technologies have different characteristics such as communication range, bandwidth, etc. Limited resources available on the UAV, e.g., data storage capacities and battery power, are major challenges to take into account when planning flight trajectories and missions. Finally, a swarm of UAVs can increase the communication capabilities significantly, but requires additional effort to coordinate and cooperate. Here, we address the optimal coverage of nodes in a tractable manner, amenable to large swarms and dynamically moving user nodes. At the same time, another approach to communication via UAVs is based on the usage of UAVs as mobile cellular base stations. With the miniaturization of base station equipment, exploiting UAVs as aerial base stations to reinforce or complement cellular coverage has attracted significant interests in academia, industry, and standardization organizations such as the 3rd Generation Partnership Project (3GPP) <cit.>. Unlike the DTN scenario, aerial base stations are designed to provide reliable, energy-efficient cellular communication services with guarantees for e.g. communication data rates, reliability, delays, and information timeliness, even in challenging emergency situations. While aerial base stations can enable flexible deployments and relocation, establish strong line-of-sight (LoS) channels, and enhance resilience of ground networks amid malfunction, they also need to overcome several new challenges such as time-varying topologies, blockages in urban environments, and LoS interference <cit.>. Moreover, UAVs are often subject to limited size, weight, power, and radio resources. In order to deal with resource constraints in crisis scenarios, it is thus of importance to consider not only DTN-based communication, but also the availability of dynamic cellular coverage as a second layer of communication infrastructure reserved for high-priority data such as emergency calls or command and control of dynamically moving users on the ground. Therefore, it is important to enable both data ferrying and cellular coverage simultaneously, as depicted in Fig. <ref>. 
In this work, for cell coverage, we generalize optimal transport formulation for optimal UAV swarm cell association <cit.> to include UAV capacities that need not be fully assigned to all users. Furthermore, in contrast to previous work, we then solve both coverage and data ferrying jointly. The latter is achieved by first solving the coverage problem via alternating minimization for dynamically tracking capacity-constrained clusters, and then applying combinatorial optimization problems such as the traveling salesman problem (TSP) and related quadratic assignment problems (QAP) for inter-cluster data ferrying. Our contributions can be summarized as: (i) Formulating an optimal-transport-based problem and algorithm for optimal cell coverage via UAVs with novel capacity constraints; (ii) Enabling distance-optimized and delay-optimized data ferrying by transporting data between user clusters via combinatorial optimization problems; (iii) Combining data ferrying with cell coverage in order to optimally use available UAVs for both DTN and cellular base stations, giving extensive theoretical and empirical support for the algorithm by considering different scenarios with varying mobility models and a wide range of flight patterns. To the best of our knowledge, our work constitutes one of the first to combine data ferrying with cell coverage, with applicability in resource-constrained scenarios or crises. We begin by describing the underlying scenario and our proposed algorithms. We then move on to a theoretical and empirical evaluation, including a small real demonstration over a city model using indoor UAVs. Lastly, we close by discussing related and future work. § BACKGROUND In this section, we provide a brief background on the scenario and model on which we base our algorithms. Precise values for variables used in evaluation are given in Sec. <ref>. Consider a potentially large number of users on the ground, and many UAVs for data ferrying and cell coverage with limited capacity, such that each UAV may serve only a limited number of users. We assume that at any time t ∈ℕ there are users i = 1, …, M at time-dependent positions X_i(t) = (x_i(t), y_i(t)) ∈𝒳 in the area of operations 𝒳⊆ℝ^2, following a certain mobility model as will be discussed in Sec. <ref>. This can be understood e.g. as mobility data of a city's mobile phone users over the day, or mobility behavior obtained from surveying in crisis situations. Analogously, for each UAV j = 1, …, N we define UAV positions ξ_j(t) ∈𝒳, which can be moved at a maximum velocity v in order to achieve dynamic user coverage and data ferrying. We write 𝐗(t) for the vector of all user locations, and similarly ξ(t) for UAV locations. §.§ Delay-tolerant networking Delay- or disruption-tolerant networking (DTN) uses a Store, Carry & Forward architecture to cope with intermittent network connectivity. Data is transmitted as bundles on a hop-to-hop basis, thus, no stable end-to-end route between a source and a destination is needed. Instead, intermediate nodes act as data mules, physically carrying data around until it reaches its destination. Therefore, besides routing strategies such as epidemic flooding and PRoPHET <cit.>, node mobility plays an essential role in bundle delivery rates for such networks. The official standard protocol as defined by the IETF DTN Working Group is the Bundle Protocol (BP) version 7 (RFC 9171) <cit.>. 
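As a toy illustration of the store-carry-and-forward principle with epidemic exchange on contact (a dedicated DTN simulator handles this at scale in the evaluation; the node count, radio range, and random mobility below are arbitrary choices):

import numpy as np

rng = np.random.default_rng(3)

# Every node keeps a buffer of bundle ids; whenever two nodes are within radio
# range they exchange all bundles the other is missing (epidemic flooding).
n_nodes, radio_range, steps = 20, 1.0, 200
pos = rng.uniform(0, 8, size=(n_nodes, 2))
buffers = [set() for _ in range(n_nodes)]
buffers[0].add("bundle-0")                      # bundle created at node 0
destination = n_nodes - 1

for t in range(steps):
    pos += rng.normal(0, 0.1, size=pos.shape)   # crude random mobility
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if dist[i, j] <= radio_range:       # contact: epidemic exchange
                union = buffers[i] | buffers[j]
                buffers[i], buffers[j] = set(union), set(union)
    if "bundle-0" in buffers[destination]:
        print(f"delivered at step {t}")
        break
else:
    print("not delivered within the simulated horizon")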
The first goal of our work is thus to use UAVs as controllable mobile nodes to improve the DTN message delivery rates and delays. In our work, we consider epidemic flooding via WiFi, using the TheONE simulator <cit.>, where we generate one message every T_DTN > 0 time units at a random ground user, with another random ground user as destination. §.§ Cell coverage In contrast to DTN, cell coverage defines the problem of covering all users optimally at any time, using UAVs as mobile base stations. For example, one could provide LTE coverage via UAVs <cit.> for uplink and downlink data communication to a limited number of ground users within range r > 0. To solve the problem of providing optimal cell coverage, we assume that the underlying cell association problem of associating ground users to UAVs at any specific time instant is solved optimally through an optimal transport formulation. More specifically, we conceptually generalize the optimal transport procedure introduced in <cit.> by allowing UAV base stations to have arbitrary constraints on total capacity, i.e. not all UAVs must be fully assigned to users, and vice versa. §.§.§ Optimal cell association Using a cost function d such as distance on 𝒳 to define costs d(X, ξ) of associating users at position X with a UAV at position ξ, we define the optimal transport cost W_d ( 1/M∑_i=1^Mδ_X_i(t), 1/N∑_j=1^N δ_ξ_j(t)) (see e.g. <cit.>) between the locations (empirical distribution) of users 1/M∑_i=1^Mδ_X_i(t) and UAVs 1/N∑_j=1^N δ_ξ_j(t). Choosing d(X, ξ) = 1_(r, ∞)(‖ X - ξ‖_2), allows the optimal transport cost to formalize the notion of best achievable coverage under optimal cell association <cit.> for a maximum communication range of r > 0, assuming that the total capacity of all UAVs is equal to the total number of users on the ground, i.e. all UAVs must be fully assigned to users. §.§.§ Capacity constraints So far, the optimal transport formulation assumes the same amount of mass between UAVs and users, i.e. all UAVs must be fully assigned to users, and vice versa. To model differing capacity cases, we formally embed the user positions in the extended space 𝒳̅𝒳×{0, 1} by X̅(t) (X(t)^T, 0)^T, and similarly for UAVs ξ̅(t) (ξ(t)^T, 0)^T. We then instead use d(X̅, ξ̅) = 1_(r, ∞)(‖X̅ - ξ̅‖_2) ·1_{1}(X̅_3) ·1_{1}(ξ̅_3), where ·_3 denotes the last component (zero for real UAVs and users), to allow for UAVs to not be fully assigned to users (in case of more capacity than needed by users): This is done by adding virtual users at (0, 0, 1) ∈𝒳̅ to obtain the new cost c(𝐗̅(t), ξ̅(t)) = W^vec_d(𝐗̅(t), ξ̅(t)) W_d (1/MR∑_i=1^M δ_X̅_i(t) + (1 - 1/R) δ_(0, 0, 1), 1/N∑_j=1^N δ_ξ̅_j(t)) which formalizes the best achievable coverage for higher capacity of UAVs than required by users, i.e. if more capacity is available than required by users, the unused UAV capacity is assigned to a virtual user at zero cost. Here we add a mass of (1 - 1/R) virtual users, where R > 1 denotes the ratio between total UAV capacity and total ground users. An analogous argument with virtual UAVs and R < 0 allows for the case with less total UAV capacity than required by users. §.§.§ Quality of service (QoS) Lastly, in practice, another point of consideration is quality of service (QoS), e.g. the quality of downlink or uplink communication. We may optimize e.g. the sum of both the number of covered users and their QoS, using d_QoS(X̅, ξ̅) = 1_(r, ∞)(‖X̅ - ξ̅‖_2) ·1_{1}(X̅_3) ·1_{1}(ξ̅_3) + ‖X̅ - ξ̅‖_2 ·1_[0, r](‖X̅ - ξ̅‖_2) where QoS cost increases linearly with distance, i.e. 
QoS decays linearly. In practice, one could use e.g. a signal-to-noise ratio. § ALGORITHM DESIGN Next, we discuss the approach used in our work to achieve scalable algorithms with decentralized deployment capability for (i) maximizing cell coverage, and (ii) ferrying data between cell clusters in synchronous updates. §.§ Optimal cellular coverage Our model provides a mathematical foundation to the cell coverage problem where UAVs have a limited capacity of users. What remains is to find optimal UAV locations ξ(t) at any time t. We propose an iterative assignment method which finds local optima with theoretical convergence guarantees of the otherwise hard problem. More specifically, we apply alternating minimization (AM) on an optimal transport formulation of the coverage problem, generalizing algorithms for the NP-hard k-means <cit.> or k-medoids <cit.> problem in clustering, by limiting cluster sizes. A simple example is illustrated in Fig. <ref>. We find optimal positions of UAVs ξ for given current locations of users, such that when the UAVs track these positions over time, the cell coverage of users is optimized. Assume a set of M' users 𝐗̅∈𝒳̅^M', including virtual users. For any UAV (or cluster) i, we keep assignments C_i ⊆ [M'] of users to a UAV with 𝐂 = (C_i)_i ∈ [N]. If 𝐂 constitutes the currently optimal assignment of users to UAVs (an optimal transport plan <cit.>) given that the UAVs are located at ξ̅, then the optimal transport cost (<ref>) is given by J_𝐗(t)(𝐂, ξ) ∑_i ∈ [N]∑_j ∈ C_i d(ξ̅_i, X̅_j) = W^vec_d(𝐗̅(t), ξ̅(t)). Therefore, we need to minimize (<ref>) over all drone locations ξ and associated optimal transport plans 𝐂. For this purpose, we apply an AM algorithm to minimize the optimal transport coverage problem by iterating 𝐂^(n+1) = _𝐂 J_𝐗(t)(𝐂^(n), ξ^(n)) ξ^(n+1) = _ξ∈𝒳^N J_𝐗(t)(𝐂^(n+1), ξ^(n)) over iterations n. The algorithm monotonically improves (<ref>) and is thus guaranteed to converge. In other words, our algorithm repeatedly computes optimal transport solutions in (<ref>), and reassigns locations of UAVs in (<ref>) to monotonically improve the current assignment of locations, which gives us at any time t a locally optimal set of UAV locations. In the first step (<ref>), we compute optimal transport plans 𝐂 for fixed ξ via the POT library <cit.>, which uses a linear program formulation as in <cit.>. However, the exact computation of minimal ξ^(n+1) in the second step (<ref>) is out of reach via a facility location problem <cit.>, or a Wasserstein barycenter problem <cit.> even in the limiting relaxation of infinitely many UAVs (i.e. a large mean field UAV swarm). Hence, we instead use the medoids approach of choosing cluster centroids from a restricted set of user positions ξ^(n+1) = _ξ⊆ X(t) J_𝐗(t)(𝐂, ξ) where we write ξ⊆ X(t) whenever for all i = 1, …, N we have ξ_i ∈{X_1(t), …, X_M(t)}. The advantages of this approach are that we can easily compute the minimum by summing over distances and choosing the minimum _ξ⊆ X(t) J_𝐗(t)(𝐂, ξ) = _i=1^N _ξ_i ∈ X(t)∑_j ∈ C_i d(ξ̅_i, X̅_j), and that it ensures user-to-UAV connectivity also for DTN purposes by tracking one ground user's position via UAVs. 
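A minimal sketch of this alternating minimization is given below, using the POT library for the optimal transport step and the medoid rule for the UAV update. The capacity ratio R, communication range r, random positions, and stopping criterion are illustrative assumptions; the cost combines the out-of-range indicator with the linear in-range QoS penalty described above, and the virtual user absorbs the unused UAV capacity at zero cost.

import numpy as np
import ot                                        # Python Optimal Transport (POT)

rng = np.random.default_rng(0)
M, N = 60, 4                                     # ground users, UAVs
R, r = 1.5, 1.0                                  # assumed capacity ratio and range
X = rng.uniform(0, 8, size=(M, 2))               # user positions
xi = rng.uniform(0, 8, size=(N, 2))              # initial UAV positions

def cost(users, uavs):
    """Out-of-range indicator plus linear QoS penalty within range r."""
    dist = np.linalg.norm(users[:, None, :] - uavs[None, :, :], axis=-1)
    return np.where(dist > r, 1.0, dist)

# Masses: each real user carries 1/(M*R); one virtual user absorbs unused capacity.
a = np.full(M + 1, 1.0 / (M * R))
a[-1] = 1.0 - 1.0 / R
b = np.full(N, 1.0 / N)

for it in range(100):
    # Optimal transport plan for fixed UAV positions (virtual-user row costs 0).
    C = np.vstack([cost(X, xi), np.zeros((1, N))])
    plan = ot.emd(a, b, C)
    # Medoid step: move each UAV to the user position minimising the summed
    # cost over the users currently assigned to it.
    new_xi = xi.copy()
    for i in range(N):
        members = np.where(plan[:M, i] > 1e-12)[0]
        if len(members) > 0:
            totals = cost(X[members], X).sum(axis=0)   # one total per candidate position
            new_xi[i] = X[np.argmin(totals)]
    if np.allclose(new_xi, xi):
        break
    xi = new_xi

covered = (np.linalg.norm(X[:, None] - xi[None, :], axis=-1) <= r).any(axis=1).mean()
print(f"stopped after {it + 1} iterations, covered fraction ≈ {covered:.2f}")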
As a side remark, in order to avoid oscillating assignment of users that are out of reach to different clusters, in the algorithm we use a slightly adapted, "leaky" QoS distance measure d̂(X̅, ξ̅) = 1_(r, ∞)(‖X̅ - ξ̅‖_2) ·1_{1}(X̅_3) ·1_{1}(ξ̅_3) · (r + 0.01 ‖X̅ - ξ̅‖_2) + ‖X̅ - ξ̅‖_2 ·1_[0, r](‖X̅ - ξ̅‖_2) that is not constant on ‖X̅ - ξ̅‖_2 > r, to ensure cluster stability. We assign UAVs to track the desired cluster locations. We update clusters dynamically by rerunning the algorithm after each time interval of Δ T time units. Warm starting with the last centroid positions keeps the cluster locations stable over time even if users are moving, allowing for UAVs to consistently track a moving cluster. This is important, since the users on the ground move over time (Fig. <ref>), and we would avoid clusters to be bound to specific users or locations. Instead, clusters dynamically readjust as underlying users move, allowing for certain users to switch from one cluster to another, or for the centroid to move with the users. §.§ Optimal inter-cluster DTN data ferrying All that remains is to rotate UAVs over clusters to facilitate data ferrying between clusters. Assuming that DTN messages are spread among users of any single cell cluster, the goal is to transport data between all clusters. We realize this diffusion of data by fixing an update interval T_rot > 0 after which all UAVs change from their current cluster to another. Here, sufficiently high T_rot allows for good coverage during tracking of clusters by UAVs, and each rotation of UAV-to-cluster assignments also allows for a good spread of DTN messages. We introduce two formulations for optimization of energy and DTN latency. §.§.§ TSP-based rotation Given T_rot, a simple solution that minimizes energy costs associated with the total distance travelled by UAVs, and still ensures that all UAVs eventually reach all clusters, is given by the solution to a TSP. We find the shortest cycle in terms of distance travelled at the time of computation, and let all UAVs rotate through the cycle to eventually spread all messages between all clusters. As a result, if users are static or slow in comparison to the update time T_rot > 0, the total distance travelled by UAVs is D_tot = (N-1) ⟨𝐃, 𝐖⟩_F = (N-1) tr( 𝐃𝐖 ), assuming that any i-th and (i+1)-th clusters are connected on the cycle (see also Fig. <ref>), where 𝐃 is the current distance matrix between cluster centroids D_ij = d(ξ_i, ξ_j), 𝐖 is the weight matrix with entries w_ij = 1 whenever j = i + 1 N, and ⟨·, ·⟩_F denotes the Frobenius inner product defined by elementwise multiplication and summing of all entries. By considering any possible permutation of cluster indices, the TSP can then briefly be written as the QAP <cit.> min_π∈Sym([N])tr( 𝐃𝐏_π𝐖𝐏_π^T) where π is any permutation of [N] {1, …, N} from the symmetry group Sym of [N], and 𝐏_π its corresponding permutation matrix. The TSP can then be solved using any standard method, for which we use OR-Tools <cit.>. An advantage of this method is that the distances travelled by UAVs during rotation are minimized, i.e. the method saves energy. A disadvantage is its slow diffusion of information between clusters, as it takes up to N-1 rotations to transport an existing message from one cluster to another. A variant of this method is that we can even use less UAVs than required to cover all clusters by assigning each UAV to track a cluster, and filling the other clusters with virtual UAVs. 
However, a disadvantage of using less UAVs than clusters is that full coverage is no longer achieved, making this solution suitable only if not enough UAVs are available to achieve full coverage of users. The full algorithm is given in Algorithm <ref>. §.§.§ Binary-jumping rotation As an alternative, one may consider a more delay-optimized ferrying behavior which ensures that any existing message takes at most ⌈log N ⌉ iterations to rotate from one cluster to another. The solution is to rotate all UAVs by 2^⌈log N ⌉-1 clusters, then by 2^⌈log N ⌉-2 and so on. Assuming good intra-cluster connectivity, this ensures that in each rotation (except potentially the last), the messages cached in any cluster are duplicated from one cluster to another that does not have the messages yet, leading to an optimal, exponentially fast spread of messages. The total distance travelled is then given by D_tot = ⟨𝐃, 𝐖̂⟩_F = tr( 𝐃𝐖̂ ), where 𝐖̂ is now the weight matrix parametrizing all jumps with entries w_ij = 1 whenever j ∈{i + 2^k N | k=1, …, ⌈log N ⌉}, resulting again in a QAP min_π∈Sym([N])tr( 𝐃𝐏_π𝐖̂𝐏_π^T). The problem is similar in structure to the TSP and is likely hard, as its natural formulation is a QAP, which is known for being NP-hard in general and having few tractable special cases <cit.>. In particular, the matrix 𝐖̂ is a circulant matrix, and 𝐃 is symmetric. However, 𝐃 is neither anti-Monge nor Kalmanson, and hence does not fall into the few known, easy cases <cit.>. Since an analytic or exact solution is difficult as in TSP, we propose an approximate algorithmic solution. Though the problem is similar in structure to TSP, we are not aware of special algorithms for the considered QAP. We hence use a genetic algorithm (GA) <cit.> with crossover operators used in TSP to solve for the optimal ordering of clusters. More precisely, we apply the well-known cycle crossover (CX) as e.g. found in <cit.>. The GA is given in Algorithm <ref>. § ALGORITHMIC PROPERTIES We briefly state some properties of our algorithms, with proofs in the appendix for readability. Convergence of AM We know that the AM algorithm always converges, consistently resulting in good coverage. Algorithm <ref> monotonically converges in the cost (<ref>). Impact of data ferrying on coverage In the static or quasi-static clustered regime, where UAVs are significantly faster than users, v ≫ v_user, we can obtain bounds on the time-averaged (QoS-free) coverage c̅ = -∫_0^∞ c(𝐗(t), ξ(t)) dt. In the static case, for any p_cover > 0, there exists N > 0 and T_rot > 0 such that a fraction p_cover of time-averaged coverage can be achieved. We also note that in case of at least twice the capacity necessary to cover all users, one could split UAVs into two equal sets, and rotate the locations of each set alternatingly such that one set of UAVs always provides coverage while the other provides DTN coverage. However, in this work, we focus on constrained scenarios where capacity is more limited. Bounds on data ferrying delivery time In the connected cluster case assuming full and instantaneous intra-cluster connectivity (i.e. messages are shared fully among participants of each cluster), we can also show bounds on the DTN delivery time for the TSP-based and binary-jumping. In the static connected cluster case, for any T_rot > diam(𝒳)/v, the maximum time-to-delivery of DTN messages does not exceed 2 T_rot⌈log N ⌉ for binary-jumping, and 2 T_rot N for TSP-based rotation. 
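Both rotation schemes reduce to evaluating the quadratic-assignment trace objective for a fixed weight matrix: the cyclic matrix 𝐖 for TSP-based rotation and the jump matrix 𝐖̂ for binary jumping. The sketch below evaluates that objective and searches for a cluster ordering with a small genetic algorithm using cycle crossover; it is an illustrative stand-in for the OR-Tools TSP solver and the GA used above (population size, swap mutation, and elitist selection are assumptions).

import numpy as np

rng = np.random.default_rng(1)

def cycle_weights(n):
    """W for TSP-based rotation: w_ij = 1 iff j = i+1 (mod n)."""
    W = np.zeros((n, n))
    W[np.arange(n), (np.arange(n) + 1) % n] = 1.0
    return W

def jump_weights(n):
    """W-hat for binary jumping: jumps of 2^0, 2^1, ... clusters (mod n)."""
    W = np.zeros((n, n))
    for k in range(int(np.ceil(np.log2(n)))):
        W[np.arange(n), (np.arange(n) + 2**k) % n] = 1.0
    return W

def qap_cost(D, W, perm):
    """Trace objective tr(D P_pi W P_pi^T) for a permutation given as an index array."""
    P = np.eye(len(perm))[perm]
    return float(np.trace(D @ P @ W @ P.T))

def cycle_crossover(p1, p2):
    """CX: keep the cycle through position 0 from p1, fill the rest from p2."""
    child = np.full_like(p1, -1)
    i = 0
    while child[i] == -1:
        child[i] = p1[i]
        i = int(np.where(p1 == p2[i])[0][0])
    child[child == -1] = p2[child == -1]
    return child

def ga_qap(D, W, pop_size=40, generations=300):
    n = len(D)
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: qap_cost(D, W, p))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            i, j = rng.choice(len(elite), size=2, replace=False)
            child = cycle_crossover(elite[i], elite[j])
            a, b = rng.choice(n, size=2, replace=False)   # small swap mutation
            child[[a, b]] = child[[b, a]]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: qap_cost(D, W, p))

centroids = rng.uniform(0, 8, size=(6, 2))               # cluster centroids
D = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)

tour = ga_qap(D, cycle_weights(len(D)))                   # TSP-like ordering
jumps = ga_qap(D, jump_weights(len(D)))                   # binary-jumping ordering
print("TSP order", tour, "cost", round(qap_cost(D, cycle_weights(len(D)), tour), 2))
print("jump order", jumps, "cost", round(qap_cost(D, jump_weights(len(D)), jumps), 2))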
Time complexity The solutions we provided are scalable not only to many users, but also to many UAVs. For simplicity, with R=1 we have a theoretical complexity of 𝒪(max(N,M)^3) with an empirical complexity of 𝒪(max(N,M)^2) <cit.> for computation of optimal transport solutions in Algorithm <ref>, and complexity 𝒪(N^2) for computing the medoids. Analogously, for data ferrying, though the TSP and binary-jumping QAP are hard in theory, in practice they can be solved as long as one can evaluate the fitness of the QAP formulation as a sum of N ⌈log N ⌉∈𝒪(Nlog N) elements. Overall, our approach can be scaled up to large systems with many users and UAVs, which we also verify experimentally. § EVALUATION In this section, we evaluate our algorithm, comparing between TSP-based rotation, binary-jumping, as well as some baselines. The evaluation parameters are found in Table <ref>. §.§ Scenarios For evaluation, apart from illustrative examples of Gaussian user clusters, both on a circle (Fig. <ref>) and at city locations over a real city model (Fig. <ref>), we consider three advanced mobility models with a simulated duration of 43200 (12). Scenarios contain 100 nodes moving in an area of 8 x 8. All movement traces, 10 per mobility model, were generated using BonnMotion <cit.> and the following configurations: RWP As a simple baseline, we use a random waypoint model <cit.> where nodes move with speeds between 1 and 5, with maximum pause time of 30. SMOOTH For a more realistic representation of human mobility, SMOOTH <cit.> uses a power-law distribution of statistics with node clusters. We use 40 clusters, an alpha for the flight distribution of 1.45 (min: 1, max: 14000) and beta for the pause time distribution of 1.5 (min: 10, max: 1800). SLAW Another common model for simulating human mobility using a power-law distribution of statistics is SLAW <cit.>. Here, we use a pause time between 10 and 50, 500 waypoints, a distribution weight of 3.0, hurst of 0.75 and a beta value of 1.0. A snapshot is seen in Fig. <ref>. §.§ Numerical results In the following, we demonstrate the results of our work on joint data ferrying and cell coverage by numerical simulations. Some qualitative examples are found in Fig. <ref> and Fig. <ref>. Evaluation metrics We present the results of proposed approaches in comparison to reference schemes, considering: average time to delivery (TTD) for a message to reach its recipient; probability of delivery (p_deliver) for a message successfully reaching its recipient by end of scenario; time-averaged cell coverage (c̅); and total distance travelled by UAVs (D_tot). Since we are unaware of prior algorithms for joint cell coverage and DTN ferrying, as baseline we compare against the RWP heuristic, where UAVs repeatedly travel to a uniformly random point in 𝒳, and the circular heuristic, where UAVs i travel on a circle of radius r_i = 4· (i / N) around zero. Quantitative evaluation Overall, in Table <ref> we see that our approaches are effective at providing both DTN message delivery and coverage, compared to the heuristics. We find that binary jumping performs best. Due to mobility of nodes and possibly disconnected clusters, in contrast to Proposition <ref> for the static connected case, TSP-based rotation improves over binary-jumping in TTD, though the confidence interval does not allow a definitive conclusion. 
Meanwhile, binary-jumping excels in travelled distance since less overall jumps mean less passing time between full rotations, for users to move and change the optimal distances (<ref>), (<ref>) computed at the beginning of each rotation in the QAP. In Fig. <ref>, we see that TTD is improved by using more UAVs, and distance travelled scales inversely with rotation time as expected. Lastly, in Fig. <ref>, as predicted by Proposition <ref>, high coverage is achieved by tuning the number of UAVs or update interval. Simultaneously, delivery probabilities remain high. Scalability of algorithms Finally, we verify the scalability of our proposed approach as discussed in Section <ref>: In Fig. <ref>, the time to run Algorithm <ref> for 100 epochs n on a 2.4 Quad-Core Intel Core i5 is shown for various numbers of ground users M and UAVs N, and already remains feasible for real online scenarios (significantly below 100 Δ T). Nonetheless, the algorithm is not parallel and uses NumPy <cit.> and POT <cit.>, the run time of the algorithms scales close-to-linearly with the considered N and M. Parallelization and code optimization may further improve the run time. § RELATED WORK Existing literature has considered UAVs either for data ferrying using DTN or to provide flying base stations for cellular communications. A vast literature have already been devoted to addressing aerial base stations <cit.>. More recent results have considered on the one hand, advanced physical layer communication techniques such as high-throughput relaying using buffer onboard UAVs <cit.>, directive transmission using high-gain directional antenna and gimbal-assisted adaptive antenna steering <cit.>, beamforming using aerial reconfigurable intelligent surface (RIS) <cit.>, and exploiting multiple UAVs for cooperative multiple-input multiple-output communications <cit.>. On the other hand, to fully exploit the benefits of these physical layer techniques, joint optimization of UAV deployment, trajectory planning and resource allocation have been considered for optimization of cell coverage <cit.>, throughput, energy <cit.>, and mission completion time <cit.>. In addition to dynamic trajectory planning <cit.>, optimal selection of waypoints has been considered in <cit.> to fulfill requirements of e.g. UAV-aided monitoring and inspection, or urban airspace regulation. While UAV trajectory planning usually assumes knowledge of user locations, the impact of uncertain user locations was recently investigated in <cit.>. However, other than few exceptions <cit.>, such literature usually does not consider DTN-based data ferrying. DTNs and UAVs for data ferrying were considered by other researchers which propose various different flight patterns. Common approaches include covering areas with a triangle grid for a flight pattern, using genetic algorithms for path planning <cit.>, distributed path planning algorithms mixed with task division <cit.> or specially consider in-transit nodes besides large clusters <cit.>. Considering multiple UAVs for data ferrying, Arafat et al. <cit.> propose a location-aided routing that exploits predictions of UAV movements for data forwarding. Energy consumption is a major factor not only for data transmissions but also for the UAV flight time, thus, Zobel et al. <cit.> optimize inter-cluster data delivery based on communication performance and energy consumption of the UAV itself. While Lieser et al. 
<cit.> presented a networked UAV simulation framework with some basic flight patterns, it, like the other research discussed above, lacks a large-scale comparison of baseline flight patterns as well as advanced ones. None of the previously mentioned works combines data ferrying flight planning with secondary objectives such as providing cell services to ground nodes. § CONCLUSION We have proposed and demonstrated the use of UAVs for joint DTN-based data ferrying and cellular coverage. Our approach is based on alternating minimization for optimal-transport-based cell coverage, QAP-based optimal data ferrying, and a combination of both in a synchronous algorithm. The approach was verified numerically and against simpler baselines. For future work, one could exploit the mobility of users, as the current approach remains agnostic to user behavior. Battery levels could be considered, e.g. by adding another cluster for recharging UAVs. Lastly, one could investigate the effects of unknown user locations, also by means of mean-field limits. § APPENDIX Algorithm <ref> induces a monotonically decreasing cost J_𝐗(t)(𝐂^(n), ξ^(n)), since J_𝐗(t)(𝐂^(n+1), ξ^(n+1)) ≤ J_𝐗(t)(𝐂^(n+1), ξ^(n)) ≤ J_𝐗(t)(𝐂^(n), ξ^(n)) by (<ref>) and (<ref>) or (<ref>). By J_𝐗(t)(𝐂, ξ) ≥ 0 and monotone convergence, (J_𝐗(t)(𝐂^(n), ξ^(n)))_n ∈ℕ converges. First, there trivially exists N > 0 such that the achievable coverage is at least (1 + p_cover)/2, e.g. using N=M and singleton clusters (in practice, N may be lower). As it takes at most diam(𝒳)/v time to reach a new cluster, choose T_rot = k ·diam(𝒳)/v for any k > 1 to achieve a fraction (1 + p_cover)/2 · (k-1)/k of time-averaged coverage (as at least a (k-1)/k fraction of each rotation period, i.e. (k-1) ·diam(𝒳)/v time units, is spent stationary). Choosing k sufficiently large gives (1 + p_cover)/2 · (k-1)/k > p_cover as desired. For binary-jumping, it takes at most T_rot⌈log N ⌉ time until the next round of updates after a message spawns in cluster i. During the new round, the message spreads to all clusters, since for any cluster j, there exist k_1, …, k_⌈log N ⌉∈{0, 1} with ∑_l=1^⌈log N ⌉ k_l 2^l-1≡ j-i (mod N) by the binary representation of j-i mod N, and hence there is a route of jumps performed by (possibly different) UAVs from i to j. A similar argument holds for TSP-based rotation.
http://arxiv.org/abs/2307.00334v1
20230701130611
A game-theoretic approach to indistinguishability of winning objectives as user privacy
[ "Rindo Nakanishi", "Yoshiaki Takata", "Hiroyuki Seki" ]
cs.GT
[ "cs.GT" ]
A game-theoretic approach to indistinguishability of winning objectives as user privacy Rindo Nakanishi, Yoshiaki Takata, Hiroyuki Seki ======================================================================================= Game theory on graphs is a basic tool in computer science. In this paper, we propose a new game-theoretic framework for studying the privacy protection of a user who interactively uses a software service. Our framework is based on the idea that an objective of a user using software services should not be known to an adversary because the objective is often closely related to personal information of the user. We propose two new notions, 𝒪-indistinguishable strategy (𝒪-IS) and objective-indistinguishability equilibrium (OIE). For a given game and a subset 𝒪 of winning objectives (or objectives in short), a strategy of a player is 𝒪-indistinguishable if an adversary cannot shrink 𝒪 by excluding any objective O from 𝒪 as an impossible objective. A strategy profile, which is a tuple of strategies of all players, is an OIE if the profile is locally maximal in the sense that no player can expand her set of objectives indistinguishable from her real objective from the viewpoint of an adversary. We show that for a given multiplayer game with Muller objectives, both the existence of an 𝒪-IS and that of an OIE are decidable. § INTRODUCTION Indistinguishability is a basic concept in security and privacy, meaning that anyone who does not have the access right to secret information cannot distinguish target secret data from other data. For example, a cryptographic protocol may be considered secure if the answer from an adversary who tries to attack the protocol is indistinguishable from a random sequence (computational indistinguishability) <cit.>. In the database community, k-anonymity has been frequently used as a criterion for privacy of a user's record in a database; a database is k-anonymous if we cannot distinguish a target record from at least k-1 records whose public attribute values are the same as those of the target record <cit.>. In this paper, we apply indistinguishability to defining and solving problems on the privacy of a user who interacts with other users and/or software tools. Our basic framework is a multiplayer non-zero-sum game played on a game arena, which is a finite directed graph with an initial vertex <cit.>. Games have been used as the framework of the reactive synthesis problem <cit.>. A play in a game arena is an infinite string of vertices starting from the initial vertex and proceeding along edges in the game arena. To determine the result (or payoff) of a play, a winning objective O_p is specified for each player p. If the play satisfies O_p, then we say that the player p wins in this play. Otherwise, the player p loses. A play is determined when each player determines her strategy in the game. A strategy σ of a player p is called a winning strategy if the player p always wins by using σ, i.e., any play consistent with the strategy σ satisfies her winning objective regardless of the other players' strategies. One of the main concerns in game theory is to decide whether there is a winning strategy for a given player p and if so, to construct a winning strategy for p. Note that there may be more than one winning strategy for a player; she can choose any one among such winning strategies. In the literature, a winning objective is a priori given as a component of a game.
In this study, we regard a winning objective of a player is her private information because objectives of a user of software services are closely related to her private information. For example, users of e-commerce websites may select products to purchase depending on their preference, income and health condition, etc., which are related to private information of the users. Hence, it is natural for a player to choose a winning strategy that maximizes the indistinguishability of her winning objective from the viewpoint of an adversary who may observe the play and recognize which players win the game. For a subset O of winning objectives which a player p wants to be indistinguishable from one another, we say that a strategy of p is O-indistinguishable if an adversary cannot make O smaller as the candidate set of winning objectives. The paper discusses the decidability of some problems related to O-indistinguishability. Another important problem in game theory is to find a good combination of strategies of all players, which provides a locally optimal play. A well-known criterion is Nash equilibrium. A combination of strategies (called a strategy profile) is a Nash equilibrium if any player losing the game in that strategy profile cannot make herself a winner by changing her strategy alone. This paper introduces objective-indistinguishability equilibrium (OIE) as a criterion of local optimality of a strategy profile; a strategy profile is OIE if and only if no player can extend the indistinguishable set of winning objectives by changing her strategy alone. The paper also provides the decidability results on OIE. *Related work As already mentioned, this paper focuses on multiplayer turn-based non-zero-sum games. There is a generalization of games where each player can only know partial information on the game, which is called an imperfect information game<cit.>. While the indistinguishability proposed in this paper shares such restricted observation with imperfect information games, the large difference is that we consider an adversary who is not a player but an individual who observes partial information on the game while players themselves may obtain only partial information in imperfect information games. There are many privacy notions and a vast amount of literatures studying privacy issues. Among them, k-anonymity is one of the well-known notions originated in the database community. A database D is k-anonymous <cit.> if for any record r in D, there are at least k-1 records different from r such that the values of quasi-identifiers of r and these records are the same. Here, a set of quasi-identifiers is a subset of attributes that can `almost' identify the record such as {zip-code, birthday, income}. Hence, if D is k-anonymous, an adversary knowing the quasi-identifiers of some user u cannot identify the record of u in D among the k records with the same values of the quasi-identifiers. Methods for transforming a database to the one satisfying k-anonymity have been investigated <cit.>. Also, refined notions such as ℓ-diversity <cit.> and t-closeness <cit.> have been proposed by considering the statistical distribution of the attribute values. However, these notions suffer from so called non-structured zero and mosaic effect. Actually, it is known that there is no way of protecting perfect privacy from an adversary who can use an arbitrary external information except the target privacy itself. 
The notion of ε-differential privacy where ε>0 was proposed to overcome the weakness of the classical notions of privacy. In a nutshell, a query Q to a database D is ε-differentially private (abbreviated as ε-DP) <cit.> if for any person u, the probability that we can infer whether the information on u is contained in D or not by observing the result of Q(D) is negligible (very small) in terms of ε. (Also see <cit.> as comprehensive tutorials.) As the privacy protection of individual information used in data mining and machine learning is becoming a serious social problem (see <cit.> for example), methods of data publishing that guarantees ε-DP have been extensively studied <cit.>. Quantitative information flow (abbreviated as QIF) <cit.> is another way of formalizing privacy protection or information leakage. QIF of a program P is the mutual information of the secret input X and the public output Y of the program P in the sense of Shannon theory where the channel between X and Y is a program which has logical semantics. Hence, QIF analysis uses not only the calculation of probabilities but also program analysis such as type inference <cit.> and symbolic execution. We have mentioned a few well-known approaches to formally modeling privacy protection in software systems; however, these privacy notions, even QIF that is based on the logical semantics of a program, share the assumption that private information is a static value or a distribution of values. In contrast, our approach assumes that privacy is a purpose of a user's behavior. The protection of this kind of privacy has not been studied to the best of our knowledge. As an extension of rational synthesis, Kupferman and Leshkowitz have introduced the synthesis problem of privacy preserving systems <cit.>; the problem is for given multivalued LTL formulas representing secrets as well as an LTL formula representing a specification, to decide whether there is a reactive program that satisfies the specification while keeping the values of the formulas representing secrets unknown. This study treats the secrets as values as in the previous studies, and the approach is very different from ours. While we adopt Nash equilibrium, there are other criteria for local optimality of strategy profiles, namely, secure equilibrium (SE) <cit.> and doomsday equilibrium (DE) <cit.>. SE is a strategy profile such that no player can improve her payoff or punish any other player without loss of her own payoff by changing only her strategy. SE is used for a verification of component-based systems where each component has its own objective. DE is a strategy profile such that all players are winners and each player can make all players lose as retaliation when she becomes a loser because some other players change their strategies. SE and DE are secure in the sense that no player is punished by other player(s) and not directly related to user privacy. *Outline In Section <ref>, we define some notions and notations on multiplayer turn-based deterministic games used in subsequent sections. Moreover, in Section <ref>, we define an (_1,…,_n)-Nash equilibrium (NE) as a strategy profile which is simultaneously a NE for all objective profiles _1,…,_n. We show that whether there exists an (_1,…,_n)-NE is decidable in Theorem 2, which will be used in Section <ref>. In Section <ref>, we propose two new notions, namely -indistinguishable strategy (-IS) and objective-indistinguishability equilibrium (OIE). 
-IS is a strategy such that an adversary cannot shrink the set of candidate objectives of a player. OIE is a strategy profile such that no player can expand her own set of candidate objectives. In Section <ref>, we show that for a given multiplayer game with Muller objectives, both the existence of an -IS and that of OIE are decidable. In Section <ref>, we give a conclusion of this paper. § PRELIMINARIES A game arena is a tuple G =, where * P is a finite set of players, * V is a finite set of vertices, * (V_p)_p ∈ P is a partition of V, namely, V_i ∩ V_j = ∅ for all i≠ j (i,j ∈ P) and ⋃_p ∈ P V_p = V, * v_0 ∈ V is the initial vertex, and * E ⊆ V × V is a set of edges. As defined later, a vertex in V_p is controlled by a player p, i.e., when a play is at a vertex in V_p, the next vertex is selected by player p. This type of games is called turn-based. There are other types of games, concurrent and stochastic games. In a concurrent game <cit.>, each vertex may be controlled by more than one (or all) players. In a stochastic game <cit.>, each vertex is controlled by a player or a special entity nature who selects next nodes according to a probabilistic distribution for next nodes given as a part of a game arena. Moreover, a strategy of a player selects a next node stochastically. In this paper, we consider only deterministic turn-based games. *Play and history An infinite string of vertices v_0 v_1 v_2 ⋯ (v_i ∈ V, i ≥ 0) starting from the initial vertex v_0 is a play if (v_i,v_i+1) ∈ E for all i ≥ 0. A history is a non-empty (finite) prefix of a play. The set of all plays is denoted by and the set of all histories is denoted by . We often write a history as hv where h ∈∪{ε} and v ∈ V. For a player p ∈ P, let _p = { hv ∈| v ∈ V_p }. That is, _p is the set of histories ending with a vertex controlled by player p. For a play ρ=v_0v_1v_2⋯∈, we define (ρ) = { v∈ V|∀ i≥0. ∃ j≥ i. v_j=v}. *Strategy For a player p ∈ P, a strategy of p is a function σ_p: _p → V such that (v, σ_p(hv)) ∈ E for all hv ∈_p. At a vertex v∈ V_p, player p chooses σ_p(hv) as the next vertex according to her strategy σ_p. Note that because the domain of σ_p is Hist_p, the next vertex may depend on the whole history in general. Let Σ^p_ denote the set of all strategies of p. A strategy profile is a tuple = (σ_p)_p ∈ P of strategies of all players, namely σ_p ∈Σ^p_ for all p ∈ P. Let Σ_ denote the set of all strategy profiles. For a strategy profile ∈Σ_ and a strategy σ'_p ∈Σ^p_ of a player p∈ P, let [p ↦σ'_p] denote the strategy profile obtained from by replacing the strategy of p in with σ'_p. We define the function _:Σ_→ as _((σ_p)_p ∈ P) = v_0 v_1 v_2 ⋯ where v_i+1 = σ_p(v_0⋯ v_i) for all i ≥ 0 and for p∈ P with v_i ∈ V_p. We call the play _() the outcome of . We also define the function ^p_:Σ^p → 2^ for each p ∈ P as ^p_(σ_p) = { v_0 v_1 v_2 ⋯∈|v_i ∈ V_p ⇒ v_i+1=σ_p(v_0⋯ v_i) for all i≥ 0}. A play ρ∈^p_(σ_p) is called a play consistent with the strategy σ_p of player p. By definition, for a strategy profile = (σ_p)_p∈ P∈Σ_, it holds that ⋂_p∈ P^p_(σ_p)={_()}. *Objective In this paper, we assume that the result that a player obtains from a play is either a winning or a losing. Since we are considering non-zero-sum games, one player's winning does not mean other players' losing. Each player has her own winning condition over plays, and we model the condition as a subset O of plays; i.e., the player wins if the play belongs to the subset O. We call the subset O⊆ the objective of that player. 
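To fix intuitions, the following is a minimal sketch of these definitions for the special case of positional (memoryless) strategies, where each player's choice depends only on the current vertex; the outcome play is then eventually periodic, so the set of vertices it visits infinitely often can be computed directly. The encoding (dictionaries over vertex names) and the toy arena are illustrative assumptions, not notation from this paper.

from typing import Dict, Set, Tuple

# A turn-based arena: vertices, owner of each vertex, initial vertex, edge set.
Arena = Tuple[Set[str], Dict[str, int], str, Set[Tuple[str, str]]]

def outcome_inf(arena: Arena, choice: Dict[str, str]) -> Set[str]:
    """Vertices visited infinitely often by the unique play consistent with a
    positional strategy profile; choice[v] is the successor picked at v by the
    player owning v.  With positional strategies the play enters a cycle as soon
    as a vertex repeats, and the infinitely visited vertices are those on it."""
    _, _, v0, edges = arena
    first_seen, play = {}, []
    v = v0
    while v not in first_seen:
        assert (v, choice[v]) in edges, f"{v} -> {choice[v]} is not an edge"
        first_seen[v] = len(play)
        play.append(v)
        v = choice[v]
    return set(play[first_seen[v]:])

# A small two-player arena chosen purely for illustration.
V = {"v0", "v1", "v2"}
owner = {"v0": 1, "v1": 2, "v2": 1}
E = {("v0", "v1"), ("v0", "v2"), ("v1", "v0"), ("v1", "v2"), ("v2", "v2")}
arena: Arena = (V, owner, "v0", E)

choice = {"v0": "v1", "v1": "v0", "v2": "v2"}    # both players' choices merged
print(outcome_inf(arena, choice))                 # {'v0', 'v1'}  (play v0 v1 v0 v1 ...)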
In this paper, we focus on the following important classes of objectives: Let U⊆ V be a subset of vertices, c:V→ℕ be a coloring function, (F_k,G_k)_1≤ k≤ l be pairs of sets F_k,G_k⊆ V and ⊆ 2^V be a subset of subsets of vertices. We will use U, c, (F_k,G_k)_1≤ k≤ l and as finite representations for specifying an objective as follows: * Büchi objective: Büchi(U)={ρ∈|(ρ) ∩ U ≠∅}. * Co-Büchi objective: Co`-Büchi(U)={ρ∈|(ρ) ∩ U = ∅}. * Parity objective: Parity(c) = {ρ=v_0v_1v_2⋯∈|max({c(v_i) | i≥ 0}) is even}. * Rabin objective: Rabin((F_k,G_k)_1≤ k≤ l)= {ρ∈| 1≤∃ k≤ l. (ρ)∩ F_k=∅∧(ρ) ∩ G_k ≠∅}. * Streett objective: Streett((F_k,G_k)_1≤ k≤ l)= {ρ∈| 1≤∀ k≤ l. (ρ)∩ F_k≠∅∨(ρ) ∩ G_k=∅}. * Muller objective: Muller()={ρ∈|(ρ)∈}. Note that each objective defined in Definition <ref> is also a Muller objective: For example, (U)=({I⊆ V| I∩ U ≠∅}). We define the description length of a Muller objective () for ⊆V is |V|·||, because each element of , which is a subset of V, can be represented by a bit vector of length |V|. By Ω⊆ 2^, we refer to a certain class of objectives. For example, Ω = {Büchi(U) | U ⊆ V }⊆ 2^Play is the class of Büchi objectives. An objective profile is a tuple = (O_p)_p ∈ P of objectives of all players, namely O_p ⊆ for all p ∈ P. For a strategy profile ∈Σ_ and an objective profile = (O_p)_p ∈ P, we define the set _(,) ⊆ P of winners as _(,) = { p ∈ P |_() ∈ O_p }. That is, a player p is a winner if and only if _() belongs to the objective O_p of p. If p ∈_(, ), we also say that p wins the game with (by the strategy profile ). Note that it is possible that there is no player who wins the game or all the players win the game. In this sense, a game is non-zero-sum. We abbreviate Σ^p_,Σ_,^p_,_ and _ as Σ^p,Σ,^p, and , respectively, if is clear from the context. *Winning strategy For a game arena , a player p ∈ P and an objective O_p ⊆, a strategy σ_p ∈Σ^p of p such that ^p(σ_p)⊆ O_p is called a winning strategy of p for and O_p because if p takes σ_p as her strategy then she wins against any combination of strategies of the other players. (Recall that ^p(σ_p) is the set of all plays consistent with σ_p.) For a game arena and a player p∈ P, we define the set ^p_ of objectives permitting a winning strategy as ^p_={O |∃σ_p ∈Σ^p_. ^p_(σ_p)⊆ O}. For a player p, O∈^p_ means that p has a winning strategy for and O. On the existence of a winning strategy for a Muller objective, the following theorem is known. Let = be a game arena and O_p⊆ be a Muller objective of p∈ P. Deciding whether there exists a winning strategy of p for O_p is 𝖯-complete. For such non-zero-sum multiplayer games as considered in this paper, we often use Nash equilibrium, defined below, as a criterion for a strategy profile to be locally optimal. *Nash equilibrium Let ∈Σ be a strategy profile and =(O_p)_p∈ P be an objective profile. A strategy profile is called a Nash equilibrium (NE) for if it holds that ∀ p ∈ P. ∀σ_p ∈Σ^p. p∈([p↦σ_p],) ⇒ p ∈(,). Intuitively, is a NE if every player p cannot improve the result (from losing to winning) by changing her strategy alone. For a strategy profile ∈Σ, we call a strategy σ_p∈Σ^p such that p∉(,)∧ p∈([p↦σ_p],) a profitable deviation of p from . Hence, is a NE if and only if no player has a profitable deviation from . Because p∈(,) is equivalent to () ∈ O_p, a strategy profile ∈Σ is a NE for if and only if ∀ p∈ P. ∀σ_p∈Σ^p. ([p↦σ_p])∈ O_p ⇒()∈ O_p. We write Condition (<ref>) as (,). 
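Since each objective class above is determined by the set of vertices visited infinitely often, the winner set of a strategy profile is easy to evaluate once the outcome's infinitely-visited set is known. A small sketch continuing the previous one follows (the parity case is omitted, and the example objectives are arbitrary choices of ours):

from typing import Callable, Dict, List, Set, Tuple

def buchi(inf: Set[str], U: Set[str]) -> bool:
    return bool(inf & U)

def co_buchi(inf: Set[str], U: Set[str]) -> bool:
    return not (inf & U)

def rabin(inf: Set[str], pairs: List[Tuple[Set[str], Set[str]]]) -> bool:
    return any(not (inf & F) and bool(inf & G) for F, G in pairs)

def streett(inf: Set[str], pairs: List[Tuple[Set[str], Set[str]]]) -> bool:
    return all(bool(inf & F) or not (inf & G) for F, G in pairs)

def muller(inf: Set[str], family: List[Set[str]]) -> bool:
    return inf in family

def winners(inf: Set[str],
            objectives: Dict[int, Callable[[Set[str]], bool]]) -> Set[int]:
    """Players whose objective is satisfied by a play with the given Inf set."""
    return {p for p, satisfied in objectives.items() if satisfied(inf)}

inf = {"v0", "v1"}                                # e.g. the play from the previous sketch
objectives = {1: lambda s: buchi(s, {"v1"}),      # player 1: visit v1 infinitely often
              2: lambda s: muller(s, [{"v2"}])}   # player 2: Inf must be exactly {v2}
print(winners(inf, objectives))                   # {1}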
Below we define an extension of NE that is a single strategy profile simultaneously satisfying the condition of NE for more than one objective profiles. We can prove that the existence of this extended NE is decidable (Theorem <ref>), and later we will reduce some problems to the existence checking of this type of NE. For a game arena = and objective profiles _1,…,_n, a strategy profile ∈Σ is called an (_1,…,_n)-Nash equilibrium if (,_j) for all 1≤ j≤ n. Let = be a game arena and _j=(O_p^j)_p∈ P (1≤ j ≤ n) be objective profiles over Muller objectives. Deciding whether there exists an (_1,…,_n)-NE is decidable. A proof of this theorem is given in the Appendix. § INDISTINGUISHABLE STRATEGY AND RELATED EQUILIBRIUM In this section, we propose two new notions concerning on the privacy of a player: indistinguishable strategy and objective-indistinguishability equilibrium. We first define the set of possible objectives of a player in the viewpoint of an adversary that can observe restricted information on a game, a play and its result (i.e., which players win). We assume that an adversary guesses objectives of players from the three types of information: a play (𝗉), a game arena (𝗀) and a set of winners (𝗐) of the play. We use a word ∈{,,,} to represent a type of information that an adversary can use. For example, an adversary guesses objectives from a play and winners when =. We do not consider the cases where is a singleton because an adversary cannot guess anything from such information. In either case, we implicitly assume that an adversary knows the set V of vertices of the game arena. Let p ∈ P be a player, O_p ⊆ be an objective of p and Ω⊆ 2^ be one of the classes of objectives. We define the function :Σ→ 2^Ω as follows, which maps a strategy profile ∈Σ to the set of objectives of p that an adversary guesses: _Ω,() = { O⊆ V^ω|[t] (()∈ O p∈(,)) (()∉ O p∉(,)) }, ^p,O_p_Ω,() = { O∈Ω|[t] (p∈(,) O≠∅) (p∉(,) O∉^p) }, ^p,O_p_Ω,() = { O∈Ω|()∈ O (()∉ O O∉^p) }, ^p,O_p_Ω,() = { O∈Ω|[t] (()∈ O p∈(,)) (()∉ O p∉(,) O∉^p) }, where is any objective profile in which the objective of p is O_p. (Note that for a given whether p∈(, ) or not does not depend on objectives of the players other than p and hence we can use an arbitrary containing O_p.) The definitions of are based on the following ideas. When =, we assume that an adversary can observe the play and the set of winners but he does not know the game arena. The adversary can infer that the play () he observed belongs to the objective of a player p if the adversary knows that p is a winner, and () does not belong to the objective of p if p is not a winner. Note that the adversary does not know the real objective O_p of player p. For the adversary, any O ⊆ V^ω satisfying () ∈ O is a candidate of the objective of player p when p is a winner. Similarly, any O⊆ V^ω satisfying () ∉O is a candidate objective of p when p is not a winner. An adversary does not know the game arena because =, that is, he does not know the set of edges in the arena. Therefore, the candidate objective O cannot be restricted to a subset of plays (i.e., infinite strings of vertices along the edges in the game arena), but O can be an arbitrary set of infinite strings of the vertices consistent with the information obtained by the adversary. When =, an adversary cannot observe the play, but he knows the game arena and can observe the set of winners. If p is a winner, the adversary can infer that p has a strategy σ_p such that ^p(σ_p) ∩ O_p ≠∅. 
Because there exists such a strategy σ_p for all O_p other than ∅, he can remove only ∅ from the set of candidates for p's objective. On the other hand, if p is a loser, the adversary can infer that p has no winning strategy for O_p because we assume that every player takes a winning strategy for her objective when one exists. Therefore, when p loses, the adversary can narrow down the set of candidates for p's objective to the set of objectives without a winning strategy. The definition where = can be interpreted in a similar way. Note that we have _Ω,() = _Ω,() ∩_Ω,∩_Ω,. Since p∈(, ) is equivalent to ()∈ O_p, the above definitions can be rephrased as follows: ^p,O_p_Ω,() = { O⊆ V^ω|()∈ (O∩ O_p)∪(O∩O_p) }, ^p,O_p_Ω,() = { O∈Ω|[t] (O∈^p ⇒()∈ O_p) (O=∅⇒()∉ O_p)}, ^p,O_p_Ω,() = { O∈Ω| O∈^p ⇒()∈ O }, ^p,O_p_Ω,() = { O∈Ω|[t] ()∈ (O ∩ O_p)∪(O∩O_p) (O∈^p ⇒()∈ O ∩ O_p) }. The reader may wonder why O_p appears in this (alternative) definition in spite of the assumption that the adversary does not know O_p. The condition () ∈ O_p (or ∉O_p) only means that the adversary knows whether p is a winner (or a loser) without knowing O_p itself. Figure <ref> shows a 1-player game arena =({1},V,(V),v_0,E) where V={v_0,v_1,v_2} and E={(v_0,v_1),(v_0,v_2),(v_1,v_1),(v_2,v_2)}. We specify a Büchi objective by a set of accepting states, e.g., let ⟨ v_1 ⟩ denote ({v_1})={ρ∈ V^ω|(ρ) ∩{v_1}≠∅}. In this example, we assume the objective of player 1 is ⟨⟩=∅⊆. Therefore, player 1 always loses regardless of her strategy. There are only two strategies σ_1 and σ_2 of player 1. The strategy σ_1 takes the vertex v_1 as the next vertex at the initial vertex v_0 and then keeps looping in v_1. On the other hand, the strategy σ_2 takes v_2 at v_0 and then keeps looping in v_2. Let σ_1 be the strategy player 1 chooses. We have the play ρ=(σ_1) =v_0v_1v_1v_1⋯. We assume that an adversary knows that the objective of player 1 is a Büchi objective. Then, for each type of information ∈{,,,}, ^1,∅_,(σ_1) becomes as follows: * If =, then an adversary can deduce that v_1 is not an accepting state because he knows that (v_0v_1v_1⋯) = { v_1 } and player 1 loses. Therefore, we have ^1,∅_,(σ_1)= {⟨⟩,⟨ v_0 ⟩, ⟨ v_2 ⟩, ⟨ v_0,v_2 ⟩}. Note that in this game arena, there is no play passing v_0 infinitely often, and thus ⟨⟩ and ⟨ v_0 ⟩ (resp. ⟨ v_2 ⟩ and ⟨ v_0,v_2 ⟩) are equivalent actually. However, because an adversary does not know the game arena when =, he should consider every infinite string over V would be a play and thus ⟨⟩ and ⟨ v_0 ⟩ are different for him when =. In the other cases where an adversary knows the game arena, he also knows e.g. ⟨⟩ and ⟨ v_0 ⟩ are equivalent and thus he would consider Ω = {⟨⟩,⟨ v_1 ⟩,⟨ v_2 ⟩,⟨ v_1,v_2⟩}. * If =, then an adversary can deduce that neither v_1 nor v_2 is an accepting state because player 1 loses in spite of the fact that there are strategies that pass through v_1 or v_2 infinitely often. Therefore, ^1,∅_,(σ_1)={⟨⟩}. That is, an adversary can infer the complete information. * If =, then an adversary can deduce that ⟨ v_2 ⟩ does not belong to ^1,∅_,(σ_1) because player 1 did not take σ_2 to pass through v_2 infinitely often. That is, if ⟨ v_2 ⟩ were the objective of player 1, then it meant she chose losing strategy σ_1 instead of winning strategy σ_2, which is unlikely to happen. Therefore, we have ^1,∅_,(σ_1)= {⟨⟩, ⟨ v_1 ⟩, ⟨ v_1,v_2 ⟩}. * If =, we have ^1,∅_,(σ_1)= ⋂_∈{,,}^1,∅_,(σ_1) = {⟨⟩}. 
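The candidate sets of this example can be reproduced mechanically from the definitions. In the sketch below, the information types are abbreviated as the strings "pw", "gw", "gp", and "gpw", Büchi objectives are identified with their accepting sets, and the winnability check is hard-coded for this particular one-player arena, whose only reachable cycles are {v_1} and {v_2}; all of these encodings are our own illustrative choices, not the paper's notation.

```python
from itertools import combinations

V = ["v0", "v1", "v2"]

def all_subsets(base):
    return [frozenset(c) for r in range(len(base) + 1) for c in combinations(base, r)]

OMEGA_NO_ARENA = all_subsets(V)                  # arena unknown: all 8 accepting sets
OMEGA_WITH_ARENA = all_subsets(["v1", "v2"])     # arena known: v0 never recurs, so
                                                 # <U> and <U \ {v0}> coincide (4 classes)

# In this arena the only reachable cycles are {v1} and {v2} (hard-coded).
REACHABLE_INF_SETS = [frozenset({"v1"}), frozenset({"v2"})]

def winnable(U):       # Buchi(U) has a winning strategy iff a reachable cycle meets U
    return any(cyc & U for cyc in REACHABLE_INF_SETS)

# Player 1 follows sigma_1, so the play is v0 v1 v1 ... with Inf-set {v1};
# her true objective is the empty Buchi objective, hence she loses.
PLAY_INF = frozenset({"v1"})
PLAYER_WINS = False

def guess(omega):
    universe = OMEGA_WITH_ARENA if "g" in omega else OMEGA_NO_ARENA
    out = []
    for U in universe:
        in_obj = bool(PLAY_INF & U)
        if omega == "pw":       # play + winners: U classifies the play as O_p does
            ok = in_obj == PLAYER_WINS
        elif omega == "gw":     # arena + winners
            ok = (U != frozenset()) if PLAYER_WINS else not winnable(U)
        elif omega == "gp":     # arena + play
            ok = (not winnable(U)) or in_obj
        else:                   # "gpw": all three pieces of information
            ok = (in_obj == PLAYER_WINS) and ((not winnable(U)) or (in_obj and PLAYER_WINS))
        if ok:
            out.append(sorted(U))
    return out

for omega in ("pw", "gw", "gp", "gpw"):
    print(omega, guess(omega))
# pw  -> [], ['v0'], ['v2'], ['v0', 'v2']      gw  -> []
# gp  -> [], ['v1'], ['v1', 'v2']              gpw -> []
```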
*Ø-indistinguishable strategy Let = be a game arena, σ_p ∈Σ^p be a strategy of p∈ P, Ω⊆2^ be one of the classes of objectives defined in Definition <ref>, O_p ∈Ω be an objective of p and ∈{,,,} be a type of information that an adversary can use. For any set ⊆2^ of objectives such that ⊆⋂_∈Σ([p↦σ_p]), we call σ_p an -indistinguishable strategy (-IS) of p (for O_p and ). Intuitively, when a player takes an -IS as her strategy, an adversary cannot narrow down the set of candidates of p's objective from by the following reason. By definition, any objective O belonging to also belongs to ([p ↦σ_p]) for the combination of σ_p and any strategies of the players other than p. This means that such an objective O is possible as the objective of p from the viewpoint of the adversary who can use a type of information specified by . If an -IS σ_p∈Σ^p is a winning strategy of p, then we call σ_p a winning -IS of p. Figure <ref> shows a 1-player game arena =({1},V,(V),v_0,E) where V={v_0,v_1,v_2} and E= { (v_0,v_0),(v_0,v_1),(v_1,v_0),(v_1,v_2),(v_2,v_0) }. We use the same notation of Büchi objectives as Example <ref>, and in this example the objective of player 1 is ⟨ v_0 ⟩⊆. We assume that an adversary knows that the objective of player 1 is a Büchi objective. In this example, we focus on =. We examine the following three strategies of player 1, all of which result in player 1's winning. * Let σ_1∈Σ^1 be a strategy of player 1 such that (σ_1)=v_0v_0v_0⋯. Since player 1 wins, an adversary can deduce that v_0 must be an accepting state. Therefore, ^1,⟨ v_0 ⟩_,(σ_1) ={⟨ v_0 ⟩,⟨ v_0,v_1⟩,⟨ v_0,v_2 ⟩,⟨ v_0,v_1,v_2 ⟩}. For all ⊆^1,⟨ v_0 ⟩_,(σ_1), σ_1 is an -IS (for ⟨ v_0 ⟩ and =). * Let σ_2∈Σ^1 be a strategy of player 1 such that (σ_1)=v_0v_1v_0v_1⋯. In a similar way as the above case, an adversary can deduce that v_0 or v_1 (or both) must be an accepting state. Therefore, ^1,⟨ v_0 ⟩_,(σ_2) ={⟨ v_0 ⟩, ⟨ v_1 ⟩, ⟨ v_0,v_1⟩, ⟨ v_1,v_2 ⟩, ⟨ v_2,v_0 ⟩, ⟨ v_0,v_1,v_2 ⟩}. For all ⊆^1,⟨ v_0 ⟩_,(σ_2), σ_2 is an -IS. * Let σ_3∈Σ^1 be a strategy of player 1 such that (σ_3)=v_0v_1v_2v_0v_1v_2⋯. In a similar way as the above cases, an adversary can deduce that at least one of v_0, v_1, and v_2 must be an accepting state. Therefore, ^1,⟨ v_0 ⟩_,(σ_3) ={⟨ v_0 ⟩,⟨ v_1 ⟩, ⟨ v_2 ⟩, ⟨ v_0,v_1⟩,⟨ v_1,v_2 ⟩, ⟨ v_2,v_0 ⟩, ⟨ v_0,v_1,v_2 ⟩}. For all ⊆^1,⟨ v_0 ⟩_,(σ_3), σ_3 is an -IS. In the above example, ^1,⟨ v_0 ⟩_,(σ_1) ⊂^1,⟨ v_0 ⟩_,(σ_2) ⊂^1,⟨ v_0 ⟩_,(σ_3). Hence, the strategy σ_3 is the most favorable one for player 1 with regard to her privacy protection. This observation motivates us to introduce a new concept of equilibrium defined below. *Objective-indistinguishability equilibrium Let (O_p)_p∈ P be an objective profile and ∈{, , , } be a type of information that an adversary can use. We call a strategy profile ∈Σ such that ∀ p∈ P. ∀σ_p ∈Σ^p. ([p↦σ_p])⊆() an objective-indistinguishability equilibrium (OIE) for . If a strategy profile is an OIE for , no player can expand her () by changing her strategy alone. For a strategy profile ∈Σ, we call a strategy σ_p∈Σ^p such that ([p↦σ_p])⊈() a profitable deviation for OIE. If an OIE is an NE as well, we call an objective-indistinguishability Nash equilibrium (OINE). Figure <ref> shows a 3-player game arena =(P,V,(V_p)_p∈ P,v_0,E) where P={0,1,2}, V={v_0,v_1,v_2}, V_p={v_p} (p∈ P) and E= { (v_i,v_j) | i,j∈ P,i≠ j }. The objective of player p∈ P is ⟨ v_p ⟩, and hence the objective profile is =(⟨ v_0 ⟩,⟨ v_1 ⟩, ⟨ v_2 ⟩). 
Let σ_p∈Σ^p (p∈ P) be the strategies defined as follows: σ_0(hv_0) = v_1, σ_1(hv_1) = v_0, and σ_2(hv_2) = v_0 for every h∈∪{ε}. Let = (σ_1,σ_2,σ_3). It holds that ()=v_0v_1v_0v_1⋯ and (,)={0,1}. * For =, is not an OIE because there exists a profitable deviation σ'_1∈Σ^1 for OIE such that σ'_1(h)=v_2 for all h∈_1. While () does not visit v_2, player 1 can make the outcome visit v_2 infinitely often by changing her strategy from σ_1 to σ'_1. As a result, ^1,⟨ v_1 ⟩_,()= {⟨ v_0 ⟩, ⟨ v_1 ⟩, ⟨ v_0,v_1 ⟩, ⟨ v_1,v_2 ⟩, ⟨ v_2,v_0 ⟩, ⟨ v_0,v_1,v_2 ⟩} and ^1,⟨ v_1 ⟩_,([1↦σ'_1]) =^1,⟨ v_1 ⟩_,() ∪{⟨ v_2 ⟩}. * For =, is an OIE by the following reason: In general, when =, by definition ^p,O_p_Ω,() = Ω∖{∅} if p wins and ^p,O_p_Ω,() = ^p otherwise. (That is, an adversary cannot exclude any objective other than ∅ from candidate objectives of player p when p wins, while he can exclude objectives in ^p when p loses.) In this example, ^0,⟨ v_0 ⟩_Ω,() = ^1,⟨ v_1 ⟩_Ω,() = Ω∖{∅} since players 0 and 1 are winners. They have no profitable deviation for OIE, because each of them cannot become a loser unless other players change their strategies and thus ^p,⟨ v_p ⟩_Ω,([p↦σ'_p]) (p∈{0,1}) still equals Ω∖{∅} for any strategy σ'_p (p∈{0,1}). For player 2, ^2,⟨ v_2 ⟩_Ω,() = ^2 (= {⟨⟩, ⟨ v_2 ⟩}).[ In this example, player 2 can visit v_i (i=0,1) infinitely often by choosing v_i as the next vertex at v_2. Therefore, an objective such that v_0 or v_1 is an accepting state is winnable and hence ^2 = Ω∖{⟨⟩, ⟨ v_2 ⟩}. ] She also has no profitable deviation for OIE, because she cannot become a winner unless player 0 or 1 changes their strategies and thus ^2,⟨ v_2 ⟩_Ω,([2↦σ'_2]) still equals ^2 for any her strategy σ'_2. * For =, is not an OIE because for σ'_1∈Σ^1 defined above, ^1,⟨ v_1 ⟩_,()= {⟨⟩,⟨ v_0 ⟩, ⟨ v_1 ⟩, ⟨ v_0,v_1 ⟩, ⟨ v_1,v_2 ⟩, ⟨ v_2,v_0 ⟩, ⟨ v_0,v_1,v_2 ⟩} and ^1,⟨ v_1 ⟩_,([1↦σ'_1]) =^1,⟨ v_1 ⟩_,() ∪{⟨ v_2 ⟩}. * For =, is not an OIE because σ'_1∈Σ^1 is again a profitable deviation for OIE. § DECIDABILITY RESULTS Let = (P, V, (V_p)_p∈ P, v_0, E) be a game arena and = (O_p)_p∈ P be an objective profile over Muller objectives. For a subset ⊆ of Muller objectives, whether there exists an -IS of p for O_p is decidable. Moreover, the problem is decidable in polynomial time when = or when = and does not contain ∅. First we consider the case where =. We can show that a strategy σ_p∈Σ^p is an -IS of p for O_p, i.e. ⊆⋂_∈Σ^p,O_p_([p↦σ_p]), if and only if ^p(σ_p)⊆⋂_O∈((O ∩ O_p) ∪ (O∩O_p)) ∩⋂_O∈∩^p (O ∩ O_p). This can be shown as follows:[ We have confirmed this equivalence using a proof assistant software Coq. The proof script is available at <https://github.com/ytakata69/proof-indistinguishable-objectives>. ] Assume that ⊆⋂_∈Σ^p,O_p_([p↦σ_p]). Then, every O∈ should belong to ^p,O_p_([p↦σ_p]) for every ∈Σ. Then by the definition of ^p,O_p_, every O∈ and every ∈Σ should satisfy ([p↦σ_p])∈ (O∩ O_p)∪(O∩O_p) and whenever O∈^p, ([p↦σ_p])∈ O∩ O_p. Because ^p(σ_p) = {([p↦σ_p]) |∈Σ}, we have Condition (<ref>). The reverse direction can be proved similarly. Condition (<ref>) means that σ_p is a winning strategy of p for the objective equal to the right-hand side of the containment in Condition (<ref>). Because the class of Muller objectives is closed under Boolean operations, the right-hand side of Condition (<ref>) is also a Muller objective. Since deciding the existence of a winning strategy for a Muller objective is decidable as stated in Theorem <ref>, deciding the existence of an -IS is also decidable. 
(In this computation, deciding the existence of a winning strategy is used both for deciding whether O∈^p, i.e., O has a winning strategy, and for deciding whether the right-hand side of Condition (<ref>) has a winning strategy.) For the other cases, we can similarly show that σ_p∈Σ^p is an -IS of p for O_p if and only if the following conditions (<ref>), (<ref>), and (<ref>) hold when = ,,, respectively: ^p(σ^p) ⊆⋂_O∈((O ∩ O_p) ∪ (O∩O_p)), ^p(σ^p) ⊆⋂_O∈∩^p O_p ∩⋂_O∈∩{∅}O_p, ^p(σ^p) ⊆⋂_O∈∩^p O. Therefore in any cases, we can reduce the problem of deciding the existence of an -IS into the one deciding the existence of a winning strategy for a Muller objective. Since (_1)∩(_2)=(_1∩_2), the description lengths of the right-hand sides of Condition (<ref>) and Condition (<ref>) with not containing ∅ are not greater than the sum of those of and O_p.[ As an exception, if ∩^p=∅ (resp. ∩ (^p∪{∅}) = ∅), then the right-hand side of Condition (<ref>) (resp. (<ref>)) equals the set of all plays, which equals (2^V). In these cases, every strategy satisfies Conditions (<ref>) and (<ref>) and thus we can trivially decide the existence of an -IS. ] Since deciding the existence of a winning strategy for a Muller objective is solvable in polynomial time by Theorem <ref>, deciding the existence of an -IS when = or when = and does not contain ∅ is also solvable in polynomial time. When = or , we cannot guarantee that deciding the existence of an -IS is solvable in polynomial time because the complementation of a Muller objective in the right-hand sides of Conditions (<ref>) and (<ref>) may make the description length of the resultant objective O(|V|· 2^|V|) even when the description lengths of and are small. Similarly, when =, ∩^p=∅ and ∅∈, we cannot guarantee that deciding the existence of an -IS is solvable in polynomial time because the right-hand side of Condition (<ref>) becomes O_p. Let = (P, V, (V_p)_p∈ P, v_0, E) be a game arena and = (O_p)_p∈ P be an objective profile over Muller objectives. For a subset ⊆ of Muller objectives, whether there exists a winning -IS of p for O_p is decidable in polynomial time. By definition, σ_p∈Σ^p is a winning strategy of p for O_p if and only if ^p(σ_p)⊆ O_p. Therefore, by replacing the right-hand side of each of Conditions (<ref>)–(<ref>) with the intersection of it and O_p, we can decide the existence of a winning -IS in the same way as the proof of Theorem <ref>. Namely, σ_p is a winning -IS of p for O_p if and only if ^p(σ^p) ⊆ O_p ∩⋂_O∈ O (when = or ), ^p(σ^p) ⊆ O_p ∩⋂_O∈∩{∅}O_p (when =), ^p(σ^p) ⊆ O_p ∩⋂_O∈∩^p O (when =). When =, or , since the right-hand sides of Conditions (<ref>) and (<ref>) do not require complementation, the description lengths of them are not greater than the sum of the description lengths of and O_p. When =, the right-hand side of Condition (<ref>) is O_p ∩O_p=∅ if ∅∈, and O_p otherwise, and hence the description length of it is not greater than the description length of O_p. Therefore, in the same way as the cases where = or = and ∅∉ in Theorem <ref>, deciding the existence of a winning -IS is also solvable in polynomial time for any ∈{,,,}. For a game arena and an objective profile =(O_p)_p∈ P over Muller objectives, whether there exists an OIE for and is decidable. Condition (<ref>) in Definition <ref> is equivalent to the following condition: ∀ p∈ P. ∀σ_p ∈Σ^p. ∀ O ∈Ω. O ∈([p↦σ_p]) ⇒ O∈(). First we consider the case where =. By the definition of _Ω,, Condition (<ref>) for = is equivalent to the following condition: ∀ p∈ P. ∀σ_p ∈Σ^p. 
∀ O ∈Ω. if O∈^p, then (([p↦σ_p])∈ O∩ O_p⇒()∈ O∩ O_p); otherwise, (([p↦σ_p])∈(O∩ O_p)∪(O∩O_p) ⇒() ∈ (O∩ O_p)∪(O∩O_p)). For O∈ and p∈ P, let R^O_p be the objective defined as follows: R_p^O= O∩ O_p if O∈^p, and R_p^O= (O∩ O_p) ∪ (O∩O_p) if O∉^p. Let _O = (R^O_p)_p∈ P be the objective profile consisting of these objectives. Then, Condition (<ref>) can be written as ∀ O∈. (,_O). Therefore, this theorem holds by Theorem <ref>. For the other cases, the implication inside the scope of the three universal quantifiers in Condition (<ref>) is equivalent to the following implications: when = ([p↦σ_p])∈ (O∩ O_p)∪(O∩O_p) ⇒()∈ (O∩ O_p)∪(O∩O_p), when = if O∈^p, then ([p↦σ_p])∈ O_p⇒()∈ O_p; if O=∅, then ([p↦σ_p])∈O_p⇒()∈O_p, when = if O∈^p, then ([p↦σ_p])∈ O⇒()∈ O. These conditions can be rewritten as combinations of NE conditions in the same way as the case where =. Therefore, this theorem also holds for ∈{,,} by Theorem <ref>. For a game arena and an objective profile =(O_p)_p∈ P over Muller objectives, whether there exists an OINE for and is decidable. By the proof of Theorem <ref>, an OINE ∈Σ must satisfy the condition ∀ O∈. (,_O). Moreover, must also satisfy (,) because is a NE. Therefore, is an ((_O)_O∈,)-NE and thus, this theorem holds by Theorem <ref>. § CONCLUSION We proposed two new notions, the -indistinguishable strategy (-IS) and the objective-indistinguishability equilibrium (OIE). We then proved that the existence of an -IS and the existence of an OIE over Muller objectives are both decidable. To prove this, we defined an (_1,…,_n)-Nash equilibrium as a strategy profile that is simultaneously a Nash equilibrium for all objective profiles _1,…,_n, and proved that the existence of an (_1,…,_n)-Nash equilibrium is decidable. In this paper, we assume that an adversary is not a player but an individual who observes partial information on the game. He cannot directly affect the outcome of the game by choosing next vertices. We can consider another setting where the adversary is also a player, whose objective is to minimize the set of candidate objectives of the other players and who takes a strategy for achieving that objective. Developing a framework for this setting by extending the results shown in this paper is left as future work. § APPENDIX An objective O⊆ is prefix-independent if ρ∈ O ⇔ hρ∈ O for every play ρ∈ and history h ∈. The objectives defined in Definition <ref> are prefix-independent because (ρ) = (hρ) for every play ρ and history h. For a game arena = and v∈ V, let (,v)= (P,V,(V_p)_p∈ P,v,E) be the game arena obtained from by replacing the initial vertex v_0 of with v. For a game arena = with an objective profile =(O_p)_p∈ P, we define the game arena _p=({p,-p},V,(V_p,V_-p),v_0,E) with the objective profile (O_p,O_p) for each p∈ P. The game arena _p with the objective profile (O_p,O_p) is a 2-player zero-sum game such that its vertices and edges are the same as those of and the player -p is formed by the coalition of all the players in P ∖{p}. The following proposition is a variant of <cit.> adjusted to the settings of this paper. Let = be a game arena and =(O_p)_p∈ P be an objective profile such that O_p is prefix-independent for all p. Then, a play ρ=v_0v_1v_2⋯∈ is the outcome of some NE ∈Σ for , i.e., ρ=(), if and only if ∀ p ∈ P. ∀ i≥0. (v_i∈ V_p ∧ O_p ∈^p_(,v_i)) ⇒ v_iv_i+1v_i+2⋯∈ O_p. (⇒) We prove this direction by contradiction. Assume that a play ρ=v_0v_1v_2⋯∈ is the outcome of a NE =(σ_p)_p∈ P∈Σ for and there exist p∈ P and i≥0 with v_i ∈ V_p ∧ O_p ∈^p_(,v_i) such that v_i v_i+1 v_i+2⋯∉ O_p.
By the prefix-independence of O_p, ρ=()=v_0v_1v_2⋯∉ O_p and thus p∉_(,). Since O_p ∈^p_(,v_i), there exists a winning strategy τ_p of p∈ (,v_i). Let σ'_p be the strategy obtained from σ_p and τ_p as follows: Until producing v_0v_1⋯ v_i, σ'_p is the same as σ_p. From v_i, σ'_p behaves as the same as τ_p. Therefore, ([p↦σ'_p]) equals v_0v_1⋯ v_i-1π for some play π of (,v_i), and π∈ O_p because τ_p is a winning strategy of p in (,v_i). From prefix-independence of O_p it follows that ([p↦σ'_p])∈ O_p. This contradicts the assumption that is an NE. (⇐) Let ρ=v_0v_1v_2⋯∈ be a play on and assume that v_iv_i+1v_i+2⋯∈ O_p for all p ∈ P and i≥ 0 such that v_i∈ V_p ∧ O_p ∈^p_(,v_i). We define a strategy profile =(σ_p)_p∈ P as the one that satisfies the following two conditions: First, produces ρ as its outcome, i.e., ()=ρ. Second, if some player p deviates from ρ at v_j ∈ V_p (j≥ 0) and O_p∉^p_(,v_j), then all the other players (as a coalition) play from v_j according to a winning strategy of -p for (_p,v_j) and O_p. (Note that in a 2-player zero-sum game, there is always a winning strategy for one of the players, and thus there is a winning strategy of -p for (_p,v_j) and O_p when O_p∉^p_(,v_j).) We can show that the strategy profile is a NE as follows: Assume that some player p deviates from σ_p to a strategy σ'_p∈Σ^p, and _([p↦σ'_p]) deviates from ρ at v_j∈ V_p for some j≥0. If O_p∈^p_(,v_j), then by assumption, v_jv_j+1v_j+2⋯∈ O_p. By the prefix-independence, ρ=_()∈ O_p and thus σ'_p is not a profitable deviation. Otherwise, as described above, all the other players (as a coalition) punish the player p by taking a winning strategy of -p for (_p,v_j) and O_p, and hence p∉([p↦σ'_p]). Therefore σ'_p is not a profitable deviation also in this case. Let = be a game arena and _j=(O_p^j)_p∈ P (1≤ j≤ n) be objective profiles such that O_p^j⊆ is prefix-independent for all p∈ P and 1≤ j≤ n. Then, a play ρ=v_0v_1v_2⋯∈ is the outcome of some (_1,…,_n)-NE ∈Σ, i.e., ρ=(), if and only if ∀ p∈ P. ∀ i≥ 0. 1≤∀ j≤ n. (v_i∈ V_p ∧ O_p^j∈^p_(,v_i)) ⇒ v_iv_i+1v_i+2⋯∈ O_p^j. Corollary <ref> can be easily proved by Proposition <ref> and Definition <ref>. Theorem <ref>. Let = be a game arena and _j=(O_p^j)_p∈ P (1≤ j ≤ n) be objective profiles over Muller objectives. Deciding whether there exists a (_1,…,_n)-NE is decidable. By Corollary <ref>, there exists a (_1,…,_n)-NE if and only if there exists a play ρ=v_0v_1v_2⋯∈ satisfying Condition (<ref>). Algorithm <ref> decides the existence of a play satisfying Condition (<ref>). In Algorithm <ref>, we call a game arena _V' = ({1}, V', (V'), v_0, E') satisfying V' ⊆ V, v_0 ∈ V' and E'={ (v,v') ∈ E | v,v'∈ V'} a 1-player subgame arena of (induced by V'). Let us show the correctness of Algorithm <ref>. First, we show that when Algorithm <ref> answers Yes, the outcome of the strategy answered by Algorithm <ref> satisfies Condition (<ref>). Let ρ=__V'(σ_1) = v_0v_1v_2 ⋯∈ for the strategy σ_1 returned by Algorithm <ref>. Because ρ is the outcome of a winning strategy for O__V', we have ρ∈ O__V'. By the definitions of O__V' and O_v, ρ∈ O__V' ∀ v∈ V'. ρ∈ O_v ∀ v∈ V'. ∀ p ∈ P. ∀ 1≤ j≤ n. (v∈ V_p ∧ O_p^j∈^p_(,v))⇒ρ∈ O_p^j. Because ρ is a play in _V', we have v_i∈ V' for all i≥ 0. Thus, ρ∈ O__V'⇒ ∀ p∈ P. ∀ i≥ 0. ∀ 1≤ j≤ n. (v_i∈ V_p ∧ O_p^j∈^p_(,v_i)⇒ρ∈ O_p^j). Because O^j_p is prefix-independent, ρ∈ O_p^j v_iv_i+1v_i+2⋯∈ O_p^j. Therefore ρ satisfies Condition (<ref>). 
Conversely, we show that if there exists a play ρ satisfying Condition (<ref>), then at least one nondeterministic branch of Algorithm <ref> should answer Yes with a strategy σ_1 such that ρ=__V'(σ_1). Assume that there exists a play ρ=v_0v_1v_2⋯∈ satisfying Condition (<ref>). Let V'={v ∈ V |∃ i≥0. v=v_i}, and then construct the 1-player subgame arena _V'=({1},V',(V'),v_0,E') with E' = { (v,v') ∈ E | v,v'∈ V'} and the objective O__V'=⋂_v∈ V'O_v where for all v∈ V', O_v = ⋂_O^j_p∈^p_(,v),1≤ j≤ nO_p^j for p∈ P such that v∈ V_p. It is easy to see that ρ is a play of _V' and ρ∈ O__V' by Condition (<ref>). Therefore, any strategy σ_1 that produces ρ is a winning strategy of the player 1 for _V' and O__V', and Algorithm <ref> should answer Yes with a strategy σ_1 such that ρ=__V'(σ_1).
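The core subroutine behind Algorithm <ref> is deciding whether the guessed 1-player subgame arena admits a play whose Inf-set satisfies the intersected Muller objective. Below is a minimal sketch of that check, assuming the Muller objective is handed over explicitly as a family of accepting Inf-sets; computing this family (and the sets of winnable objectives at each vertex) would additionally require a solver for two-player Muller games, which is omitted here. The check uses the standard characterization that a set C is realizable as an Inf-set iff some vertex of C is reachable from the initial vertex and, inside C, every vertex can reach every vertex of C by a non-empty path.

```python
def has_winning_play(vertices, edges, v0, accepting_family):
    """1-player check: is there an infinite path from v0 whose Inf-set lies in
    accepting_family (an iterable of sets of vertices)?"""
    succ = {v: set() for v in vertices}
    for u, w in edges:
        succ[u].add(w)

    def reachable(sources, allowed):
        seen, stack = set(), [s for s in sources if s in allowed]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(w for w in succ[v] if w in allowed)
        return seen

    reach_from_start = reachable([v0], set(vertices))
    for C in accepting_family:
        C = frozenset(C)
        if not C or not C <= set(vertices) or not (C & reach_from_start):
            continue
        # C is realizable as an Inf-set iff, inside C, every vertex can reach every
        # vertex of C by a non-empty path (so some closed walk covers all of C).
        if all(C <= reachable(succ[v] & C, C) for v in C):
            return True
    return False

# Toy usage on the 1-player arena of Fig. <ref>: the cycles are {v1} and {v2}.
V = ["v0", "v1", "v2"]
E = [("v0", "v1"), ("v0", "v2"), ("v1", "v1"), ("v2", "v2")]
print(has_winning_play(V, E, "v0", [frozenset({"v1"})]))          # True
print(has_winning_play(V, E, "v0", [frozenset({"v1", "v2"})]))    # False
```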
http://arxiv.org/abs/2307.03123v1
20230706164906
Annealing for prediction of grand canonical crystal structures: Efficient implementation of n-body atomic interactions
[ "Yannick Couzinie", "Yusuke Nishiya", "Hirofumi Nishi", "Taichi Kosugi", "Yu-ichiro Matsushita" ]
quant-ph
[ "quant-ph", "cond-mat.dis-nn", "physics.chem-ph" ]
APS/123-QED couzinie.y.aa@m.titech.ac.jp Laboratory for Materials and Structures, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan Quemix Inc., Taiyo Life Nihombashi Building, 2-11-2, Nihombashi Chuo-ku, Tokyo 103-0027, Japan Laboratory for Materials and Structures, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan Quemix Inc., Taiyo Life Nihombashi Building, 2-11-2, Nihombashi Chuo-ku, Tokyo 103-0027, Japan Laboratory for Materials and Structures, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan Quemix Inc., Taiyo Life Nihombashi Building, 2-11-2, Nihombashi Chuo-ku, Tokyo 103-0027, Japan Laboratory for Materials and Structures, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan Quemix Inc., Taiyo Life Nihombashi Building, 2-11-2, Nihombashi Chuo-ku, Tokyo 103-0027, Japan Laboratory for Materials and Structures, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan Quemix Inc., Taiyo Life Nihombashi Building, 2-11-2, Nihombashi Chuo-ku, Tokyo 103-0027, Japan Quantum Material and Applications Research Center, National Institutes for Quantum Science and Technology (QST), 2-12-1, Ookayama, Meguro-ku, Tokyo 152-8552, Japan We propose an annealing scheme for crystal structures prediction (CSP) by taking into account the general n-body atomic interactions, and in particular three-body interactions which are necessary to simulate covalent bonds. The crystal structure is represented by discretizing the real space by mesh and placing binary variables which express the existence or non-existence of an atom on every grid point. We implement n-body atomic interaction in quadratic unconstrained binary optimization (QUBO) or higher-order unconstrained binary optimization (HUBO) problems and perform CSP by simulated annealing. In this study we successfully reduce the number of bits necessary to implement three-body interactions within the HUBO formulation of MoS_2 crystals. Further, we find that grand canonical simulation is possible by showing that we can simultaneously optimize for the particle density as well as the crystal structure using simulated annealing. In particular, we apply CSP to noble gasses, i.e. Lennard-Jones(LJ) solids, and show that the grand canonical calculation has a better time to solution scaling than its microcanonical counterpart. Annealing for prediction of grand canonical crystal structures: Efficient implementation of n-body atomic interactions Yu-ichiro Matsushita0000-0002-9254-5918 August 1, 2023 ======================================================================================================================= § INTRODUCTION There are various ways that annealing type or quantum algorithms have been applied in materials science. Optimizing the structure of N atoms whose interaction is governed by a potential function is classically done by simulated annealing (SA) performing a biased random walk on the energy landscape of the potential and accepting new states using the Boltzmann probability exp(-Δ E/kT) where Δ E is the change in energy and T the current temperature <cit.>. New states are generated by moving the atoms according to some set distribution. There is a range of variations of this scheme that either change how new candidate configurations are chosen or how the energy is calculated, a fairly recent review is given in <cit.>. 
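As a point of reference for the classical procedure just described, a single Metropolis-style move on continuous atomic coordinates might look as follows; the energy_fn interface, the Gaussian displacement, and the absorption of k_B into the temperature T are our own illustrative choices rather than a prescription from the cited literature.

```python
import math
import random

def metropolis_move(positions, total_energy, energy_fn, T, step=0.1):
    """Displace one randomly chosen atom and accept with probability
    min(1, exp(-dE / T)); returns the (possibly unchanged) total energy."""
    i = random.randrange(len(positions))
    old = positions[i]
    positions[i] = tuple(x + random.gauss(0.0, step) for x in old)
    new_energy = energy_fn(positions)          # user-supplied potential function
    dE = new_energy - total_energy
    if dE <= 0.0 or random.random() < math.exp(-dE / T):
        return new_energy                      # accept the move
    positions[i] = old                         # reject: restore the old position
    return total_energy
```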
The classical approaches have the advantage of not having to discretize the state space which at the same time precludes optimization on modern Ising machines based on quantum annealing (QA) <cit.> like D-Waves quantum annealer <cit.>. Computational chemistry is one of the fields where quantum computing on quantum hardware is expected to yield a significant speedup <cit.>. There is significant recent research effort on devising quantum algorithms for the electronic structure calculation problem and usually revolves around using the variational quantum eigensolver <cit.>. When it comes to quantum annealers the first hurdle is to encode chemically relevant information into an Ising hamiltonian. The direct approach is to use a second-quantized fermionic Hamiltonian describing the electronic structure and using the Brayvi-Kitaev transformation <cit.> to transform it into a qubit Hamiltonian and finally apply gadget-techniques to transform the Hamiltonian into a 2-local Hamiltonian of σ_z interactions <cit.>. For this approach, the number of required qubits scales exponentially with the number of atoms and thus is unfeasible for all but the smallest systems on current hardware. Another approach is to use density functional theory (DFT) <cit.>, or full configuration interaction (FCI) <cit.> calculations to build a quadratic unconstrained binary optimization problem (QUBO) whose solution solves the nuclear structure problem <cit.>. The indirect approaches are usually hybrid algorithms where QA is used to solve an isolated optimization problem of the otherwise classical calculation <cit.>. We provide here a scheme to encode three-body potentials that can be fully implemented on quantum hardware (after appropriately reducing the order of the Hamiltonian) and provide a first analysis using SA. We show that providing more physical information in the form of penalty terms does not necessarily speed up the calculation and provide a mathematically sound way to reduce cubic and higher-order terms that does not require any a priori physical information. § HUBO FORMULATION We look at a set X={x_1,…, x_n} in a unit cell corresponding to a uniform partitioning of each axis into g points and a set 𝒮={s_1,…,s_k} of atom species. Consider a set b_x^s of binary variables that we define such that if b_x^s=1 there is an atom of type s on x. Using a fitting n-body potential function we define the Hamiltonian to our higher-order unconstrained binary optimization problem as H = ∑_s∑_x∈X V^s_1(x)b_x^s +∑_s_1,s_2∑_x_1,x_2∈X V^s_i,s_j_2(x_1, x_2)b_x_1^s_1b_x_2^s_2 ⋯ + V_n(x_1,…, x_n) ∏_x∈X,s∈𝒮b_x^s. Finding the optimal nuclear structure on the lattice of the unit cell corresponds to finding an optimal binary string that minimizes this Hamiltonian, as energy contributions only arise if all binary variables involved in an interaction are 1. §.§ Penalty terms One can provide a priori physical information like the ground state particle number by adding penalty terms like P ( ∑_x∈Xb_x^s - 𝒞_s)^2 to the Hamiltonian for an appropriately large positive P and all s∈𝒮 where 𝒞_s is the target particle number for type s atoms. While these constraints simplify the potential energy landscape of the HUBO in <ref> they are not strictly necessary and leaving this term out allows for a grand-canonical calculation in which one optimizes not only for the nuclear configuration but for the optimal atom number 𝒞_s of the various types as well. Equivalently, knowing the chemical formula (e.g. 
Al_2(SO_4)_3) but not the unit cell size, a penalty term like P ( ∑_x∈Xb_x^s_1 - c_s_1,s_2∑_x∈Xb_x^s_2)^2 ensures that the ratios of atoms are respected, where c_s_1,s_2 is the target ratio (in the above example c_S,O=1/4). §.§ Simplification and quadratization If the interatomic potential has no cutoff, the constructed HUBO describes a fully connected model. To reduce the number of higher-order interactions we use the `deduc-reduc' method from <cit.>. In particular, we make the assumption that if the pairwise interaction between two binary variables is too large, then any higher-order interaction containing this pair can safely be set to 0 without influencing the ground state. The intuition behind this is that, for the interatomic potentials used in this work, the pairwise interaction rapidly increases if the atoms are too close; thus the ground state does not contain atoms on the two involved locations and we do not need to evaluate the higher-order terms. This simplification does not lose any generality with respect to the ground state of the HUBO and, in particular, does not require any a priori knowledge such as the atomic radii of the involved species. For quantum annealing and simulated quantum annealing simulations we still need to reduce the higher-order interactions to quadratic terms <cit.>. We use the pairwise covers approach <cit.>, which provides a single-step algorithm to reduce higher-order terms. A set of sets of binary variables, called the covering, is formed such that any non-zero interaction in the HUBO can be expressed as the union of two elements of the covering, so that by introducing an auxiliary variable for each element of the covering the HUBO can be expressed as a QUBO over the auxiliary variables. Finding the optimal covering is the famous set cover problem and thus NP-complete. We construct a covering by counting all pairs of interactions and adding the pairs that occur most often until all interactions are covered. § METHODS We find optimal binary strings for the HUBO problems using simulated annealing. In this section we outline the parameters and settings used for the optimization and the potentials we use. §.§ Simulated Annealing Simulated annealing is a classic algorithm for optimizing cost functions with several local minima <cit.>. We assume some basic knowledge of the algorithm and only discuss the specifics of our implementation. We use a geometric cooling schedule T(x) = T_max( T_min/T_max)^x/N_steps, x∈ [0, N_steps], where T_min and T_max are the minimum and maximum temperatures. The number of steps N_steps is the number of Monte Carlo steps per spin to perform. Choosing the right neighbourhood for a configuration in simulated annealing (i.e., defining the legal transitions of the Markov chain) is crucial, and generally one aims to have a smooth energy landscape without overly rugged local minima <cit.>. Traditionally, simulated annealing for HUBOs performs single bit flips. As this is equivalent to removing an atom from or adding an atom to the configuration, especially in the presence of penalty terms, this can be a costly operation. Thus, for each step in the schedule we loop over every binary variable and attempt to flip it, and then we loop over every opposite-valued pair in the current configuration and attempt to exchange their values. This latter move relocates an existing atom to a random empty location and does not break penalty terms like (<ref>), thus ensuring a smoother energy landscape.
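A compact sketch of this annealing sweep is given below, with the geometric cooling schedule and the two move types (single flips and occupied-empty exchanges). The HUBO is assumed to be given as a dictionary mapping frozensets of variable indices to coefficients, and energies are recomputed from scratch for clarity rather than updated incrementally as an efficient implementation would do.

```python
import math
import random

def hubo_energy(hubo, state):
    """hubo: {frozenset of variable indices: coefficient}; state: list of 0/1."""
    return sum(c for term, c in hubo.items() if all(state[i] for i in term))

def anneal(hubo, n_vars, t_max=1.0, t_min=0.01, n_steps=100, rng=random):
    state = [rng.randint(0, 1) for _ in range(n_vars)]
    energy = hubo_energy(hubo, state)
    best_state, best_energy = list(state), energy

    def accept(e_new, T):
        return e_new <= energy or rng.random() < math.exp(-(e_new - energy) / T)

    for step in range(n_steps):
        # Geometric cooling: T(x) = T_max * (T_min / T_max) ** (x / N_steps)
        T = t_max * (t_min / t_max) ** (step / n_steps)
        # Move 1: single bit flips (adds or removes an atom at a grid point).
        for i in range(n_vars):
            state[i] ^= 1
            e_new = hubo_energy(hubo, state)
            if accept(e_new, T):
                energy = e_new
            else:
                state[i] ^= 1                                 # undo the flip
        # Move 2: exchange opposite-valued pairs (moves an atom to an empty site).
        for i in range(n_vars):
            for j in range(i + 1, n_vars):
                if state[i] == state[j]:
                    continue
                state[i], state[j] = state[j], state[i]
                e_new = hubo_energy(hubo, state)
                if accept(e_new, T):
                    energy = e_new
                else:
                    state[i], state[j] = state[j], state[i]   # undo the swap
        if energy < best_energy:
            best_state, best_energy = list(state), energy
    return best_state, best_energy
```

Penalty terms such as (<ref>) can simply be expanded into additional linear and quadratic entries of the same dictionary before annealing.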
So when we speak of Monte Carlo steps per spin we mean that we attempt n· k + n· k(n· k - 1)/2 spin flips, where n· k is the (unreduced) number of binary variables. §.§ Benchmarking For benchmarking the various optimization schemes for the HUBO and QUBO formulations we use the time-to-solution <cit.> given by TTS(τ) = τln(1-p_r)/ln(1-ℙ_ GS(τ))= τln(0.01)/ln(1-ℙ_ GS(τ)), where τ is the annealing running time as measured on the local machine and ℙ_ GS(τ) is the probability that the corresponding algorithm returns the ground state within a running time of τ. The time-to-solution can be understood as the average time it takes to obtain the ground state with probability p_r, which we set to 0.99. §.§ Analysed systems For the calculation of the potential functions we rely on the Open Knowledgebase of Interatomic Models (OpenKIM) <cit.>. In particular, we look at a three-dimensional cubic unit cell of side length 5.653Å with the Lennard-Jones potential parameters due to Bernades for krypton <cit.> and periodic boundary conditions. We look for the ground-state configuration of krypton atoms in this unit cell, discretized into an equipartitioned lattice of size g^3; the ground state is the face-centered cubic configuration and can be seen in <ref>. We simply refer to this system as the LJ system. For the second system we consider the Stillinger-Weber potential <cit.>, a simple three-body potential that reflects covalent bond dynamics. We use the parametrization for monolayer molybdenum disulfide due to Wen et al. <cit.>. We do this on the supercell consisting of a 2× 2 lattice of hexagonal unit cells with lattice constant 3.20Å and thickness 3.19Å. We equipartition the hexagonal lattice vectors into g parts each and apply periodic boundary conditions in the two in-plane directions, while there are no periodic boundary conditions in the third dimension, which we split into only three lattice points; the number of bits thus scales as 3g^2. The target ground state can be seen in <ref>. We refer to this system as the MoS2 system. § RESULTS & DISCUSSION §.§ LJ grand canonical clusters In <ref> we plot the minimum time to solution against various grid spacings g for calculations with the penalty term as in <ref> with 𝒞_Kr=4 and without it. As the time to solution for each g is usually convex in the number of schedule steps, the minimum is easily determined. The temperature range goes from 1 to 0.01, the penalty strength P is chosen as 10 arb. units (for comparison, the ground state energy of the system is around -52.47 arb. units), and we took the average TTS over 1000 annealing runs. While there are fewer data points for the microcanonical calculations, it is apparent that their time to solution is orders of magnitude larger than in the grand canonical case; for example, for g=10 the time to solution is on the order of 10 ms in the grand canonical case but on the order of 1 s in the microcanonical case. A potential explanation for this is that, due to the penalty terms, there are fewer local minima, but those local minima are deeper; thus, if there are not enough Monte Carlo steps per spin or the temperature is too low, one gets more easily trapped in a local minimum rather than the global minimum.
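For reference, the time-to-solution statistic used throughout these comparisons can be computed directly from the empirical ground-state probability; the following helper implements the definition above (the handling of the two degenerate cases is our own convention).

```python
import math

def time_to_solution(tau, p_gs, p_r=0.99):
    """TTS(tau) = tau * ln(1 - p_r) / ln(1 - P_GS(tau))."""
    if p_gs <= 0.0:
        return math.inf          # the ground state was never found in the sample
    if p_gs >= 1.0:
        return tau               # every run found the ground state
    return tau * math.log(1.0 - p_r) / math.log(1.0 - p_gs)

# e.g. 1000 runs of 5 ms each, 420 of which reached the ground state:
# time_to_solution(0.005, 420 / 1000) gives roughly 0.042 s.
```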
Note also how the curve of total attempted spin flips (solid line) fits the data well for any g, indicating that the difficulty of the problem does not scale with the actual problem size, as the dominating factor in the scaling of the time to solution is the increase in time due to the presence of more spins. As quantum hardware does not have the same time scaling with the spin number <cit.>, it might be favorable to take as high a g as fits onto the quantum processor before attempting continuous local optimizations like gradient descent to optimize the structure beyond the limitations of the lattice. Further, in <ref> we show a representative energy histogram for the calculations with 3 Monte Carlo steps per spin for g∈{12, 14, 16, 18, 20}. Note that, despite not putting any particle-number restrictions on the HUBO, we see in <ref> that the annealing process does not get trapped in the local minima with 3 or 2 krypton atoms, respectively, which, coupled with <ref>, suggests that not adding penalties might overall be a good approach to solving <ref>. Further, note that none of the local minima above the ground state found in <ref> correspond to physical local minima: upon local gradient descent after the simulated annealing, all local minima collapse into the ground state. Observe that, contrary to what one might expect, the average observed residual energy is not an increasing function of g; rather, each g has its own set of local minima that tend to trap the simulated annealing algorithm. This is of course purely due to the choice of neighbourhood of the simulated annealing algorithm, and it can be expected that quantum hardware produces entirely different histograms. §.§ Reduced HUBO performance for MoS_2 The HUBO for the MoS_2 system with g=6 has 1,401,159 third-order interaction terms; in <ref> we list the percentage of remaining third-order interaction terms after applying our deduc-reduc method with the shown threshold. Using thresholds lower than 100 affected the ground state, and using values higher than 500 did not show a significant decrease in the number of spins, as the mesh is not fine enough to allow atoms to come close enough for the potential energy to get much higher. As this does not significantly reduce the order of magnitude of the number of higher-order interaction terms, the deduc-reduc and pairwise-cover approach is not enough to allow for implementation on current iterations of D-Wave annealers <cit.>, but it represents a significant step in that direction. We did not observe a meaningful effect on the time to solution from this measure, and using 100 Monte Carlo steps per spin we found the 2H configuration with a probability of 28% and the 1T configuration with a probability of 7%, showing the predictive capabilities of our approach also for non-global local minima in the potential energy surface.
We have shown that a grand canonical calculation without penalty terms allows for the simultaneous optimization of both the nuclear structure as well as the particle density inside the unit cell while reducing the actual time to solution. Further, we have also shown evidence that the difficulty of solving the nuclear structure problem does not necessarily scale with the mesh size. These results show that it might not always be advantageous to put all the available information into the QUBO to speed up calculations and that on quantum hardware it might be best to choose g as large as possible since the execution speed does not scale with the absolute number of spins, as SA does. For the higher-order systems we have proposed an interaction reduction scheme that allowed for a reduction in the order of 20% of third order interaction terms while maintaining physical accuracy. Even for the simplest system we still have O(10^6) third order interaction terms precluding actual implementation on current iterations of quantum annealers, so that further reductions are necessary. Note added. During the writing of this manuscript we have become aware of a similar proposal for the construction of the QUBO <cit.>. That paper does not address higher-order optimization problems and thus does not address covalent bonds and did not consider the grand canonical case, they do provide a hybrid scheme for continuous optimization after obtaining the solution of the QUBO. The authors wish to thank Prof. Hidetoshi Nishimori for the insightful discussions on the subjects of this paper. This work was supported by JSPS KAKENHI as “Grant-in-Aid for Scientific Research(A)” Grant Number 21H04553. The computation in this work has been done using Supercomputer Center at the Institute for Solid State Physics in the University of Tokyo and TSUBAME3.0 supercomputer provided by Tokyo Institute of Technology.
http://arxiv.org/abs/2307.00995v1
20230703131855
Towards Suicide Prevention from Bipolar Disorder with Temporal Symptom-Aware Multitask Learning
[ "Daeun Lee", "Sejung Son", "Hyolim Jeon", "Seungbae Kim", "Jinyoung Han" ]
cs.CL
[ "cs.CL" ]
Towards Suicide Prevention from Bipolar Disorder with Temporal Symptom-Aware Multitask Learning]Towards Suicide Prevention from Bipolar Disorder with Temporal Symptom-Aware Multitask Learning Dept. of Applied Artificial Intelligence Sungkyunkwan University Seoul Republic of Korea delee12@skku.edu Dept. of Applied Artificial Intelligence Dept. of Human-AI Interaction Sungkyunkwan University Seoul Republic of Korea maze0717@g.skku.edu Dept. of Applied Artificial Intelligence Sungkyunkwan University Seoul Republic of Korea gyfla1512@g.skku.edu Computer Science and Engineering College of Engineering University of South Florida Tampa Florida USA seungbae@usf.edu Corresponding author. Dept. of Applied Artificial Intelligence Dept. of Human-AI Interaction Sungkyunkwan University Seoul Republic of Korea jinyounghan@skku.edu Bipolar disorder (BD) is closely associated with an increased risk of suicide. However, while the prior work has revealed valuable insight into understanding the behavior of BD patients on social media, little attention has been paid to developing a model that can predict the future suicidality of a BD patient. Therefore, this study proposes a multi-task learning model for predicting the future suicidality of BD patients by jointly learning current symptoms. We build a novel BD dataset clinically validated by psychiatrists, including 14 years of posts on bipolar-related subreddits written by 818 BD patients, along with the annotations of future suicidality and BD symptoms. We also suggest a temporal symptom-aware attention mechanism to determine which symptoms are the most influential for predicting future suicidality over time through a sequence of BD posts. Our experiments demonstrate that the proposed model outperforms the state-of-the-art models in both BD symptom identification and future suicidality prediction tasks. In addition, the proposed temporal symptom-aware attention provides interpretable attention weights, helping clinicians to apprehend BD patients more comprehensively and to provide timely intervention by tracking mental state progression. <ccs2012> <concept> <concept_id>10010147.10010257.10010258.10010262</concept_id> <concept_desc>Computing methodologies Multi-task learning</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010179</concept_id> <concept_desc>Computing methodologies Natural language processing</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10010405.10010444.10010449</concept_id> <concept_desc>Applied computing Health informatics</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Computing methodologies Multi-task learning [300]Computing methodologies Natural language processing [500]Applied computing Health informatics [ Jinyoung Han August 1, 2023 ================== § INTRODUCTION Suicide is a severe health concern worldwide. According to the OECD, 14.1 per 100,000 people die yearly from suicide in the United States[<https://data.oecd.org/healthstat/suicide-rates.htm>]. Unfortunately, most suicides have been committed by individuals with mental illness <cit.>. Particularly, people living with bipolar disorder (BD) are more vulnerable to suicide than people with other psychiatric disorders <cit.>. 
It has been reported that the suicide rate for BD patients is up to 30 times higher than that of the general population <cit.>, and suicide fatalities occur in 10-20% of adults who suffer from BD <cit.>. With increasing importance in understanding and analyzing BD patients <cit.>, recently, there has been an effort to analyze distinct behavioral characteristics of BD patients and assess their mental states <cit.> using social media data where they share their daily lives and emotions <cit.>. However, while the prior work has revealed valuable insights into understanding the behavior of BD patients revealed on social media, little attention had been paid to developing a model that can predict the future suicidality of a BD patient. Although a few studies have proposed methods to identify the current risk of suicide in a given social media post <cit.>, suicidal ideation can often quickly lead to an actual attempt, thereby making them ineffective in preventing suicide <cit.>; hence, exploring the BD's risk factors that can lead to suicide ideation for predicting future suicidality is crucial. Therefore, this paper aims to predict the future suicidality of BD patients based on their mood symptoms history revealed in their past social media data, which has not been thoroughly investigated. To this end, we first create a novel BD dataset clinically validated by psychiatrists, including future suicidality and bipolar symptoms. Here, we focus on which bipolar symptoms users have, rather than what diagnosed bipolar types they have, because a transdiagnostic approach helps improve understanding of comorbidity, enabling proper interventions than a diagnostic approach <cit.>. BD is a mood disorder characterized by manic and depressive episodes where two phases show a recurrent pattern that appears and increases over a while, but the following important attribute is not easily considered; many psychological processes are shared in various diagnoses  <cit.>, e.g., anxiety can appear in both depression and manic episodes of BD <cit.>. Figure <ref> illustrates example posts written by an individual with BD gradually leading to suicide. Therefore, timely tracking of mood symptoms that affect future suicidality is inevitable for early intervention, leading to shorter treatment periods and better prognosis in BD patients. However, with the rapid mood swings in BD and limited self-reports from patients, there is a significant gap in understanding the actual path of mood changes between the real world and the conventional clinical setting where clinicians can only see patients under limited conditions and rely on the subjective words of the patients <cit.>. Hence, using real-world data derived from patient reports at the nonclinical scene, such as social media, is helpful to understand better BD symptoms <cit.>. Particularly, we collect social media posts from BD communities on Reddit. We then labeled our dataset, which contains 7,592 posts published by 818 users, following the guidelines outlined in the Columbia Suicide Severity Rating Scale (C-SSRS) <cit.> and Bipolar Inventory of Symptoms Scale (BISS) <cit.> for annotating suicidality and bipolar-related symptoms, respectively. Given the significance of clinical understanding, two psychiatrists validate the annotated dataset with a pairwise annotator agreement of 0.77 and a group-wise agreement of 0.88. 
Unlike the existing datasets <cit.>, the proposed dataset both includes (i) future suicidality of BD patients and (ii) a user's mood history that can be important features for diagnosing mood episodes <cit.> and future suicidality <cit.>. Based on the developed BD dataset, we propose a novel multi-task learning framework to jointly learn (i) the future suicidality of a given BD user and (ii) their BD symptoms over time. Since a BD symptom can contribute to future suicidality differently depending on when it occurs, we suggest a temporal symptom-aware attention method to determine which symptoms are the most influential for predicting future suicidality over time. In particular, the proposed multi-task learning model has three components: (i) the contextualized post-encoder, (ii) a temporal symptom-aware attention layer, and (iii) a task-dependent multi-task decoder. After the model generates post representations using the pre-trained Sentence-BERT (SBERT) <cit.> in the contextualized post encoder, the bi-LSTM layer encodes a sequential context of post representations considering variable time intervals between posts. The temporal symptom-aware attention layer then calculates the attention weights of posts to give more weight to critical symptoms affecting the risk classification decision. Finally, the multi-task decoder estimates the probability of future suicidality levels and BD symptoms. For effective multi-task learning, we sum up the losses for each task using the uncertainty weight loss method <cit.>, evaluating the task-dependent uncertainty of each task. The proposed model can capture the progressive patterns of BD symptoms and outperform the state-of-the-art methods for predicting future suicidality by leveraging the benefits of multi-task learning. Furthermore, investigating the attention weights based on BD symptoms helps to interpret how they affect the user's future suicidality over time. The provided interpretability from our model can support clinicians in improving their understanding of connections between psychiatric conditions and allowing proper interventions for at-risk people. We summarize the contributions of this work as follows. * We release our codes and a novel BD dataset[<https://sites.google.com/view/daeun-lee/dataset/kdd-2023>], which contains both the future suicidality and BD symptoms labels, validated by two psychiatrists. The dataset can benefit researchers aiming to develop methods for suicide prevention. * To the best of our knowledge, this is the first study that proposes a multi-task learning model for predicting the future suicidality of BD patients on social media by leveraging the knowledge of bipolar symptoms (i.e., manic mood, somatic complaints). The model can accurately capture bipolar symptom transition patterns and outperform the state-of-the-art methods for detecting future suicidality. * The proposed temporal symptom-aware attention method provides interpretability, which can help clinicians understand BD patients more comprehensively, thereby providing timely interventions by tracking mental state progression. § RELATED WORK Social Media to Understand Bipolar Disorder. With the proliferation of social media, many studies have attempted to address the severe social problems of BD using user activity data on social media <cit.>. For example, <cit.> showed differences in language use between users with and without BD on Reddit, and this characteristic has been utilized to discover the risk of BD <cit.>. 
Several studies have analyzed (i) the living experience of BD patients <cit.> and (ii) an understanding of how people perceive their mental states and share their experiences <cit.> using social media data through qualitative studies. However, while the prior work on BD analysis has revealed valuable insight into the characteristics of individuals with BD, little attention has been paid to predicting the future suicidality of BD patients. Because suicidal ideation can often be developed into an actual attempt <cit.>, such a model that predicts future suicidality of BD patients can be used for BD patients who usually have suicidal ideation <cit.>. Future Suicidality Assessment Using Social Media Data. While most of the work has focused on identifying the current suicidality revealed in a given post from social media <cit.>, a few studies have investigated monitoring a transition of users who have not yet shown suicidality but would potentially reveal it in the future. <cit.> strived to predict whether adolescents will show suicidal intentions within a month of using Instagram with an ensemble model. Similarly, <cit.> attempted to discover the current suicidality of individuals who posted on mental-health-related communities on Reddit by identifying whether a user would write on SuicideWatch, a suicide-related subreddit. They found that users who show suicidality tend to reveal changes in linguistic structures, interpersonal awareness, and social interactions. Unlike the previous work, we predict future suicidality of BD patients by considering the past temporal transition behavior since BD patients suffer from such mood change symptoms. To the best of our knowledge, this is the first work that proposes a future suicidality prediction model by jointly learning two tasks, predicting (i) future suicidality and (ii) BD symptoms. § BIPOLAR DISORDER DATA   [-0.3in] §.§ Data Collection and Preprocessing Collecting Data. We collected posts published between January 1st, 2008, and September 30th, 2021, from the three representative bipolar-related subreddits, including r/bipolar (BPL), r/BipolarReddit (BPR), and r/BipolarSOs by using the open-source Reddit API[<https://www.reddit.com/dev/api/>]. To identify individuals who exhibit suicide ideation, we also collected all the posts during the same data collection period from r/SuicideWatch, where people share their suicidal thoughts with others. Among the collected posts, we used the posts written by users who have been professionally diagnosed with BD <cit.> in this study. For example, users who reported BD diagnosis, e.g., a user who wrote, “I was diagnosed with Bipolar type-I last year.”, were included in our study. Thus, posts regarding BD symptoms of other individuals, including family members or friends but not themselves, were excluded. Finally, our dataset contains 7,592 posts published by 818 users, i.e., BD patients. Preprocessing Data. We first anonymized the collected posts by removing information that could be used as personal identifiers. We then converted the texts to lowercase, removed special characters, striping whitespaces, and stopwords, and lemmatized them. §.§ Annotation Process To label the collected Reddit dataset, we recruited four researchers, knowledgeable in psychology and fluent in English, as annotators. With the supervision of a psychiatrist, the four trained annotators labeled 818 users and their 7,592 anonymized Reddit posts using the open-source text annotation tool Doccano <cit.>. 
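To make the preprocessing step described above concrete, a minimal pipeline might look as follows; the NLTK-based stop-word list and lemmatizer are one possible choice of tooling and not necessarily the one used by the authors, and the required NLTK resources are assumed to be installed.

```python
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
# assumes nltk.download("stopwords") and nltk.download("wordnet") have been run

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(post_text):
    """Lowercase, drop special characters, strip whitespace, remove stop words,
    and lemmatize, mirroring the preprocessing described above."""
    text = post_text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)     # remove special characters
    tokens = text.split()                        # also strips extra whitespace
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return " ".join(LEMMATIZER.lemmatize(t) for t in tokens)
```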
During annotations, we mainly consider two different label categories: (i) BD symptoms (e.g., manic, anxiety) and (ii) suicidality levels (e.g., ideation, attempt). We further annotate the diagnosed BD type (e.g., BD-I, BD-II) for data analysis. If there is any conflict in the annotated labels across the annotators, all the annotators discuss and reach an agreement under the supervision of the psychiatrists. The information about the final annotated labels in our data is summarized in Table <ref>. We now briefly describe the definition of each category; more details about each category and the corresponding examples are described in Appendix <ref>. A. Diagnosed Bipolar Disorder Types: We label users into one of the three BD diagnosis types based on the self-report in their posts. The BD diagnosis types include Bipolar Disorder-I (BD-I), Bipolar Disorder-II (BD-II), and Not Otherwise Specified Bipolar Disorder (NOS) based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) <cit.> and the International Statistical Classification of Diseases and Related Health Problems (ICD-10) <cit.>. B. Bipolar Disorder Symptoms: We employed the Bipolar Inventory of Symptoms scale (BISS) <cit.> to cover mood polarity (i.e., manic, depressed) and the spectrum of BD symptomatology (i.e., psychosis, somatic complaints). Accordingly, we first annotate mood symptoms consisting of Depressed, Manic, Anxiety, Remission, Irritability, and Other. Note that Other covers moods that do not fall into the other five mood symptoms. If we find any BD somatic symptoms, we annotate an additional somatic symptom label that includes Somatic complaint, Psychosis, and Both. C. Levels of Suicidality: We also annotate the posts to categorize them into different suicidality levels. We utilize the existing criteria from <cit.> that provide common five levels of suicidality, including No risk (NR), Suicide Indicator (IN), Suicidal Ideation (ID), Suicidal Behavior (BR), and Actual Attempt (AT), based on the Columbia Suicide Severity Rating Scale (C-SSRS) <cit.>. We merge No Risk with Suicide Indicator since people with bipolar disorder are already considered more at risk than the general population in suicide <cit.>. §.§ Evaluation of Annotation §.§.§ Expert Validation. Since the accuracy of the labels (e.g., suicidality, BD symptoms) in the dataset is crucial, we validate the annotated BD dataset with two psychiatrists, as domain experts, by providing 212 randomly selected posts published by 25 users. Table <ref> summarizes the Krippendorff’s alpha-reliability <cit.> and Cohen’s Inter-Annotator Agreement <cit.> among the experts and annotators. The results suggest that our annotations in the dataset are reliable as the overall Krippendorff scores show high agreement, for example, 0.88 for suicidality and 0.76 for mood symptoms, which is similar or even higher than previous studies (e.g., α=0.69 <cit.>). The maximum and minimum pairwise Cohen's scores present a fair agreement of 0.89 and 0.66, respectively. §.§.§ Comparison with Existing Datasets. We compare our BD dataset with four widely-used datasets <cit.> from prior studies on suicide and mental health in Table <ref>. First, we find that only two datasets have been released publicly, while the other datasets have not been disclosed. More importantly, no existing dataset has both suicidality and BD symptom labels. The existing suicide datasets <cit.> do not include BD-specific and future suicide information. 
The prior BD datasets <cit.> have no suicidality labels, and their BD diagnosis labels were generated computationally without expert validation. The proposed BD dataset, on the other hand, includes future suicidality and BD symptom labels, which are validated by clinical experts. §.§ Timeline (Post Sequence) Construction   BD patients tend to have more severe symptom changes over time than other mental disorders; thus, suicidality constantly changes. Since users' posts on different timelines show disparate BD symptoms and suicidality levels, it is crucial to predict future suicidality in each timeline and understand how long past posts should be learned to predict future suicidality. Therefore, we construct multiple timelines (i.e., post sequences) for each user, as shown in Figure <ref>. We set a timeline by selecting the past l months for BD symptom observation and the future m months for suicidality identification. We then slide this timeline window within a given user's posts to obtain multiple post sequences from each user. We assign the future suicidality label as the highest level of the suicidality that appeared in posts over the next m months and exclude sequences with less than three posts in the training period (i.e., past posts). By experimenting with different sets of l and m, we set (l, m) = (6, 1), i.e., using the past 6 months for training and the future 1 month for suicidality label extraction, as it shows the best performance. We present the performances of our model with different post-sequence durations in Section <ref>. Therefore, we obtain 5,961 post sequences S. The distribution of suicidality labels for the sequences is 5,056 (IN), 591 (ID), 215 (BR), and 99 (AT). §.§ Data Analysis In this section, we analyze our BD dataset to understand distinct BD symptom patterns for BD patients who potentially have high suicidality in the future. We then assess the survival probability using the Kaplan-Meier estimation <cit.> for each BD type (i.e., BD-I, BD-II, and NOS). §.§.§ BD Symptoms Affecting Future Suicidality To verify the factors associated with the risk of suicide in the future, we classify the dataset into two groups: i) low-risk group (i.e., IN) and ii) severe-risk group (i.e., ID, BR, AT). We then compare the two groups in terms of the LIWC (Linguistic Inquiry and Word Count) <cit.> results of the users' posts and the annotation results using the t-test. As shown in Table <ref>, the target group shows a significantly higher level of past suicidality than the control group. This reveals that a history of suicidality is a significant suicide risk factor in BD patients <cit.>. Furthermore, we observe that the severe-risk group shows more elevated depressed mood, irritability, and psychosis than the low-risk group but is less manic. This observation aligns with the clinical studies that identified dominant depression mood <cit.>, irritability <cit.>, and psychotic features <cit.> as major suicide risk factors in BD patients, but the mania status is not significantly related <cit.>. We also find similar results in the LIWC categories, which reveal higher values in negemo, sad, and anger for the severe-risk group. Unlike previous studies <cit.>, the ratios of anxiety for the two groups are not statistically different. This implies that social media posts make it difficult to detect clinical anxiety accompanied by physical symptoms like agitation, raised blood pressure, or sweating <cit.>. Furthermore, we compare the social characteristics of the two groups. 
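Both the symptom-ratio and LIWC comparisons here, and the social-characteristic comparison that follows, reduce to two-sample significance tests on per-sequence feature values; a minimal SciPy sketch is shown below (Welch's variant of the t-test is one reasonable choice, and the column names are hypothetical).

```python
# Sketch of the low-risk vs. severe-risk group comparison described above.
# Column names are hypothetical; Welch's t-test is an assumption.
import pandas as pd
from scipy import stats

SEVERE = {"ID", "BR", "AT"}   # severe-risk future suicidality labels
LOW = {"IN"}                  # low-risk label

def compare_groups(df: pd.DataFrame, feature: str):
    """df: one row per post sequence, with 'future_label' and a numeric feature."""
    severe = df.loc[df["future_label"].isin(SEVERE), feature]
    low = df.loc[df["future_label"].isin(LOW), feature]
    t, p = stats.ttest_ind(severe, low, equal_var=False)
    return severe.mean(), low.mean(), t, p

# Example usage:
# for feat in ["depressed_ratio", "irritability_ratio", "liwc_negemo", "liwc_sad"]:
#     print(feat, compare_groups(sequences_df, feat))
```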
We discover that the target group uses more family-related words. It could link to negative experiences with family, which often affect their lives, such as a lack of family support, divorce, or unmarried <cit.>. We also find that most people who mentioned work-related words belong to the control group, indicating they might be paid employees or students. According to the previous study <cit.>, unemployment is also associated with higher suicide rates. Overall, the analysis results demonstrate that the diverse symptom-related factors affecting users' future suicidalities revealed in social media data show a similar pattern with a clinical trial, which helps understand the living experience of BD patients when clinicians make decisions. More details are included in Table <ref> in Appendix. §.§.§ Survival Analysis We next assess the survival probability for each BD subtype (i.e., BD-I, BD-II, and NOS) using the Kaplan-Meier estimation <cit.>. Following the estimation method <cit.>, we observe 180 days after a certain time to verify whether a user is still alive in our dataset. Note that we assume a user has not survived if the user has never posted within the observation period. Figure <ref> shows that BD-II patients have the lowest survival rate, followed by BD-I. This interpretation aligns with the prior clinical studies that present BD-II as having higher suicidality than BD-I, and the rapid cycling of BD-II is hazardous <cit.>. §.§ Ethical Concerns We carefully consider potential ethical issues in this work: (i) protecting users' privacies on Reddit and (ii) avoiding potentially harmful uses of the proposed dataset. The Reddit privacy policy explicitly authorizes third parties to copy user content through the Reddit API. We follow the widely-accepted social media research ethics policies that allow researchers to utilize user data without explicit consent if anonymity is protected <cit.>. Any metadata that could be used to specify the author was not collected. In addition, all content is manually scanned to remove personally identifiable information and mask all the named entities. More importantly, the BD dataset will be shared only with other researchers who have agreed to the ethical use of the dataset. This study was reviewed and approved by the Institutional Review Board ((SKKU2022-11-038)). § FUTURE SUICIDALITY PREDICTION MODEL FOR BIPOLAR DISORDER PATIENTS §.§ Problem Statement The proposed multi-task learning model aims to (i) predict the future suicidality y_fs∈{IN, ID, BR, AT} of s_i through a sequence of BD posts P_i in the timeline and (ii) classify BD symptoms y_bd_n∈{. No mood, Depressed, Manic, Irritability, Anxiety, and Remission or Somatic complaint and Psychosis.} that appeared in a post p_t_n^i. We suppose each post shows one BD mood symptom and, at most, two BD somatic-related symptoms. To be more specific, assume that there is a post sequence s_i∈ S={ s_1,s_2,...,s_i}, it can be defined as s_i = { P^i,{y_bd_n}_n=1^| P^i| ,y_fs}. Here, P_i = { p_t_1^i,p_t_2^i ,...,p_t_n^i} represents a set of posts ordered by the posting time where n denotes the number of posts of s_i and t_n indicates the posting time of the n_th post. Also, y_bd_n is a set of BD symptom labels of p_t_n^i, and y_fs is a future suicidality label of s_i. Note that the time interval between t_n and t_1 is within l months since we take the past l months dataset for feature extraction (See <ref>). Figure <ref> illustrates the overall architecture of the proposed model. 
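Before turning to the architecture, the construction of the input sequences s_i described above (the past l months of posts for observation, the next m months for the suicidality label) can be sketched as follows; the field names are illustrative and calendar months are approximated by 30-day windows.

```python
# Sketch of the sliding-window timeline construction: keep the past l months
# of posts, label the sequence with the highest suicidality level in the next
# m months, and drop sequences with fewer than three training posts.
from datetime import timedelta

SEVERITY = {"IN": 0, "ID": 1, "BR": 2, "AT": 3}

def build_sequences(posts, l_months=6, m_months=1):
    """posts: one user's posts as dicts with 'time' (datetime), 'text', and
    'suicidality', sorted by posting time."""
    past = timedelta(days=30 * l_months)
    future = timedelta(days=30 * m_months)
    sequences = []
    for anchor in posts:                       # slide the window post by post
        t0 = anchor["time"]
        train = [p for p in posts if t0 - past <= p["time"] <= t0]
        fut = [p for p in posts if t0 < p["time"] <= t0 + future]
        if len(train) < 3 or not fut:
            continue
        label = max(fut, key=lambda p: SEVERITY[p["suicidality"]])["suicidality"]
        sequences.append({"posts": train, "future_label": label})
    return sequences
```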
The model includes three main components: a Contextualized Encoder, a Temporal Symptom-aware Attention Layer, and a Task-dependent Multi-task Decoder. §.§ Contextualized Post Encoder   Each post includes BD-related information about a user. A sequence of posts can show the progressive mood states, which is important information for assessing future suicidality <cit.>. To generate the semantic representation of each post, we employ the pre-trained Sentence-BERT (SBERT) <cit.> , which showed promising results in detecting moments of change in the mood <cit.> and representing historical tweets  <cit.>. SBERT is a modification of the pre-trained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings by computing the mean of output vectors for all tokens to derive a fixed-size sentence embedding. We encode each post p_t^i as follows: e_t^i = SBERT(p_t^i) ∈ IR^1024 §.§ Temporal Symptom-aware Attention Layer   §.§.§ Sequential context modeling To encode a sequential context of each timeline, we leverage the bidirectional LSTM, a popular method for capturing long-term dependency on social media <cit.>. Specifically, post-encoding e_t^i is fed into a Bidirectional LSTM to derive text representation h_t^i. This process is repeated twice, each of which processes the post sequence from left to right (i.e., forward) and right to left (i.e., backward). Finally, the hidden state vectors from each procedure are concatenated as follows: h_t^i = LSTM ( e_t^i, h^i_t-1 ) h_t^i = LSTM ( e_t^i, h^i_t+1 ) h_t^i = [ h_t^i, h_t^i ] In this way, the BiLSTM converts the sequence representation of posts E = [ e_t_1^i, e_t_2^i, ..., e_t_n^i ] into contextual representations H = [ h_t_1^i, h_t_2^i, ..., h_t_n^i ] ∈ IR^d×n where d is the dimension of the hidden state vector. §.§.§ Temporal symptom-aware attention mechanism We then apply the attention mechanism to pay more attention to a critical mental state affecting the risk classification decision. However, conventional attention mechanisms, such as self-attention  <cit.>, do not consider the BD characteristics that each symptom can contribute differently depending on when it occurs. Since the time intervals between posts may vary considerably, identifying these patterns can be essential in interpreting the mood status over time <cit.>. Therefore, we propose temporal symptom-aware attention (Temp SA attention) as follows: g^i = ∑_t=1^t_na_t^i h_t^i a_t^i = exp(tanh(ℱ(δ(h_t^i, Δ_t))))/∑_t=1^t_nexp(tanh(ℱ(δ(h_t^i, Δ_t)))) δ(h_t^i, Δ_t) = sigmoid(θ_h -μ_hΔ_t)h_t^i where ℱ is a fully-connected layer and tanh() is the activation function. θ_h is the symptom-specific learnable parameter influenced by h_t^i, and μ_h is also a learnable parameter representing how the influence of h_t^i changes over time. Δ_t is the time interval between the most recent post h_t_n^i and target post h_t^i. The sigmoid function transforms θ_h -μ_hΔ_t into a probability between 0 and 1. Finally, we derive a sequence representation g_i∈ G = { g_1,g_2,...,g_i} where a_t^i indicates how symptom-specific information δ(h_t^i, Δ_t) at Δ_t ago affects the future condition. §.§ Task-dependent Multi-task Decoder   §.§.§ Future suicidality prediction To predict the suicidality of each sequence in the future, the proposed decoder generates the final prediction vector as follows: ŷ_fs = ℱ_a(ReLU(ℱ_b(g_i))) where ℱ_a,ℱ_b are fully-connected layers and ReLU is an activation function. 
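Putting the pieces together (SBERT post embeddings, the BiLSTM, the temporal symptom-aware attention, and the task heads), a condensed PyTorch sketch of the forward pass is given below; the layer sizes and the exact parameterization of θ_h and μ_h are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class TempSymptomAwareModel(nn.Module):
    """Condensed sketch: SBERT embeddings -> BiLSTM -> temporal symptom-aware
    attention -> task heads. Sizes and the theta/mu parameterization are
    illustrative assumptions."""

    def __init__(self, emb_dim=1024, hidden=512, n_symptoms=9, n_levels=4):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                              batch_first=True, bidirectional=True, dropout=0.1)
        d = 2 * hidden
        self.theta = nn.Parameter(torch.zeros(d))   # symptom-specific offset
        self.mu = nn.Parameter(torch.zeros(d))      # temporal decay rate
        self.score = nn.Linear(d, 1)
        self.fs_head = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_levels))
        self.bd_head = nn.Sequential(nn.Linear(emb_dim, d), nn.ReLU(), nn.Linear(d, n_symptoms))

    def forward(self, e, dt):
        # e: (B, T, emb_dim) SBERT post embeddings
        # dt: (B, T) time gaps to the most recent post (e.g., in days)
        h, _ = self.bilstm(e)                                    # (B, T, d)
        gate = torch.sigmoid(self.theta - self.mu * dt.unsqueeze(-1))
        delta = gate * h                                         # time-decayed hidden states
        a = torch.softmax(torch.tanh(self.score(delta)).squeeze(-1), dim=1)
        g = (a.unsqueeze(-1) * h).sum(dim=1)                     # sequence representation
        return self.fs_head(g), self.bd_head(e), a
```

For a batch of padded post sequences, the forward pass returns sequence-level suicidality logits, post-level symptom logits, and the attention weights a_t^i that are later used for interpretation.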
Inspired by <cit.>, we apply the ordinal regression loss <cit.> as the objective function. Rather than employing a one-hot vector representation of the ground-truth labels, a soft-encoded vector representation is used to account for the ordinal nature of the suicidality levels. Let Y_fs = {IN = 0, ID = 1, BR = 2, AT = 3} = {r_i}_i=0^3 denote the ground-truth levels; the soft labels are then computed as a probability distribution y_fs = [y_0, y_1, y_2, y_3] over Y_fs as follows: y_fs_i = e^-ϕ(r_t, r_i) / ∑_k=1^λ e^-ϕ(r_t, r_k), ∀ r_i ∈ Y_fs, where ϕ(r_t, r_i) is a cost function that penalizes the distance between the actual level r_t and a risk level r_i ∈ Y_fs, formulated as ϕ(r_t, r_i) = α |r_t - r_i|, with α a penalty parameter for inaccurate predictions. Finally, the cross-entropy loss against these soft targets is calculated as ℒ_fs = -(1/b) ∑_j=1^b ∑_i=1^λ y_fs_ij log ŷ_fs_ij, where b is the batch size and λ is the number of risk levels. §.§.§ BD symptom classification In addition to future suicidality prediction as the main task, we enhance the model with an auxiliary task, bipolar disorder symptom classification. If the post features are good predictors of BD symptoms, the information derived from them in the auxiliary task can also be leveraged for the main task. Taking the representation e_t^i derived from the post encoder layer for each post p_t^i, the model computes the symptom-classification logits as ŷ_bd = ℱ_c(ReLU(ℱ_d(e_t^i))), where ℱ_c, ℱ_d are fully-connected layers and ReLU is an activation function. BD symptom classification is treated as a multi-label classification problem, so the objective function is the binary cross-entropy ℒ_bd = -(1/b) ∑_j=1^b ∑_i=1^γ [ y_bd_ij log ŷ_bd_ij + (1 - y_bd_ij) log(1 - ŷ_bd_ij) ], where b is the batch size and γ is the number of symptom categories. §.§.§ Multi-task learning Since our multi-task learning model solves tasks at different scales, i.e., post-level BD symptom prediction and sequence-level suicidality prediction, manually tuning the weights between the task losses is complicated and costly. Therefore, for effective multi-task learning, we employ the uncertainty weight loss <cit.>, which combines multiple loss functions by estimating the task-dependent uncertainty of each task. Finally, the overall objective for multi-task learning is the weighted sum of the two losses: ℒ_total = (1/2σ_fs^2) ℒ_fs(W) + (1/2σ_bd^2) ℒ_bd(W) + log σ_fs σ_bd, where σ_fs, σ_bd are learnable parameters representing the uncertainty of each task and W denotes the model weights. § EXPERIMENTS §.§ Baselines Since predicting the future suicidality of BD patients has not been explored in the literature, we compare against baseline approaches from the closely related task of identifying current suicide risk. All the baseline models consider a sequential context of post representations when detecting suicidality. * Suicide Detection Model (SDM) <cit.>: The SDM adopts an LSTM layer with an attention mechanism. Fine-tuned FastText embeddings are used to encode posts. * C-CNN <cit.>: The C-CNN is trained on posts encoded with ConceptNet word embeddings <cit.>. * SISMO <cit.>: SISMO uses Longformer <cit.> and a Bidirectional LSTM to obtain dynamic post embeddings. * STATENet <cit.>: STATENet is a time-aware transformer-based model that uses emotional and temporal contextual cues for suicidality assessment.
* UoS <cit.>: UoS is the best performing model at the CLPsych 2022 shared task with <cit.> to capture moments of change in a suicidal individual's mood. The obtained embeddings from the pretrained Sentence BERT are fed into a biLSTM layer and a multi-head attention layer. §.§ Experimental Settings To solve the imbalanced data issue, the random oversampling technique <cit.> is used to generate new train samples by randomly sampling each class independently with the replacement of the currently available samples. All experiments are performed with the stratified 5-fold cross-validation, ensuring that the users in the test set are entirely disjoint and do not overlap with those in the training set. We use 10% of the training set as validation during training to tune our models' hyper-parameters. For reproducibility, detailed experimental settings are summarized in Appendix <ref>. § RESULTS §.§ Model Performance Table <ref> summarizes the weighted average precision, recall, and F1-score of the proposed model and the baselines for the future suicidality and the bipolar symptom prediction tasks. Future suicidality task: Since our dataset is disproportionate across the suicide risk levels as shown in Table <ref>, we conduct experiments over the three classification tasks: (i) 2-level, (ii) 3-level, and (iii) 4-level classifications, as shown in Table <ref>. For example, we combine AT and BR categories to the highest risk level for the 3-level classification; we merge AT, BR, and ID for the 2-level classification. As shown in Table <ref>, we find that the proposed model outperforms all the baseline methods regardless of how the suicide risk level is structured. We observe that STATENet <cit.> shows the lowest performance among the baseline methods. That is because STATENet fails to utilize sequential data, while other baselines consider users' posts over time as input sequences. Although other baselines perform better than STATENet by exploiting sequential data, our model surpasses them by learning the temporal dynamics of BD symptoms. BD symptom task: The results show that our proposed model improves BD symptom prediction performance compared to UoS <cit.>. While our model directly utilizes contextualized post embeddings to predict BD symptoms in each post, UoS considers post sequences. This implies that considering the post sequence may interfere with BD symptom prediction of each post since bipolar patients have a characteristic of rapidly changing mood. Multi-task learning: To evaluate the performance of multi-task learning, we train the proposed model separately for each task (i.e., Single-task learning (STL)). We find that the proposed multi-task learning (MTL) improves prediction performances from single-task learning (STL) in both BD symptom identification and future suicidality prediction tasks by achieving 61.24% and 82.30%, respectively. This suggests that jointly learning BD symptom information helps forecast future risks of suicidality by sharing informative presentations and parameters. §.§ Ablation Study Model Component. We perform an ablation study to examine the effectiveness of each component. Applying the uncertainty weight loss function is a common technique in multi-task learning that can address the challenge of tuning loss coefficients for different tasks with different prediction levels. 
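Concretely, the two objectives defined above (the soft ordinal encoding of the suicidality levels and the uncertainty-weighted combination with the BD symptom loss) can be written compactly; the following is a minimal PyTorch sketch, in which the log-variance parameterization of σ_fs and σ_bd is an implementation choice rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_ordinal_targets(true_level, n_levels=4, alpha=1.8):
    """Soft label distribution, y_i proportional to exp(-alpha * |r_t - r_i|)."""
    levels = torch.arange(n_levels, dtype=torch.float)
    cost = alpha * (levels - true_level.unsqueeze(-1).float()).abs()   # (B, n_levels)
    return F.softmax(-cost, dim=-1)

def suicidality_loss(logits, true_level, alpha=1.8):
    """Cross-entropy against the soft ordinal targets (L_fs)."""
    y = soft_ordinal_targets(true_level, logits.size(-1), alpha)
    return -(y * F.log_softmax(logits, dim=-1)).sum(-1).mean()

class UncertaintyWeightedLoss(nn.Module):
    """L = L_fs/(2*sigma_fs^2) + L_bd/(2*sigma_bd^2) + log(sigma_fs*sigma_bd),
    with log-variances learned jointly with the model parameters."""
    def __init__(self):
        super().__init__()
        self.log_var_fs = nn.Parameter(torch.zeros(1))
        self.log_var_bd = nn.Parameter(torch.zeros(1))

    def forward(self, loss_fs, loss_bd):
        total = 0.5 * torch.exp(-self.log_var_fs) * loss_fs \
              + 0.5 * torch.exp(-self.log_var_bd) * loss_bd \
              + 0.5 * (self.log_var_fs + self.log_var_bd)
        return total.squeeze()

# The BD symptom loss (L_bd) is multi-label binary cross-entropy, e.g.
# F.binary_cross_entropy_with_logits(bd_logits, bd_targets).
```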
In our case, the two tasks, suicidality prediction, and BD symptom classification, show different granularities, hence we use the uncertainty parameters to balance their weights during training. This can prevent one task from dominating the objective function and improve the model's overall performance. As shown in Table <ref>, there is a significant drop in performance when the uncertainty weight loss is not used. Overall, Table <ref> shows that the performance is inferior when the self-attention mechanism is applied instead of the proposed temporal symptom-aware attention mechanism. By adding symptom-specific information, we suppose the model can learn that each symptom contributes differently to future suicidality over time. This indicates that not only understanding time intervals but also mood swings over time is essential to predict the future suicidal risk of BD users. Bipolar Symptom. We conjecture that the effects of mood and somatic symptom information of BD for predicting future suicidality would differ. To validate this, we train the multi-task model with either mood or somatic symptoms. Note that `w/o somatic' refers to a case where only information on the six mood symptoms is included, but any information on the somatic symptoms is excluded. On the other hand, `w/o mood' refers to a case where only information on the somatic symptoms is included, but any information on the mood symptoms is excluded. As shown in Table <ref>, the multi-task model trained with only mood symptoms (`w/o somatic') achieves a higher performance (81.77% of F1-score) than the model trained with only somatic symptoms (`w/o mood'). Although somatic symptoms are prominent suicidality for BD patients <cit.>, they appear less frequent than mood symptoms. Thus, the improved performance in the final model (MTL All) signifies that both symptoms play a complementary role in solving the main task. §.§ Observational and Predictable Periods   As illustrated in Section <ref>, we conduct experiments to find how many months l ∈{ 1,3,6,12} we should observe in predicting the future suicidality on the next period m ∈{1,3,6,12}. Figure <ref> shows the weighted average F1 score and recall for predicting suicidality in 1 month by training l ∈{ 1,3,6,12} months. We find that the performance increases as more past days are trained, but no improvement beyond 6 months. This implies that the 12 months observation period is too long to capture informative recent patterns to predict future suicidality. However, the longer the future period to be predicted, the worse the performance is trained for 6 months, as shown in Figure <ref>. We interpret that this is because the mood swings of individuals with BD tend to be radical and impulsive <cit.>, limiting the model's ability to predict the far distant future from historical records. Therefore, the performance of the proposed model is the best when (l,m) = (6,1). According to previous studies, BD patients hospitalized by suicide attempts are likely to commit suicide again between 3 and 6 months after discharge <cit.>. The proposed model can help offer proper treatment by diagnosing BD symptoms and suicidality early. §.§ Interpretability of the Model To demonstrate the interpretability of the proposed model by analyzing attention weights related to BD symptoms, we examine two example sequences, s_1 and s_3, extracted from the same user A, where their levels of future suicidality are different. 
In particular, we compare the proposed model with and without the temporal symptom-aware attention mechanism (i.e., `MTL All' vs. `MTL w/o Temp att') in Figure <ref>. As shown in Figure <ref>(a), both models correctly predict BD symptoms for each post, but only `MTL all' correctly identifies future suicidality of s_1 and s_3. This implies that temporal tracking is useful since bipolar patients have a characteristic of rapid mood swings. We further analyze how the model assigns the temporal symptom-aware attention a_t^i (in Equation <ref>) to each post over time in predicting future suicidality. In the s_1 sequence, `MTL All' assigns a lower attention score to p_1^1, whereas giving more attention to p_2^1-p_4^1. It indicates that irritability/somatic and depressed/psychosis are crucial for predicting future suicidality <cit.>. Also, we find a similar tendency for the BD symptoms' attention weights in Figure <ref>(b); as the risk of suicide increases, the importance of attention weights to anxiety decreases, while the importance of depressed and irritability increases significantly. Notably, s_3 has identical posts with s_1 (i.e., p_1^3-p_4^3), but the model differently gives attention weights to them. By focusing on a manic symptom in p_4^3, the model forecasts lower suicidality than s_1, which implies user A's mental status is shown to be improved. Moreover, we find that our model tends to focus on recent posts, giving more attention to the manic symptoms of p_4^1, which is a new observation compared to a previous work that claimed that manic episodes less contribute to future suicidality <cit.>; a future validation is required. We believe the proposed future suicidality prediction model with an interpretable function, as exemplified in Figure <ref> can be used for screening and identifying individuals with mental illness on social media to prioritize early intervention for clinical support. § CONCLUDING REMARKS In this study, we proposed a novel end-to-end multi-task learning model to jointly learn (i) the future suicidality and (ii) BD symptoms of individuals with BD over time. The proposed model for predicting the future suicidality using temporal symptom-aware attention can (i) accurately capture BD transition patterns and (ii) outperform the state-of-the-art methods for detecting future suicidality for BD patients. We plan to open our codes and dataset, which contains the future suicidality and BD symptom labels, validated by two psychiatrists. The proposed model and dataset have great utility in identifying the potential suicidality of users in the future, hence preventing individuals from potential suicidality at an early stage. Clinical Applicability. As an interdisciplinary study, our work contributes to both machine learning and Psychiatry communities. Most BD applications do not provide a suicide warning function and rely solely on self-assessment. On the other hand, we proposed an interpretable and automatic model for predicting the future suicidality of BD patients by introducing a temporal symptom-aware attention mechanism based on sequential context learning. With the advantage of the proposed model that can help identify complex mood changes and future suicidality in a real context, it can be used for monitoring risks of suicidality for those who are underrepresented in a clinical setting, such as minorities, uninsured people, or patients with a lack of insight. 
In addition, more concise and timely tracking of mood symptoms can reduce the diagnosis duration, leading to shorter treatment periods and better prognosis in BD patients. Our dataset can help to establish a prevention system for early detection and immediate intervention of BD patients with high-risk suicidality for clinical support. This will enable us to reduce mental health-related social costs and promote public health. Limitation. Assessing future suicidality on social media can be subjective <cit.>, and the analysis of this paper can be interpreted in various ways by the researchers. The experiment data may be sensitive to demographic, annotators, and media-specific biases <cit.>. Although we carefully selected the users who have been clinically diagnosed as BD based on their Reddit posts <cit.>, possibly noisy data can be included if the users misunderstood their diagnoses or did not tell the truth. Moreover, there might be linguistic discrepancies between Reddit and other social media users (e.g., Twitter users) who self-reported BD diagnosis. Lastly, using digital-trace data from social media for predicting mental health can cause low performance depending on the condition of a clinical setting <cit.>. Future Work. Unfortunately, BD is often misdiagnosed as a depressive disorder since the depressive phase occupies most of the mood episodes <cit.>. It has been reported that 9 years were taken on average to clinically diagnose BD <cit.>, which can potentially delay treatment opportunities and increase the risk of suicide <cit.>. Further research could focus more on comparing similar symptoms in different diagnoses to make precise detection (e.g., depressed mood in major depression). We also aim to apply the proposed model to data collected in the clinical field, such as EMR data, to validate the proposed model's effectiveness to determine the potential for practical applications. We would like thank Dr. Ji Hyun An (M.D., Ph.D.) and Myung Hyun Kim (M.D.) for their thoughtful advice on experimental design and clinical data validation. We also thank Chaewon Park for helping data annotation and statistical analysis. This research was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2022S1A5A8054322), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C2007625). ACM-Reference-Format § ANNOTATION CRITERIA §.§ Annotation Process In this study, we aim to develop a model to predict the risks of future suicidality of bipolar disorder (BD) patients using past social media data. To this end, we create a BD dataset that includes the labels of future suicidality and bipolar symptoms clinically verified by psychiatrists. This section briefly states how we create a BD dataset based on our annotation guideline. As our first step, we collect social media posts published between January 1, 2008, and September 31, 2021, from three bipolar-related subreddits using the open-source Reddit API [<https://www.reddit.com/dev/api/>]. Among the collected posts, we used posts written by users who have been diagnosed with BD by professionals <cit.> and users who reported BD diagnosis (e.g., “I was diagnosed with Bipolar type-I last year.”). Based on the criteria, our dataset contains 7,592 posts published by 818 users, i.e., BD patients. For the preprocessing, we anonymize the collected posts and convert the texts. 
Then we conduct annotation with four trained annotators using the open-source text annotation tool Doccano in Figure <ref>. §.§ Annotation Guideline For our annotation, we consider three different label categories that include the diagnosed BD type (e.g., BD-I, BD-II), the BD symptom (e.g., manic, anxiety), and the level of suicidality (e.g., ideation, attempt). Discussion with the psychiatrist selected the criteria of three different label categories. We briefly describe the details of the annotation guideline in the following subsections. §.§.§ Diagnosed Bipolar Disorder Types To use only posts of users diagnosed with bipolar by medical institutions, we classify users whose self-reports are bipolar related to diagnoses (e.g., “Hey, I'm diagnosed bipolar II posts being diagnosed with schizophrenia.”). We label users into three BD diagnosis types, including Bipolar Disorder-I (BD-I), Bipolar Disorder-II (BD-II), and Not Otherwise Specified Bipolar Disorder (NOS). Table <ref> describes the definition of three BD diagnosis types inspired by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) <cit.> and the Statistical Classification of Diseases and Related Health Problems (ICD-10) <cit.>, which classify bipolar disorder into several sub-types based on the frequency and intensity of episodes. For example, BD-I requires at least one manic episode, while BD-II shows at least one hypomanic and one major depressive episode during their lifetime. Moreover, we annotate NOS when a patient shows some symptoms of BD but does not necessarily satisfy all the criteria. §.§.§ Bipolar Disorder Symptoms We can filter out posts without BD diagnosis type from Appendix <ref>. We annotate BD symptoms for each post that fits the requirement to track users diagnosed as bipolar with time series. Based on the BISS(Bipolar Inventory of Symptoms Scale) <cit.> and discussions with the psychiatrist, our annotation criteria came out. We annotate users in case bipolar-related symptoms are exposed. Table  <ref> describes the definition and the corresponding examples of BD symptoms used in this study. For more systematic annotation, we consider mood and somatic symptoms. We first annotate the most prominent mood symptoms among Depressed, Manic, Anxiety, Remission, Irritability and Other. Additionally, we add Other to cover moods that do not fall into the other five mood symptoms. Moreover, simultaneously with some posts, we annotate an additional somatic symptom label in option with Somatic complaint, Psychosis, and Both, which are considered vital factors of suicidality  <cit.>. While annotating, we delete advertising posts that do not fit the purpose. §.§.§ Risks of Suicidality To determine the user's different levels of suicidality while tracking BD symptoms, we also simultaneously annotate the risks of suicidality. Based on the post contents, we label the risk of suicidality, which fits the current situation. We utilize the existing criteria from <cit.> that provide five levels of suicidality, including No Risk (NR), Suicide Indicator (IN), Suicidal Ideation (ID), Suicidal Behavior (BR), and Actual Attempt (AT), based on the Columbia Suicide Severity Rating Scale (C-SSRS) <cit.>. For our annotation, we merge No Risk with Suicide Indicator since people with bipolar disorder are already considered more at risk than the general population in suicide <cit.>. In the suicide indicator level, posts reveal risk indicators such as a history of divorce, chronic illness, or suicide of a loved one. 
Suicidal ideation posts mention the willingness to take own life (e.g., “I still want to die. I still should die.”), and suicidal behavior posts contain actions with higher risks, such as planning a suicide attempt. Posts show deliberate action at the actual attempt level that can lead to death (e.g., “I failed to commit suicide last night, what do I do now?”). Table  <ref> details each category's descriptions and examples of Risks of Suicidality. § EXPERIMENT SETTINGS We tune hyperparameters based on the highest F1 score obtained from the cross-validation set for the models. We use the grid search to explore the dimension of hidden state H ∈{32, 64, 128, 256, 512}, number of LSTM layers n ∈{1,2,5}, dropout σ∈{0.1, 0.2, 0.3, 0.4, 0.5}, initial learning rate lr ∈{1e-5, 2e-5, 3e-5, 5e-5}, and control the parameter for ordinal regression α∈{0, 0.2, ..., 3.8}. The optimal hyperparameters were found to be: H = 512,n = 2, σ = 0.1, lr = 1e-5, and α = 1.8. We implement all the methods using PyTorch 1.6 and optimize with the mini-batch AdamW <cit.> with a batch size of 64. We use the Exponential Learning rate Scheduler with gamma 0.001. We train the model on a GeForce RTX 2080 Ti GPU for 200 epochs and apply early stopping with patience of 20 epochs.
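As a rough guide, these settings translate into the following PyTorch training skeleton; model, train_loader, val_loader, compute_total_loss, and evaluate are placeholder names assumed to be defined elsewhere.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import ExponentialLR

# Reported optimum: hidden 512, 2 LSTM layers, dropout 0.1, lr 1e-5,
# ordinal penalty alpha 1.8, batch size 64; gamma follows the stated value.
optimizer = AdamW(model.parameters(), lr=1e-5)
scheduler = ExponentialLR(optimizer, gamma=0.001)

best_f1, patience, bad_epochs = 0.0, 20, 0
for epoch in range(200):
    model.train()
    for batch in train_loader:
        optimizer.zero_grad()
        loss = compute_total_loss(model, batch)   # multi-task loss (L_total)
        loss.backward()
        optimizer.step()
    scheduler.step()

    f1 = evaluate(model, val_loader)              # weighted F1 on the validation split
    if f1 > best_f1:
        best_f1, bad_epochs = f1, 0
        torch.save(model.state_dict(), "best_model.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                # early stopping, patience of 20 epochs
            break
```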
http://arxiv.org/abs/2307.02426v1
20230705164817
Multi-Height Observations of Atmospheric Gravity Waves at Solar Disk Center
[ "Oana Vesa", "Jason Jackiewicz", "Kevin Reardon" ]
astro-ph.SR
[ "astro-ph.SR" ]
Oana Vesa ovesa@nmsu.edu 0000-0001-6754-1520]Oana Vesa Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001, USA 0000-0001-9659-7486]Jason Jackiewicz Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001, USA 0000-0001-8016-0001]Kevin Reardon National Solar Observatory, Boulder, CO 80303, USA Atmospheric gravity waves (AGWs) are low-frequency, buoyancy-driven waves that are generated by turbulent convection and propagate obliquely throughout the solar atmosphere. Their proposed energy contribution to the lower solar atmosphere and sensitivity to atmospheric parameters (e.g. magnetic fields and radiative damping) highlight their diagnostic potential. We investigate AGWs near a quiet Sun disk center region using multi-wavelength data from the Interferometric BIdimensional Spectrometer (IBIS) and the Solar Dynamics Observatory (SDO). These observations showcase the complex wave behavior present in the entire acoustic-gravity wave spectrum. Using Fourier spectral analysis and local helioseismology techniques on simultaneously observed line core Doppler velocity and intensity fluctuations, we study both the vertical and horizontal properties of AGWs.Propagating AGWs with perpendicular group and phase velocities are detected at the expected temporal and spatial scales throughout the lower solar atmosphere. We also find previously unobserved, varied phase difference distributions among our velocity and intensity diagnostic combinations. Time-distance analysis indicates that AGWs travel with an average group speed of 4.5 km s^-1, which is only partially described by a simple simulation suggesting that high-frequency AGWs dominate the signal. Analysis of the median magnetic field (4.2 G) suggests that propagating AGWs are not significantly affected by quiet Sun photospheric magnetic fields. Our results illustrate the importance of multi-height observations and the necessity of future work to properly characterize this observed behavior. § INTRODUCTION The solar atmosphere is a conducive environment for the generation and propagation of a plethora of wave motions that co-exist and interact with one another, including atmospheric gravity waves (AGWs). To avoid confusion with the interior standing g-modes (or internal gravity waves), we use the term AGWs to denote propagating gravity waves throughout the solar atmosphere. Along with other commonly studied waves, AGWs might play an important role in transporting energy and heating the lower solar atmosphere. AGWs are buoyancy-driven waves with gravity acting as their restoring force. These low-frequency waves are believed to be excited stochastically by turbulent convection below the stably stratified surface. The low temporal frequencies (1-4 mHz), short vertical wavelengths, and transverse propagation has made it difficult to observationally measure and track these waves. A detailed understanding of these waves would not only provide further insight into their behavior but also a window into the complex dynamics of the solar atmosphere and their interactions with the magnetic field. Studies of AGWs in the field of solar physics began with <cit.> and their proposed contribution to the coronal heating problem. 
Since then, phase difference analysis and other diagnostics using intensity and velocity fluctuations sampled simultaneously at different atmospheric heights near disk center on the quiet Sun <cit.> have provided observational evidence for the existence of AGWs from the low photosphere to the low chromosphere. An extensive theoretical framework on the characterization and energy dissipation of these waves with radiative damping was established in the seminal works of <cit.>. These waves are expected to be affected by severe radiative damping in the lower photosphere which suppresses their propagation <cit.>. However, the observational detection of these waves throughout the lower solar atmosphere indicates that they have enough energy to overcome this dissipative process. With previous observational studies of AGWs focused near disk center, we lack insight into their transverse nature. Numerical simulations by <cit.> show that AGWs can reach horizontal velocities of 5-6 km s^-1 in comparison to vertical velocities of 1-2 km s^-1, indicating the importance of their horizontal propagation. In recent years, the effects of the magnetic field on these waves have been explored <cit.>, indicating their potential as diagnostics for the average magnetic field. Realistic numerical simulations of the magnetized solar atmosphere carried out using CO5BOLD code <cit.> by <cit.> and <cit.> demonstrate that AGWs are generated abundantly and propagate irrespective of the field strength and strong radiative damping in the low photosphere. These simulations show that the properties of the magnetic field in the upper photosphere can significantly modify their propagation. Regions of strong, vertical magnetic fields could act to suppress the upward propagation of AGWs <cit.> while horizontal magnetic fields would allow these waves to eventually reach chromospheric heights <cit.> and encounter their wave breaking heights, as discussed in <cit.>. Analysis by <cit.> and <cit.> indicate that even weak vertical magnetic fields can modify the properties of AGWs irrespective of radiative damping, reflecting them back down to the lower solar atmosphere as slow mode MHD waves. These simulations seemingly demonstrate that AGWs are strongly affected by the magnetic field topology. The only observational indication of the suppression of propagating AGWs at locations of strong magnetic flux was given by <cit.> using high-resolution, multi-wavelength data from the Interferometric BIdimensional Spectrometer <cit.> and Michelson Doppler Imager <cit.>. The authors speculated this was evidence of the mode conversion of AGWs into Alfvén waves. To properly characterize the behavior of AGWs, high-resolution, multi-wavelength narrowband imaging of the lower solar atmosphere is necessary. The large horizontal propagation speeds associated with these modes and their potential as magnetic field diagnostics necessitates simultaneously observed multi-height velocity fluctuations derived from spectral lines that have relatively small formation height separations and are not overly sensitive to the magnetic field. Detailed studies of these waves at locations other than disk center and around more magnetic environments are necessary to provide insight into their large horizontal velocities and magnetic character. 
In the first of several papers, we revisit AGWs at quiet Sun disk center equipped with high spatial and temporal resolution, multi-wavelength ground based data in tandem with co-aligned space based data spanning the lower solar atmosphere. We employ Fourier analysis and local helioseismology techniques on derived line core Doppler velocities and intensities to probe the characteristics of AGWs. This detailed paper will illustrate the behavior of AGWs seen side by side through both intensity and Doppler velocity diagnostics for the first time. It will serve as a baseline that can be referenced for upcoming datasets exploring the behavior of AGWs at different viewing angles on the Sun. Upcoming papers will use consistent line pairs, color scales, analysis, reduction processes, and instruments to facilitate comparisons in order to better understand AGW behavior throughout the lower solar atmosphere. While our main interest is to study propagating AGWs and compare our observations to established simulations and theory, it would be a disservice not to spend some time discussing the neighboring wave regimes. The complexity of the solar atmosphere and the relative lack of detailed AGW observations lend themselves to a discussion of the full acoustic-gravity wave spectrum so that we can anchor and compare our results to previous observations. Moreover, such detailed studies are imperative to examine what information we could learn from multi-height observations with new global helioseismology projects and with next-generation solar telescopes, such as the Daniel K. Inouye Solar Telescope <cit.>. We anticipate that DKIST will provide many new multi-height observations that will help us better understand the dynamics of the solar atmosphere. The paper is structured as follows. In Section <ref>, we discuss our 2.75 hour long co-spatial and co-temporal high-resolution, multi-wavelength ground and space based observations. We derive line core Doppler velocities and line minimum intensities for several photospheric spectral lines. In Section <ref>, we discuss the isothermal k_h-ν diagnostic diagram and dispersion equation typically used to differentiate the various wave regimes. We use Fourier analysis to construct phase difference and coherence spectra between combinations of spectral diagnostics to investigate the behavior of AGWs, and we estimate the separation in formation height between diagnostics. In Section <ref>, we highlight the importance of multi-height observations and use local helioseismology techniques, such as time-distance analysis, to explore the horizontal properties of AGWs. The conclusion follows in Section <ref>.

§ OBSERVATIONS

Table 1. Observed Spectral Line Parameters.

                          IBIS                                           SDO - HMI/AIA
                          Fe1 7090       K1 7699         Fe1 5434        Fe1 6173       1700          1600
 Cadence [s]              11.88          11.88           11.88           12.0           12.0          12.0
 g_eff                    0.0            1.33            0.0             2.5            --            --
 Formation Height [km]    200-250^1,2    450-650^3,4,5   500-650^6,7     100-150^8,9    360±325^10    430±185^10

References: (1) <cit.>; (2) <cit.>; (3) <cit.>; (4) <cit.>; (5) <cit.>; (6) <cit.>; (7) <cit.>; (8) <cit.>; (9) <cit.>; (10) <cit.>. The formation heights listed for IBIS are average quiet Sun estimates based on the formation of the velocity signal in the line core above the base of the photosphere.
We observed a quiet Sun region near disk center (110, 21) for 2.75 hours starting at 14:15:11 UT on 25 April 2019 using the Interferometric BIdimensional Spectrometer <cit.> that was installed at the Dunn Solar Telescope (DST) in Sunspot, New Mexico along with the DST's high-order adaptive optics system <cit.>. IBIS sampled the absorption line profiles of Fe1 7090 Å, K1 7699 Å, and Fe1 5434 Å with a spatial sampling of 0096 per pixel and an overall time cadence of 11.88 s. The time delay between the sequential sampling of the three line profiles is 0.0, 3.26, and 7.24 s, respectively. In addition to the narrowband data, IBIS simultaneously recorded whitelight images centered on 7200 Å <cit.>. The Rapid Oscillations in the Solar Atmosphere instrument <cit.> also ran simultaneously, but we do not use those datasets in this paper. To complement our ground based observations, we use space based data from the Helioseismic and Magnetic Imager <cit.> and the lower ultraviolet Atmospheric Imaging Assembly <cit.> passbands on board the Solar Dynamics Observatory <cit.>. We use HMI's Dopplergram (Fe1 6173 Å) to provide an independent measure of the line of sight velocities in the lower photosphere and the line of sight magnetogram for information on the lower photospheric magnetic field. The AIA 1600 Å and AIA 1700 Å intensity sequences are used to probe the upper photosphere and low chromosphere. Because these spectral lines are well studied, their estimated formation heights are relatively well known <cit.> and can be used to place observational constraints on our IBIS data products. The SDO data products have initially been interpolated to have a spatial sampling of 06 per pixel and time cadence of 12.0 s following Rob Rutten's SDO alignment IDL package [Rob Rutten's IDL software to co-align SDO image sequences can be https://robrutten.nl/Recipes_IDL.htmlfound here.]. An overview of our observed spectral lines can be found in Table <ref>. From here onward, we will drop the angstrom designation when discussing the wavelengths used in our study. §.§ Data Reduction Standard flatfielding, gain, and dark calibrations were applied to both the whiteband and narrowband IBIS channels. Additional corrections for any time-dependent shifts found between the IBIS channels, prefilter transmission curves, and Fourier filtering small-scale interference fringe patterns were implemented. The spatially dependent systematic wavelength blueshift caused by the collimated mounting of the Fabry-Pérot spectrometer was also corrected. We used the nearest in time and co-spatial HMI continuum intensity images to co-align the IBIS whitelight channel and used grid images to remove optical distortions. This process corrected for any residual image motion and distortion caused by atmospheric seeing and optics without removing solar flows. We used the whitelight channel as a reference to map the narrowband, HMI, and AIA data to the same image geometry. §.§ Data Properties We compute line core Doppler velocities and line minimum intensities for each spectral line scan. We mapped the spectral data onto an evenly spaced wavelength grid and derived these data products by fitting a 2nd order polynomial to 5 points around the line minimum. The Doppler velocities were then converted into physical units of km s^-1. Simultaneous snapshots of the derived line core Doppler velocities and line minimum intensities can be seen in the top and bottom row of Fig. <ref>, respectively. 
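The line-core fit is straightforward to reproduce: a second-order polynomial through the five wavelength points bracketing the line minimum yields the line minimum intensity and, from the shift of the parabola vertex relative to a reference wavelength, the Doppler velocity. A minimal NumPy sketch with a synthetic profile follows (the wavelength grid, line depth, and reference wavelength are placeholders):

```python
import numpy as np

C_KMS = 2.998e5   # speed of light [km/s]

def line_core_fit(wave, profile, ref_wave):
    """Fit a parabola to the 5 points around the line minimum; return the line
    minimum intensity and the line-of-sight Doppler velocity [km/s]."""
    i0 = np.argmin(profile)
    sel = slice(max(i0 - 2, 0), min(i0 + 3, len(wave)))   # 5 points around the minimum
    a, b, c = np.polyfit(wave[sel], profile[sel], 2)      # 2nd-order polynomial
    wave_min = -b / (2.0 * a)                             # vertex of the parabola
    i_min = np.polyval([a, b, c], wave_min)
    v_dopp = C_KMS * (wave_min - ref_wave) / ref_wave
    return i_min, v_dopp

# Synthetic Gaussian absorption profile near Fe1 5434 (placeholder values):
wave = 5434.5 + np.linspace(-0.3, 0.3, 25)
profile = 1.0 - 0.7 * np.exp(-((wave - 5434.52) ** 2) / (2 * 0.05 ** 2))
print(line_core_fit(wave, profile, ref_wave=5434.5))      # roughly a 1 km/s redshift
```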
The comparability of our observables can be demonstrated by computing velocity amplitude spectral densities as shown in Figure <ref>. We see similarities in the overall shape of the velocity amplitude profiles between our IBIS line core Doppler velocities and HMI's Dopplergram (Fe1 6173). We find a good match between the lower photospheric data given by HMI (red) and Fe1 7090 (gray) and the upper photospheric data given by K1 7699 (black) and Fe1 5434 (blue) in the evanescent regime near the f-mode and p-modes. At large frequencies (5-10 mHz), our IBIS lines show strong velocity signals, falling off slower than the HMI Dopplergram. In this same regime, we find that the upper photospheric Fe1 5434 attains the largest amplitude while the lower photospheric HMI signal has the smallest amplitude, which is expected given their relative formation heights and the upward decrease in densities (see Table <ref>). At low frequencies within the AGW regime, we see the opposite behavior with HMI having the highest amplitude. This might be due to the fact that Fe1 6173 is more convectively dominated as it forms lower in the photosphere. To facilitate direct comparisons between our ground and space based observations and reduce computational time, we rebinned the IBIS data to match the spatial sampling of HMI and AIA (06 per pixel). All SDO data products were ultimately interpolated to have the same cadence as IBIS. Prior to interpolation, our IBIS dataset had a Nyquist frequency of 42.1 mHz, frequency resolution of 101 µ Hz, Nyquist wavenumber of 45.8 Mm^-1, and wavenumber resolution of 0.09 Mm^-1. Afterward, the dataset had a Nyquist wavenumber of 7.3 Mm^-1 and wavenumber resolution of 0.09 Mm^-1. As AGWs have typical horizontal wavenumbers between 2-4 Mm^-1, the rebinning process preserved the spatial scale at which we can resolve them. We also have sufficient frequency resolution to resolve these long-period oscillations. We note no quantitative differences in the results of the overall wave behavior when comparing our spatially interpolated IBIS dataset to the original as well as when we temporally interpolate our IBIS dataset to match the original cadence of HMI. Our analysis accounts for the time delay caused by the sequential sampling of the different spectral line cores. Based on the properties of Fourier transforms, the phase difference of the time lag is a linear function of frequency, so it can be added to the final azimuthally averaged phase difference spectra instead of interpolating all the time series onto one common temporal grid. § WAVE ANALYSIS We use Fourier analysis on the line core Doppler velocity (V) and line minimum intensity (I) time series to study AGWs in detail. The three-dimensional fast Fourier Transform (FFT) algorithm is used to compute phase difference and coherence spectra between combinations of spectral lines as shown in <cit.>. The coherence ranges between zero and one, where phase differences associated with a high coherence value are considered significant. Prior to computing the FFT, we applied a running time difference on the high cadence datasets, where each image in the data cube is subtracted from the successive image, in order to remove any stationary signals. We tested various detrending and filtering methods, and the results are all quantitatively similar within the uncertainties. The Fourier products are azimuthally averaged in the k_ x-k_ y plane and illustrated on a horizontal wavenumber-frequency (k_ h-ν) diagram (where ν = ω / 2π). 
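The essential steps (3-D FFTs of two co-aligned cubes, a cross-spectrum whose argument gives the phase difference, and an azimuthal average over annuli in the k_x-k_y plane, which also supplies the averaging needed for a meaningful coherence) can be sketched as follows; detrending, apodization, and the time-lag correction are omitted, and the sign convention is a choice.

```python
import numpy as np

def khnu_phase_coherence(cube1, cube2, dx, dt, nbins=60):
    """cube1, cube2: (nt, ny, nx) co-aligned time series (e.g., Doppler velocity).
    dx: spatial sampling [Mm], dt: cadence [s]. Returns frequency [Hz], k_h bin
    centers, azimuthally averaged phase difference [deg], and coherence."""
    nt, ny, nx = cube1.shape
    F1, F2 = np.fft.fftn(cube1), np.fft.fftn(cube2)
    cross = F2 * np.conj(F1)                   # sign convention: phase of cube2 minus cube1
    p1, p2 = np.abs(F1) ** 2, np.abs(F2) ** 2

    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)  # [rad Mm^-1]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kh = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    nu = np.fft.fftfreq(nt, d=dt)              # [Hz]

    kh_edges = np.linspace(0.0, kh.max(), nbins + 1)
    phase = np.zeros((nt, nbins))
    coh = np.zeros((nt, nbins))
    for i in range(nbins):
        ring = (kh >= kh_edges[i]) & (kh < kh_edges[i + 1])
        if not ring.any():
            continue
        c = cross[:, ring].mean(axis=1)        # azimuthal average of the cross-spectrum
        phase[:, i] = np.degrees(np.angle(c))
        coh[:, i] = np.abs(c) ** 2 / (p1[:, ring].mean(axis=1) * p2[:, ring].mean(axis=1))
    return nu, 0.5 * (kh_edges[:-1] + kh_edges[1:]), phase, coh
```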
The wave behavior of the acoustic-gravity wave spectrum is differentiated by solving the local dispersion equation for a compressible, gravitationally stratified isothermal medium, which is represented by k_ z^2 = (ω^2 - ω_ ac^2)/c_ s^2 - (ω^2 - N^2)k_ h^2/ω^2, where k_ h is the horizontal wavenumber (k_ h^2 = k_ x^2 + k_ y^2), ω is the angular frequency, c_ s is the photospheric sound speed, ω_ ac is the acoustic cutoff frequency, and N is the Brunt-Väisälä (buoyancy) frequency. The dispersion equation separates oscillatory behavior into three distinct wave regimes: propagating acoustic waves (k^2_ z≥ 0) at large frequencies and large horizontal wavenumbers; evanescent or standing waves (k^2_ z≤ 0); and propagating AGWs (k^2_ z≥ 0) at small frequencies and modest horizontal wavenumbers. We discriminate between upward propagating AGWs and acoustic waves by their contrasting phase properties. AGWs are typically associated with a negative phase difference (as a result of their orthogonal group and phase velocities). Thus, an AGW carrying energy upwards throughout the solar atmosphere will be observed with a negative phase difference <cit.>. The phase difference spectrum in Fig. <ref> clearly shows this defining characteristic where AGWs are displayed with negative phase differences seen in orange. As there exists a parallel relationship between the phase and group velocities of acoustic waves, a propagating acoustic wave carrying energy upward will display a positive phase difference. The following regions of interest that are mentioned throughout this paper are labeled in Fig. <ref>: A for propagating AGWs; B for the wedge of evanescent waves under 2 mHz and horizontal wavenumbers between 0 - 1.6 Mm^-1; C for propagating acoustic waves; E for evanescent waves; and F for the f-mode. §.§ Phase Difference and Coherence Spectra We compute velocity-velocity (V - V), intensity-intensity (I - I), and intensity-velocity (I - V) phase difference and coherence spectra for combinations of simultaneously observed spectral diagnostics. These phase relations allow us to probe the behavior and vertical propagation of AGWs throughout the lower solar atmosphere. The first listed spectral diagnostic in the titles of the V - V and I - I phase spectra indicates the typically higher forming line. The computations were carried out in such a way as to highlight the characteristic signature of AGWs described in Section <ref>. §.§.§ Velocity - Velocity Phase Difference Spectra We examine the propagation of AGWs using V - V phase difference spectra as shown in Fig. <ref>, which displays the phase lag between the measured line core velocity fluctuations for combinations of IBIS spectral diagnostics.The V - V phase difference spectra for IBIS - HMI (Fe1 6173) combinations are provided in the appendix in Fig. <ref>. The phase difference spectra (top row) and corresponding magnitude-squared coherence spectra (bottom row) are displayed in order of increasing formation height difference based on the measured separations calculated in Table <ref>. Following from the distinctive propagation properties in the acoustic-gravity wave spectrum, acoustic waves (Region C) propagating upward through the atmosphere will show up with a positive phase difference (blue) while propagating AGWs (Region A) carrying energy upward will show up with a negative phase difference (orange). In Region A, we clearly detect the signature of propagating AGWs carrying energy upwards with phase differences as much as -20. 
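For orientation, the boundaries between these regions follow directly from the sign of k_z^2 in the dispersion relation above; a short sketch using the photospheric values adopted later in this paper (c_s = 7.0 km s^-1, ν_ac = 5.4 mHz, N = 4.9 mHz):

```python
import numpy as np

CS = 7.0e-3                   # photospheric sound speed [Mm/s]
OM_AC = 2 * np.pi * 5.4e-3    # acoustic cutoff frequency [rad/s]
N_BV = 2 * np.pi * 4.9e-3     # Brunt-Vaisala frequency [rad/s]

def kz_squared(kh, nu):
    """k_z^2 [Mm^-2] from the isothermal dispersion relation; kh in Mm^-1, nu in Hz."""
    om = 2 * np.pi * nu
    return (om**2 - OM_AC**2) / CS**2 - (om**2 - N_BV**2) * kh**2 / om**2

def regime(kh, nu):
    if kz_squared(kh, nu) <= 0:
        return "evanescent (Region E)"
    return "acoustic (Region C)" if 2 * np.pi * nu > OM_AC else "gravity (Region A)"

# An AGW at kh = 3 Mm^-1 and 2 mHz propagates; the same kh at 4 mHz is evanescent:
print(regime(3.0, 2.0e-3), "|", regime(3.0, 4.0e-3))
```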
Their observable signature at low temporal frequencies (≲ 4.5 mHz) is prevalent for all horizontal wavenumbers with high coherence values. We see varied phase difference distributions in Region A for all IBIS combinations. In the Fe1 5434 - K1 7699 pair, Region A shows a propagating AGW signal confined to a positive slope of some width spanning all horizontal wavenumbers for increasing frequencies with relatively high coherence values. Between 2-4 Mm^-1, we see a concentration of slightly larger negative phase differences exceeding -20. Below this signature, the phase differences are essentially 0. In contrast, the K1 7699 - Fe1 7090 combination shows a comparable AGW signature at lower frequencies. At higher frequencies, the phase differences are positive, which is indicative of propagating AGWs carrying energy downwards. This phase difference distribution mirrors that seen in Fe1 5434 - K1 7699. While we detect similar overall wave behavior in Region A, we note slight differences present in the phase difference distribution between the IBIS - IBIS and IBIS - HMI combinations. IBIS - HMI combinations appear to have slightly larger negative phase differences. These phase difference distributions take on a defined oval shape for frequencies below 3 mHz and horizontal wavenumbers between 1-6 Mm^-1. This is in contrast to that seen in the aforementioned IBIS line combinations. However, we note that the IBIS - IBIS combinations that include Fe1 7090 look comparable to the IBIS - HMI combinations when Fe1 6173 is substituted. This behavior seems not to be related to the measured separation heights as we see this for line combinations with large and small measured formation height differences (Fe1 5434 - Fe1 7090 versus Fe1 7090 - Fe1 6173). Phase analysis of our multi-height observations also provide additional insight into the acoustic-gravity wave spectrum as a whole. While we detect increasing positive phase differences with increasing formation height differences for propagating acoustic waves in Region C, this behavior does not hold for AGWs in Region A. The coherence in Region C also varies more drastically than in Region A: it decreases with increasing horizontal wavenumber and decreases rapidly with increasing frequency dropping below 0.5 at 8-9 mHz. In general, we see phase differences close to 0 in Region E, which are expected for non-propagating evanescent waves. However, in the IBIS - IBIS combinations involving Fe1 7090 as well as the IBIS - HMI combinations with large measured formation height differences, a cluster of negative phase differences is clearly present in Region B with significantly large coherence values. These negative phase difference values are comparable with the AGW signature. §.§.§ Intensity - Intensity Phase Difference Spectra We examine the intensity perturbations produced by propagating AGWs using I - I phase difference spectra. We show phase difference spectra between IBIS - IBIS combinations in Fig. <ref>. AIA - AIA and AIA - IBIS combinations are provided in the appendix in Fig. <ref>. The labeled regions are consistent with the previous section, but these plots cover a greater range of phase differences. Because we are sampling the line minimum intensities that were used to derive the line core velocities, we might theoretically expect to detect similar phase difference information. However, the intensity signal is difficult to interpret as cleanly as velocity due to the fact that it is a complex agglomeration of density, temperature, and opacity effects. 
Radiative damping effects also need to be considered when analyzing these phase differences. In Region A, we detect significant negative phase differences reaching -40 congregated at horizontal wavenumbers 1-4 Mm^-1 for all height combinations. In contrast to the coherence spectra shown in Fig. <ref>, the overall coherence is lower and decreases rapidly with increasing horizontal wavenumber and frequency. This behavior also holds for the AIA - IBIS combinations. Within Region A, we notice variations and similarities present in the phase difference distribution among IBIS - IBIS and AIA - IBIS combinations. Among our IBIS diagnostics, K1 7699 - Fe1 7090 shows overall smaller negative phase differences around -15 mainly at higher frequencies. This is comparable to the behavior captured in the AIA 1600 - AIA 1700 pair, albeit with larger coherence values. A striking phase difference distribution can be seen in the AIA - K1 7699 combinations, which is not found in any of the other diagnostic combinations. This negative phase difference distribution is restricted to frequencies smaller than 2 mHz and horizontal wavenumbers 1-3 Mm^-1. Additional differences are visible when compared to Fig. <ref>. Within Region E, we detect mainly positive phase differences with large coherence values which might be due to radiative damping effects. Some diagnostic combinations display significant positive phase differences within Region F in addition to well-defined pseudo p-mode ridges within Region C. These pseudo p-mode ridges show increasing positive phase differences with increasing separation height and are well outlined in the coherence maps, but the coherence decreases rapidly with increasing frequency. However, these features are not present in all the line combinations, such as K1 7699 - Fe1 7090 or AIA 1600 - AIA 1700. Additionally, K1 7699 - Fe1 7090 displays negative phase differences underneath Region F with relatively large coherence values comparable to the negative phase differences found in Region A. We note that the pseudo p-mode ridges are not visible within Region C for these mentioned line combinations even when truncating the color bar. §.§.§ Intensity - Velocity Phase Difference Spectra We present the phase lag between the derived IBIS line core intensity and velocity fluctuations in the I - V phase difference and magnitude-squared coherence spectra in Fig. <ref>. For comparison, we also provide the HMI continuum and Dopplergram I - V phase difference and magnitude-squared coherence spectrum in the appendix in Fig. <ref>. I - V phase difference spectra hold the underlying assumption that the signals form at the same relative height in the atmosphere which might not be accurate. The phase difference spectra show distinct phase regimes with varying coherence levels corresponding to different wave behavior. Both ground and space based diagnostics display the same overall wave behavior in the different wave regimes: negative phase differences within Region A, Region C, Region E, and Region F; and positive phase differences within Region B. However, the HMI diagnostics display overall smaller negative phase differences for Region A and C, and the pseudo p-mode ridges present from Region E to C are more defined. While we see little to no change in Region B between our IBIS diagnostics, this region appears to have larger phase differences in the HMI combination. The largest negative phase differences for all diagnostics appear to be associated with Region F. 
In contrast to the large coherence values present in Region A in the HMI diagnostics, the overall coherence is low for all IBIS diagnostics. The largest coherence values can be attributed to Regions C, E, and F. Within Region A, the coherence levels appear to decrease with increasing average formation height. In contrast to this behavior, the coherence in Regions E and F appears to increase with average formation height. The coherence attributed to the pseudo p-mode ridges within Region C decreases rapidly with increasing frequency. We also see that the coherence levels present in Region B are low.

§.§ Estimated Separation Heights Between Spectral Diagnostics

Table: Estimated formation height differences found between our line pairs

  Line Pair                 | Δ ϕ (°)      | Δ z (km)
  Δ ϕ (V - V)
    Fe1 7090 - HMI          | 5.9 ± 5.2    | 25.6 ± 22.7
    Fe1 5434 - K1 7699      | 22.3 ± 6.3   | 94.9 ± 26.4
    K1 7699 - Fe1 7090      | 38.7 ± 6.3   | 167.9 ± 44.0
    K1 7699 - HMI           | 45.8 ± 7.2   | 199.0 ± 50.6
    Fe1 5434 - Fe1 7090     | 56.5 ± 9.3   | 247.1 ± 68.7
    Fe1 5434 - HMI          | 61.1 ± 9.9   | 269.5 ± 80.2
  Δ ϕ (I - I)
    AIA 1600 - AIA 1700     | 14.5 ± 3.8   | 65.6 ± 28.1
    K1 7699 - Fe1 7090      | 16.6 ± 6.1   | 76.3 ± 40.7
    Fe1 5434 - AIA 1600     | 19.7 ± 6.4   | 85.8 ± 32.1
    Fe1 5434 - AIA 1700     | 37.9 ± 7.9   | 166.4 ± 52.7
    AIA 1700 - K1 7699      | 38.2 ± 9.3   | 168.7 ± 61.8
    AIA 1700 - Fe1 7090     | 39.7 ± 8.2   | 179.6 ± 72.5
    AIA 1600 - Fe1 7090     | 43.7 ± 13.3  | 205.1 ± 106.5
    AIA 1600 - K1 7699      | 45.3 ± 11.0  | 206.7 ± 91.4
    Fe1 5434 - K1 7699      | 50.6 ± 8.0   | 227.5 ± 86.2
    Fe1 5434 - Fe1 7090     | 72.1 ± 10.6  | 323.6 ± 116.3

  Note: The measured phase differences (Δ ϕ) are averages found in the propagating acoustic wave regime above the acoustic cutoff frequency within the range of 6-9 mHz and 1-3 Mm^-1. The estimated separation in formation height (Δ z) is calculated using Eqn. <ref>. The total phase speed for this frequency and horizontal wavenumber range is 11.6 km s^-1.

To interpret the observed phase differences, we need to understand what atmospheric regions our spectral lines might sample. While we can use established quiet Sun average formation heights (see Table <ref>), the solar atmosphere is highly corrugated, and these values are only derived from the respective line core velocity signal. The line core velocity and line minimum intensity signal might not sample the same atmospheric region, and these values would not be applicable to regions of different magnetic field strengths. For consistency, we will apply the same technique to our intensity spectral diagnostics with the aim of better understanding them. However, given the complexity of the intensity signal, these values might not accurately reflect their sampled formation height differences. We measure the estimated separation in formation height (Δ z) using the observed phase differences (Δ ϕ) present in the propagating acoustic wave regime (Region C) within the range 6-9 mHz and 1-3 Mm^-1. This range was selected to encompass the majority of the significant positive phase differences present within Region C that have relatively large coherence values. The estimated values seen in Table <ref> are calculated using

  Δ z = v_p,z Δ ϕ / (2 π ν),

where v_p,z is the vertical phase speed and ν is the cyclic frequency. The vertical phase speed is

  v_p,z = ω / k_z ≈ c_s / √(1 - ω_ac^2 / ω^2).

When the term involving the horizontal wavenumber is small in Eqn. <ref>, we get the approximate expression in Eqn. <ref>, which shows that the vertical phase speed can be much larger than the sound speed.
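For concreteness, the short Python sketch below evaluates Δ z = v_p,z Δ ϕ/(2πν) and the approximate vertical phase speed for a single measured phase lag. This is a minimal sketch: the 11.6 km s^-1 mean phase speed is the value quoted for the table, while the particular frequency, cutoff, and sound-speed numbers are representative photospheric values adopted in the text and are used here only for illustration.

```python
import numpy as np

def v_pz_approx(nu_hz, nu_ac=5.4e-3, c_s=7.0e3):
    """Approximate vertical phase speed c_s / sqrt(1 - (nu_ac/nu)^2), in m/s.
    nu_ac and c_s default to the photospheric values adopted in the text."""
    return c_s / np.sqrt(1.0 - (nu_ac / nu_hz) ** 2)

def delta_z(delta_phi_deg, nu_hz, v_pz=11.6e3):
    """Formation-height separation Delta z = v_pz * Delta phi / (2 pi nu).
    delta_phi_deg is in degrees; v_pz defaults to the 11.6 km/s average
    quoted for the 6-9 mHz, 1-3 Mm^-1 window used for the table."""
    return v_pz * np.deg2rad(delta_phi_deg) / (2.0 * np.pi * nu_hz)

# Illustrative check near the middle of the 6-9 mHz band: a ~22 deg phase lag
# gives ~96 km, comparable to the Fe1 5434 - K1 7699 (V - V) table entry.
print(delta_z(22.3, 7.5e-3) / 1e3)   # ~95.8 km
print(v_pz_approx(7.5e-3) / 1e3)     # ~10 km/s, of order the quoted 11.6 km/s
```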
Over the region in which we measure the phase differences, the vertical phase speed averages about 11.6 km s^-1. We assume an acoustic cutoff frequency of 5.4 mHz, a Brunt-Väisälä frequency of 4.9 mHz, and a photospheric sound speed of 7.0 km s^-1. We do not measure the separation in formation height using phase differences present within the AGW domain (Region A) as it is not immediately clear how to do so. There is greater uncertainty and more unknown variables (such as magnetic field topology and radiative damping) to take into account than there are for propagating acoustic waves. As the HMI Fe1 6173 line and ultraviolet AIA continuum channels are well-studied spectral diagnostics, we can use them as references to constrain the estimated formation heights of our sampled IBIS fluctuations. Based on Table <ref>, the HMI Fe1 6173 represents the lowest photospheric velocity diagnostic while Fe1 5434 represents the highest photospheric velocity diagnostic. From our analysis, we find that Fe1 7090 forms in the photosphere slightly above HMI while K1 7699 forms closer to Fe1 5434. In terms of the observed line minimum intensity fluctuations, Fe1 5434 appears to sample a higher atmospheric region than the two AIA channels by about 16-125 km. In addition, the K1 7699 line minimum intensity appears to sample a lower atmospheric height than its velocity counterpart. Table <ref> shows that between the Fe1 5434 - K1 7699 pair, Δ z ≃ 95 km between their velocity signals and Δ z ≃ 228 km between their intensity signals. These Δ z values represent only an approximation and might not necessarily align with the differences between the established literature values. While we adopt commonly used values for the photosphere, variations in the acoustic cutoff frequency <cit.> and the photospheric sound speed exist throughout the solar atmosphere which would alter Δ z. By using synthetically generated velocity spectral maps of the magnetically insensitive Fe1 5434 and Fe1 5576 lines, <cit.> show that the AGW signature is only reliable up to a height separation of 400 km in the photosphere and about 100-200 km in the chromosphere. In other words, they find that the oblique propagation of AGWs implies that small height separations between diagnostics are necessary to obtain large coherence values. As our observables fall within this expected height difference range, we expect them to show high coherence levels. We find relatively high coherence for the V - V and I - I phase difference spectra but not for the I - V phase difference spectra which is not well understood. The I - V coherence spectra show anomalous behavior that warrants further analysis. § DISCUSSION §.§ Noteworthy Features In addition to propagating AGWs, phase analysis of our multi-height observations demonstrates several noteworthy features present: (1) a varying distribution of observed negative phase differences within Region A; (2) unexpected negative phase differences present in Region B; (3) no height dependence for the phase differences in Region A; (4) distinct differences among the V - V and I - I phase difference spectra; and (5) significant phase differences in Region F and pseudo p-mode ridges in Region C. These observations also highlight the need to disentangle how to interpret the intensity diagnostics. The IBIS V - V phase difference spectra (Fig. <ref>) showcase a strong variation in the distribution of negative phase differences present within Region A, which are associated with propagating AGWs carrying energy upwards. 
To the best of our knowledge, we have not seen such varied phase difference distributions as that seen in Fe1 5434 - K1 7699 (Δ z ≃ 95 km) and K1 7699 - Fe1 7090 (Δ z ≃ 168 km) previously documented. Visually, these combinations appear to be mirror images of one another. For these combinations, the propagating AGW signal is confined along a slope of some spatial scale width for all frequencies. Positive phase differences are also present in Region A, which might be an indicator of reflected AGWs carrying energy downward. Given how infrequently studied AGWs are in addition to the complexity of their modeled wave behavior, this varying distribution warrants further attention. The aforementioned combinations also show comparable negative phase differences to AGWs in Region B, which is typically uncharacteristic of evanescent waves. This feature seems to be evident in a couple of IBIS - HMI combinations as well as several I - I combinations, mainly involving the upper photospheric IBIS diagnostics. In contrast, Region B shows positive phase differences in all of our I - V phase difference spectra, the opposite phase relationship to the neighboring wave regimes, albeit with a coherence of nearly 0. This distinctive region has been previously reported in I - V phase difference spectra <cit.>. This feature is present in simulated V - V phase difference spectra with different magnetic field strength configurations by <cit.> and <cit.> and in a couple of other observed V - V phase difference spectra by <cit.> and <cit.>. The presence of this feature seems to not be related to the duration of the observed time series, not exclusive to the Fe1 7090 line core velocity signal, not related to magnetic field strength, and not associated with a specific height separation between the sampled velocity signals. While evanescent waves present within Region E are not expected to propagate vertically (Δ ϕ≃ 0), dissipative mechanisms such as radiative damping can influence wave dynamics <cit.>. Souffrin's acoustic-gravity wave theory <cit.> notes that when radiative damping in the solar atmosphere is taken into account, the acoustic-gravity wave spectrum is altered from the rigid boundaries that are drawn in our figures <cit.>. When accounting for radiative damping, waves can be broadly split into two categories: mainly progressive or mainly damped waves (which includes both AGWs and evanescent waves). For evanescent waves, the inclusion of radiative damping means that these waves are no longer purely stationary <cit.>. In their analysis of I - V phase difference spectra, <cit.> suggested that these distinctive phase differences are indicative of downward propagating evanescent waves that are produced by the scattering of resonant p-modes, which <cit.> confirmed after analyzing temperature gradient and opacity changes within the atmosphere. Additionally, we find no obvious relationship between phase differences and separation height in Region A as that seen in Region C. While the positive phase differences within Region C increase with increasing height separation between the diagnostics which is expected for acoustic waves, Region A does not show a similar trend. While there seems to be a correlation between increasing separation height and phase differences present in the IBIS intensity diagnostics, this breaks down in the AIA - IBIS combinations. Such a trend might not be apparent due to radiative damping effects, the magnetic field strength, or even how we choose to study AGWs. 
Phase difference spectra mainly sample the vertical phase differences. As AGWs are known to have large horizontal velocities <cit.>, much of their motion might not show up in our analysis. Future work is necessary to understand the observational effects of radiative damping on AGWs. We detect differences in the distribution and magnitude of phase differences present between our I - I and V - V phase difference spectra. On average, we find larger negative phase differences as well as a more spread out distribution present in Region A in our I - I diagnostic combinations. Only in AIA - AIA and K1 7699 - Fe1 7090 do we find smaller phase differences. Additionally, all AIA - K1 7699 phase difference spectra show a restricted distribution of negative phase differences below 2 mHz in Region A, which is not evident in any other diagnostic combinations. Propagating AGWs are believed to leave observable intensity signatures, which might theoretically show up as negative phase differences in I - I phase difference spectra. <cit.> demonstrated that AGWs can have significantly larger temperature amplitudes which increase steeply with height towards their wave breaking heights in the chromosphere, where they dissipate into small-scale turbulence. It might be that these phase difference spectra are indicative of propagating AGWs that perturb atmospheric regions that either intensify or filter their intensity signal. However, there is uncertainty regarding the intensity as just a proxy for temperature because it is heavily influenced by opacity, temperature, and density fluctuations. The varying distribution seen in the AIA - K1 7699 combinations might be partly attributed to the radiative transfer properties of the spectral diagnostics. The AIA 1600 channel is dominated by continuum emission and C4 while the AIA 1700 channel only contains continuum emission <cit.>. <cit.> provides a discussion on how the atmospheric parameters encoded within the K1 7699 line core are sensitive to different layers of the atmosphere. The line minimum intensity seems to sample a lower photospheric height, around where inverse granulation occurs, than its line core Doppler velocity, which appears to be sensitive to the velocity fields in the upper photosphere. We can see similarities between the line core velocity maps of K1 7699 and Fe1 5434, while the line minimum intensity maps of K1 7699 and Fe1 7090 look similar. The measured separation heights noted in Table <ref> between K1 7699 and Fe1 7090 also affirm this, as Δ z ≃ 168 km between velocity fluctuations and Δ z ≃ 76 km between intensity fluctuations. However, the discrepancy seen within Region A between the AIA - Fe1 7090 and AIA - K1 7699 combinations indicates some underlying radiative transfer properties or wave propagation effects that are not fully understood. The intensity phase difference spectra also feature significantly large phase differences in Region F and pseudo p-mode ridges in Region C with high coherence values. We note that this behavior is not detected in the K1 7699 - Fe1 7090 combination, nor is it an effect of the truncation of the color scale. Within this particular line combination, we detect negative phase differences below Region F. The behavior present in Region F associated with the f-mode is not fully understood.
While radiative damping might be responsible for the positive phase differences seen in Region E in intensity <cit.>, the f-mode is an incompressible wave that only propagates horizontally and should not be affected by radiative damping <cit.>. <cit.> do not observe any phase shifts for the f-mode in the phase difference spectra between the lower chromospheric-photospheric broadband pair G-band and Ca2 H. On the other hand, <cit.> find a strong signature associated with the f-mode's vertical energy flux. The pseudo p-mode ridges at high frequencies have been previously detected in intensity phase difference spectra <cit.>. It is widely believed that the presence and location of this feature is the result of source resonance caused by the interference of upward and downward propagating acoustic waves and the correlated background noise which makes them more prominent in intensity <cit.>. Additional work needs to be done to understand not only the complexity of the acoustic-gravity wave spectrum but also the intensity signal itself. This highlights the importance of conducting more multi-height observational studies. §.§ Comparison with other AGW Observations, Simulations, and Theory We report that the heights, spatial scales, and temporal frequencies at which we detect propagating AGWs are consistent with simulations, theory, and previous observations. Direct comparisons to simulations and previous observations may be difficult given the wide range of spectral lines used (different spectral properties), the different methods used to measure velocity fluctuations (which can influence the height the fluctuations sample), and the way the AGW signature is presented visually (truncation of the color scale or color map used); nonetheless, we can discuss the overall similarities present. We begin comparisons by focusing on the V - V phase difference spectra as it is the diagnostic most commonly employed to study AGWs. The phase differences present in Region A (Δ ϕ ≃ -20) are in line with most previous observations. We know of two studies where larger negative phase differences have been detected: the Fe1 5434 - Fe1 5576 phase difference spectrum in <cit.> (t = 29.4 min; Δ z = 190 km) shows values of -40 at 2.5 mHz, and the Mg b 2 - Ni1 6768 spectrum in <cit.> (t = 12 hr; Δ z = 600 km) shows values greater than -100. Smaller phase differences (Δ ϕ ≃ -10) with large coherence have also been identified in a 45 s cadence HMI dataset by <cit.> (t = 6.4 hr). <cit.> generated multi-height velocity diagnostics using HMI's Fe1 6173 line to create the HMI-algorithm derived Dopplergram (z ≃ 100 km), line core Dopplergram (z ≃ 150 km), and average-wing Dopplergram (z ≃ 80 km). The identification of propagating AGWs in the lower photosphere using HMI allows us to confidently assume that is what we see in Region A of our IBIS - HMI combinations. Numerical simulations by <cit.> and <cit.> indicate that the magnetic field modifies the behavior of AGWs in the upper photosphere while AGWs generated in the lower photosphere should not be affected. Using the HMI line of sight magnetogram, we measure an unsigned magnetic field RMS value of 24.6 G and median value of 4.2 G. The magnetogram also indicates magnetically concentrated areas between -738 G to 687 G. Our disk center observations are consistent with the 0 G and 10 G vertical magnetic field models studied by <cit.>. 
When comparing the phase difference spectra to similar heights within <cit.>, we see slight similarities in the phase difference distributions present in Region A, in particular for the upper photospheric pair. While not shown in the paper, we checked for the effects of the lower photospheric quiet Sun magnetic field using the HMI line of sight magnetogram. We computed the phase difference spectra for pixels above and below the median magnetic field value. We did not detect a change in the overall phase difference distribution of Region A using this method for any line combinations. From this, we infer that at quiet Sun disk center, the lower photospheric magnetic field does not significantly affect the propagation of AGWs, which is in line with that reported in <cit.>. However, we still cannot account for the wave behavior seen in some of our figures and lack upper photospheric magnetic field information. Intensity signatures of AGWs have not been studied in as great detail as their velocity signatures; however, previous analysis of intensity oscillations has suggested their presence within the solar atmosphere. The presence of AGWs and interference with the intensity granulation pattern at mid-photospheric heights have been previously explored <cit.>. Studies by <cit.> (t = 11.87 hr) and <cit.> (t = 44 min) show the AGW signature at low frequencies with phase differences greater than -80 between the broadband G-Band and Ca2 H observations sampled using different ground and space based instruments. This tells us that the larger negative phase differences detected in our IBIS intensity combinations are not unusual; however, our values do not reach the phase differences reported in these studies. When comparing our AIA - AIA phase difference spectrum to that seen in the 22 s cadence ultraviolet upper photospheric and lower chromospheric 1600 and 1700 time series observed with the Transition Region and Coronal Explorer (t = 3.7 hr) shown in <cit.>, which should roughly sample the same heights (and therefore features), we observe roughly similar phase differences. This indicates that even interpolating the cadence to match the faster cadence of IBIS produces similar results. The negative phase differences present at the low temporal frequencies and horizontal wavenumbers within Region A in our I - V phase difference spectra have been previously attributed to AGWs <cit.>. In the mid-photosphere around 200-300 km, <cit.> (t = 4 hr) found phase differences around -30 corresponding to the spatial and temporal characteristics associated with AGWs. Observations by <cit.> looking at a 64 s cadence time series of Ca2 8542 and K1 7699 (t = 8 hr) found phase differences in Region A near -90 in the upper photosphere and up to -180 in the lower chromosphere. The theoretical framework for understanding phase relations between intensity and velocity for the acoustic-gravity wave spectrum with and without radiative damping was explored in works by <cit.>, <cit.>, and <cit.>. <cit.> showed that AGWs behave similarly to evanescent waves, where temperature and velocity perturbations are out of phase. For adiabatic waves, this phase difference will be -90. Radiative damping can increase the phase differences, resulting in waves at select frequencies and horizontal wavenumbers reaching values between -90 and -180. We find that on average, our phase difference spectra display the expected theoretical out of phase relationship for AGWs. 
However, we detect smaller than anticipated values for the different phase regimes even taking into account non-adiabatic propagation. We also do not see the linear relationship with height in Region A reported in <cit.>. In fact, it appears that the phase lag between intensity and velocity for K1 7699 shows smaller phase differences for AGWs than Fe1 7090. This indicates the complexity in interpreting the intensity signature. While our observations demonstrate qualitative agreement with previous observations, there are disagreements present when compared to the expected theoretical behavior of AGWs. Even when accounting for radiative damping using Souffrin's acoustic-gravity wave theory, we do not see similar results within Region A between our data and the expected theoretical phase difference spectrum seen in Fig. <ref> besides the same sign in phase differences. The theoretical phase difference spectrum in Fig. <ref> depicts a gradient with a saturated distribution of negative phase differences at small horizontal wavenumbers in Region A, which we clearly do not see in our observations even with increasing separation height between diagnostics. We also do not observationally detect such large negative phase differences in any spectral combinations. However, the theoretical modeling of the waves present in Region C lines up with what we expect to see. Our observational phase difference spectra show increasing positive phase differences with increasing separation height. Thus, these dissimilarities suggest that we need a new way to probe the behavior of AGWs in addition to the traditional k_h-ν phase difference spectra. This is explored in Section <ref>.

§.§ Time-distance analysis of AGWs

The phase difference spectra seen earlier (e.g., Fig. <ref>) provide information regarding the vertically propagating features of AGWs, since the maps at each height are co-spatial. However, AGWs have significant horizontal motions, resulting in strong perturbations to the vertical velocity field that can be tracked in the horizontal directions. This motivates a study into these motions using various techniques commonly employed in local helioseismology <cit.>. For what follows, it is useful to review some of the theory of acoustic-gravity waves in the isothermal, non-adiabatic (and non-magnetic) case <cit.>. A height-independent radiative damping time τ is introduced, and the dispersion relation is modified from Eqn. <ref> to

  k_z^2 = ± (1/2) [ a + √(a^2 + b^2) ],

where

  a = [(N^2 - ω^2)/ω^2] k_x^2 + (ω^2 - ω_ac^2)/c^2 - [1/(1 + ω^2 τ^2)] (N^2 k_x^2/ω^2 - N^2 ω^2/g^2),

  b = [ω τ/(1 + ω^2 τ^2)] (N^2 k_x^2/ω^2 - N^2 ω^2/g^2).

In these expressions, N is the Brunt-Väisälä frequency, with N^2 = (γ - 1) g^2/c^2. To understand the observations below, we consider a simple atmosphere with adiabatic exponent γ=5/3, sound speed c=7 km s^-1, acoustic cutoff frequency ω_ac=5.3 mHz, and gravitational acceleration g=274 m s^-2. This also yields N/2π ≈ 5 mHz. The negative solution to Eqn. <ref> describes AGWs. Eqns. <ref>-<ref> reduce to the adiabatic solution as τ→∞. We are interested in an observational time-distance diagram that represents the horizontal propagation of AGWs with time. In local helioseismology, this quantity is normally computed from the inverse Fourier transform of the velocity power spectrum. For multi-height Doppler observations, one could obtain this quantity by computing the inverse transform of the cross-spectrum, which <cit.> demonstrated.
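As a numerical reference for the damped dispersion relation above, the sketch below evaluates a, b, and k_z for one illustrative (ν, k_h, τ) using the atmosphere just specified. Two assumptions are made for this sketch: the frequencies quoted in the text are treated as cyclic values, and the negative (downward-phase) root of k_z is taken for AGWs, consistent with how the simulation described later uses the relation.

```python
import numpy as np

# Isothermal, non-magnetic atmosphere adopted in the text (SI units);
# the quoted frequencies are treated as cyclic values (an assumption).
gamma, c, g = 5.0 / 3.0, 7.0e3, 274.0
omega_ac = 2.0 * np.pi * 5.3e-3                  # acoustic cutoff [rad/s]
N2 = (gamma - 1.0) * g**2 / c**2                 # Brunt-Vaisala frequency squared

def kz2_nonadiabatic(nu, kx, tau, sign=+1):
    """k_z^2 from the radiatively damped dispersion relation above.
    nu: cyclic frequency [Hz]; kx: horizontal wavenumber [1/m];
    tau: radiative damping time [s]; sign selects the +/- branch."""
    w = 2.0 * np.pi * nu
    damp = 1.0 / (1.0 + w**2 * tau**2)
    grav = N2 * kx**2 / w**2 - N2 * w**2 / g**2
    a = (N2 - w**2) / w**2 * kx**2 + (w**2 - omega_ac**2) / c**2 - damp * grav
    b = w * tau * damp * grav
    return sign * 0.5 * (a + np.sqrt(a**2 + b**2))

# Illustrative AGW near the centre of the Fourier filter used below
nu, kx, tau = 1.2e-3, 2.0e-6, 60.0               # ~1.2 mHz, 2 Mm^-1, 60 s
kz = -np.sqrt(kz2_nonadiabatic(nu, kx, tau))     # downward-phase root for AGWs
print(kz * 1e6, 2.0 * np.pi * nu / abs(kz) / 1e3)  # ~ -2.9 Mm^-1, ~2.6 km/s
```

We now return to the observational time-distance diagram introduced above.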
A 2-D slice through this quantity at a constant time produces a ring shape in the x and y directions. This ring structure captures the propagation of wave signals horizontally and provides an average of the signal between any two points a certain distance apart. We computed the time-distance diagram in this fashion, but the result is rather noisy. Instead, we compute it by temporally cross-correlating a point taken from the Doppler map at the lower height (representing the lower-forming diagnostic) with the average of a concentric annulus of points from the map at the upper height. We perform this computation for a given annulus radius at every spatial pixel and average the results; we then repeat the procedure for a range of radii that fit within the boundaries dictated by our field of view. For each annulus radius, about 10,000 cross-correlations are averaged. This is a higher level of averaging than what is obtained in a standard time-distance diagram and results in a stronger signal. We isolate the low-frequency AGWs by applying a three-dimensional Gaussian filter in both frequency and wavenumber space to the data in Fourier space prior to the computations. To avoid contamination of the acoustic waves near the Lamb line (Region E) and the f-mode (Region F), the filter is tapered to zero well before it reaches these regions. The resulting filter is slightly non-Gaussian with a center-of-mass at ν=1.17 mHz and k_h=2.28 Mm^-1. We show in the left panel of Fig. <ref> a time-distance diagram computed using this method for the Fe1 7090 and Fe1 5434 line pair. We calculated V - V and I - I cross-correlations between all combinations of IBIS diagnostics. Similar results are found in all cases. The estimated height separation between these two maps is only about 250 km (see Table <ref>); therefore, the horizontal distance is approximately the total travel distance. These results demonstrate a strong signal emanating from the low-frequency AGW packets propagating at a speed of about 4.5 km s^-1, which is much slower than the local sound speed of ∼ 7 km s^-1. After around the 8 Mm mark, the signal becomes quite noisy, likely due to damping. The right panel of Fig. <ref> shows the phase difference computed between the central point of the lower height (Fe1 7090) and the annulus at the upper height (Fe1 5434) for each radius. There is signal only within the frequency bandpass dictated by the filter. At the zero distance mark, which corresponds to purely vertical motion between the two layers, we replicate the phase differences (Δ ϕ ≃ -15) seen in Fig. <ref> for this line pair combination at these approximate (ν, k_h) values, as expected. The figure shows several interesting features: (1) sign reversals of the phase differences at increasing horizontal travel distances with peak values of ±35°; (2) curvature of lines of constant phase difference at low frequency; and (3) a weakening of the signal at larger distances that is more rapid for the lower-frequency waves. The observed curvature of the lines at constant phase difference can be readily explained. At any given travel distance, the higher-frequency waves within the wave packet, which have larger phase and group speeds, reach the upper height more quickly and will carry their phase difference earlier than the lower-frequency waves. However, the sign reversals and the values at which they occur are more challenging to interpret. To explore these features, we consider a very simple simulation of propagating AGWs comparable to that of <cit.>.
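(Before turning to that simulation, the following Python sketch illustrates the annulus cross-correlation just described. The array layout, the FFT-based implementation, and the periodic treatment of the field of view are assumptions made for brevity, and the Fourier-space Gaussian filtering is assumed to have been applied to the input cubes beforehand.)

```python
import numpy as np

def annulus_mask(ny, nx, radius_pix, width=1.0):
    """Pixels lying in a thin annulus of the given radius around the origin,
    using wrap-around distances so the mask can serve as a convolution kernel."""
    y, x = np.indices((ny, nx))
    y = np.minimum(y, ny - y)
    x = np.minimum(x, nx - x)
    return np.abs(np.hypot(x, y) - radius_pix) < width

def time_distance(v_low, v_up, radii_pix):
    """Temporal cross-correlation of each lower-height pixel with the average
    over a concentric annulus at the upper height, averaged over all pixels.

    v_low, v_up : Fourier-filtered Doppler cubes of shape (nt, ny, nx).
    Returns an array of shape (n_radii, nt): the time-distance diagram.
    """
    nt, ny, nx = v_low.shape
    rows = []
    for rad in radii_pix:
        mask = annulus_mask(ny, nx, rad)
        kern = np.fft.fft2(mask / mask.sum())
        # annulus average of the upper map around every pixel (periodic wrap)
        ann = np.real(np.fft.ifft2(np.fft.fft2(v_up, axes=(1, 2)) * kern, axes=(1, 2)))
        # temporal cross-correlation via FFT, then average over all pixel pairs
        a = np.fft.fft(v_low, axis=0)
        b = np.fft.fft(ann, axis=0)
        cc = np.real(np.fft.ifft(a * np.conj(b), axis=0)).mean(axis=(1, 2))
        rows.append(np.fft.fftshift(cc))
    return np.array(rows)
```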
We use a numerical 2D box set in the x-z plane extending about 25 Mm wide and a few Mm in height to propagate perturbations due to AGW packets. We prescribe the waves with positive random values for the frequency and horizontal wavenumber drawn from the distribution function of the Fourier filter that was used for our real data. We compute the (negative, downward) vertical wavenumber for each (k_ h,ν) pair by solving the non-adiabatic equations. Since we extract the wavefield at two heights that are separated by 250 km, which is approximately a scale height or slightly larger than a scale height, we use an atmospheric model that does not vary in height (no stratification). We tested a stratified atmosphere model in the simulation too but found no significant difference in the results between these two layers. The values for the acoustic cutoff frequency, photospheric sound speed, gravity, buoyancy frequency, and adiabatic exponent are fixed to the aforementioned values while we vary the radiative damping timescale for different runs. The wave packets are given theoretical group velocities v_ g = ∂ω/∂k and phase velocities v_ p = ωk/k^2 computed from the atmospheric parameters <cit.> and the wavenumber and frequency content of the considered waves. The resulting wavefield, sampled every 60 seconds, is a linear combination of about 100 AGWs injected into the bottom left of the domain. To compare with our IBIS observations, we follow the same procedure and cross-correlate the wavefield at the leftmost point at the lower height with the wavefield at each successive horizontal distance at the upper height. Phase differences are also computed as a function of horizontal travel distance. Since there is no added noise in the simulation, there is no need for any additional averaging. The results are shown in Fig. <ref>, which can be roughly compared to the corresponding observations in Fig. <ref>. One can observe faint evidence of all the individual waves that make up the wave packet in the simulation, which show up as straight lines at different slopes. The range of group velocities is indicated in the figure by the dashed cyan lines. However, this feature is not seen in the IBIS results, which may imply some filtering mechanism present in the Sun, since the largest group speeds are for the waves with the highest frequencies. Indeed, we discover that the maximum of the cross-correlation in the observational time-distance diagram clearly corresponds to the horizontal group speed of the wave packets, as expected. The simulated AGWs have group speeds ranging from about 2 km s^-1 to about 5 km s^-1, with only a few of those with the largest frequencies attaining the highest speeds that are seen in the IBIS data. The group speed increases with frequency and is only very weakly dependent on the atmospheric values used, suggesting that the signal in Fig. <ref> is dominated by the AGWs in the filter with the higher-frequency content. The low-frequency waves seem to be damped out of the cross-correlation. Finally, because we only inject waves at one location and do not consider downward propagating waves, we do not see the negative time branch signal in Fig. <ref> that is seen in Fig. <ref>. The simulated phase differences have some similarities with the IBIS observations. At zero horizontal separation, we find phase differences of comparable magnitude to the data in the frequencies of interest. This agreement is only present when the radiative damping time used in the model is set to around 60 seconds. 
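A minimal version of this synthetic wavefield is sketched below. It reuses kz2_nonadiabatic from the earlier sketch with τ = 60 s, replaces the filter's distribution function with a Gaussian of similar centre and width, treats the injected waves as plane waves filling the box, and ignores the imaginary (damping) part of k_z; all of these are simplifying assumptions rather than the actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
nwave = 100
# (nu, k_x) drawn around the filter centre; a Gaussian stands in for the
# actual filter distribution function used to prescribe the waves.
nus = np.abs(rng.normal(1.17e-3, 0.4e-3, nwave))     # Hz
kxs = np.abs(rng.normal(2.28e-6, 0.7e-6, nwave))     # m^-1
phs = rng.uniform(0.0, 2.0 * np.pi, nwave)

x = np.linspace(0.0, 25e6, 512)                      # ~25 Mm wide box
t = np.arange(0.0, 3.0 * 3600.0, 60.0)               # sampled every 60 s
dz = 250e3                                           # separation of the two layers

def wavefield(z_offset, tau=60.0):
    """Linear superposition of the prescribed AGWs at a given height offset."""
    field = np.zeros((t.size, x.size))
    for nu, kx, ph in zip(nus, kxs, phs):
        w = 2.0 * np.pi * nu
        # kz2_nonadiabatic is defined in the earlier dispersion-relation sketch
        kz = -np.sqrt(kz2_nonadiabatic(nu, kx, tau))  # downward-phase AGW root
        field += np.cos(kx * x[None, :] - w * t[:, None] + kz * z_offset + ph)
    return field

v_low, v_up = wavefield(0.0), wavefield(dz)
# Cross-correlating v_low[:, 0] with v_up[:, i] at each horizontal index i,
# and taking the phase of the corresponding cross-spectrum, mimics the
# time-distance and phase-difference construction applied to the IBIS data.
```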
In the adiabatic case, which corresponds to very large damping times, we find much larger phase differences that do not agree with the IBIS observations. These results make sense, since for the waves in the filter ωτ ≤ 1 and damping plays a significant role. This indicates the importance of studying and including damping effects when analyzing the propagation of AGWs. Radiative damping will reduce the vertical wavenumber and the vertical group velocity of the wave packets, and the phase differences are then propagated between heights more slowly. The sign changes at increasing distances are also present within the model. However, we find that the phase differences wrap at values of π instead of the smaller extrema seen in Fig. <ref>. Intuitively, the simulation results make sense as the phase differences should increase to their extrema as the waves propagate throughout the atmosphere. The cutoff of around ±35° in our data is perplexing and indicates some missing physics in the model. The damping time is fixed for both heights instead of increasing with height as is expected <cit.>; however, that would not affect the results seen here. We also cannot replicate the rapid diminishing of the phase differences with distance present at low frequencies.

§ CONCLUSIONS

Phase analysis of our multi-height space- and ground-based data from IBIS and SDO provided a window into the complex dynamics present in this observed quiet Sun disk center region, where we find propagating AGWs carrying energy upwards throughout the lower solar atmosphere. Using Fourier spectral analysis to construct phase difference and magnitude-squared coherence spectra in addition to local helioseismology techniques, we investigated both the vertical and horizontal properties of AGWs. The 2.75-hour-long high-resolution, multi-wavelength IBIS time series provided sufficient frequency resolution to resolve these long-period oscillations (roughly 4 to 16 min), while the addition of SDO data provided supplemental height information. To understand the future diagnostic potential of AGWs and what information could be retrieved from a phase difference spectrum, we need to observationally understand their wave behavior in both Doppler velocity and intensity diagnostics. We find (1) propagating AGWs for all spectral diagnostic pairs and different height separations comparable with previous observations, theories, and simulations; (2) the distribution and magnitude of phase differences for AGWs vary depending on the sampled diagnostic; (3) the horizontal propagation properties of AGWs can be inferred from time-distance and phase difference diagrams; and (4) multi-height observations highlight the complexity of the solar atmosphere. For all height combinations, we detect significant negative phase differences in the low temporal frequency domain associated with propagating AGWs carrying energy upward. As previously detected, the signature of propagating AGWs is visible up to the lower chromosphere sampled by AIA. On average, the observed negative phase differences at these spatial and temporal scales are consistent with theory, simulations, and prior observations. However, we find that the behavior seen observationally disagrees with that in Souffrin's acoustic-gravity wave theory for AGWs.
Even though observationally these waves appear to have enough momentum to overcome radiative damping in the lower solar atmosphere, our work shows that we need to better understand the role radiative damping plays in the behavior of AGWs and how we might understand it through intensity diagnostics. Our study also highlights the varied phase difference distributions and magnitudes present within the AGW domain for various V - V and I - I diagnostic combinations that have not been observed previously. In general, we find larger negative phase differences present between the I - I combinations albeit with smaller coherence than their velocity counterparts. Additionally, both the intensity and velocity diagnostics show similar overall wave behavior for AGWs: negative phase differences indicating upwards propagation. Distinctive phase difference distributions are visible in the V - V combinations of Fe1 5434 - K1 7699 and K1 7699 - Fe1 7090 and the I - I combinations between AIA - K1 7699. We also note significant positive phase differences at temporal frequencies and horizontal wavenumbers typically associated with AGWs in the V - V combination between K1 7699 - Fe1 7090, which might imply the reflection of propagating AGWs carrying energy upward back down to the lower photosphere. These line combinations also sample atmospheric regions where the behavior of AGWs are believed to be influenced by the magnetic field. While not shown in the paper, a quick analysis performed by masking pixels based on the median strength of the lower photospheric quiet Sun magnetic field defined using HMI's line of sight magnetogram showed that even for strong magnetic fields, we detect no significant changes to the overall propagation of AGWs in the velocity maps. These results are consistent with the wave behavior found in <cit.>'s weak field magnetic runs of 0 G and 10 G. We infer that at least near quiet Sun disk center, the lower photospheric magnetic field does not significantly affect the generation and propagation of AGWs. Additionally, our results enable us to comment on the sampled heights of the line core Doppler velocity and line minimum intensity fluctuations used in this study. Using a select frequency and horizontal wavenumber range in the propagating acoustic wave regime in Region C, we measured the separation in formation heights between diagnostics. We find that the Fe1 5434 line minimum intensity signal seems to sample higher atmospheric regions than the AIA passbands, and the K1 7699 line minimum intensity and line core velocity signals probe visibly different atmospheric heights. We remind the reader that our phase difference spectra mainly provide insight into the vertical perturbations induced by propagating AGWs. As AGWs tend to have a significantly larger horizontal component (thousands of km versus one to two hundred km of separation in the vertical direction for our spectral diagnostics), we are missing out on a detailed characterization of these modes by only studying their vertical motions. In order to analyze the strong horizontal signatures expected of AGWs, we compute time-distance and phase difference diagrams as a function of horizontal displacement, which in our case is the total travel distance for these waves. The time-distance diagram for the Fe1 5434 - Fe1 7090 velocity combination shows AGWs propagating with an approximate horizontal group speed of 4.5 km s^-1, which is in line with values reported in <cit.>. 
The corresponding phase difference plot shows frequency-dependent phase differences at various horizontal displacements. We can replicate the approximate vertical phase difference for this diagnostic pair at zero horizontal distance, which corresponds to purely vertical motions. However, our simple simulation of about 100 AGWs only partially explains this observed behavior, and reasonable agreement is only seen if the radiative damping time is 60 s. Our simulated time-distance diagram shows ranges of horizontal group velocities not seen in our observations. This indicates that the signal in our data is dominated by high-frequency AGWs and that the low-frequency AGWs are damped out of the picture. The complexity of the acoustic-gravity wave spectrum is clearly seen through our observations. In addition to the behavior of the AGWs, we find negative phase differences comparable to AGWs in Region B (strongly indicative of reflected evanescent waves), an out-of-phase relationship for Region B present in all of our I - V phase difference spectra, and significant phase differences present in Region F associated with the f-mode and prominent pseudo p-mode ridges in Region C in our intensity diagnostics. As demonstrated by this work, multi-line observations provide a wealth of diagnostic potential regarding oscillations and features present in the solar atmosphere, which can even have impacts on local helioseismology measurements <cit.>. This highlights the importance of including the measurement of multiple spectral lines spanning the atmosphere in upcoming synoptic networks, such as the next-generation GONG network <cit.>. We also look forward to future DKIST observations that will explore the dynamics of the solar atmosphere and potentially allow us to better understand the observed AGW behavior. Because IBIS was dismantled at the DST in 2019, there is a need for similar narrowband imaging spectroscopic instruments able to make resolved measurements of photospheric lines. In particular, once available, the Visible Tunable Filter <cit.> may fill this gap with its diffraction-limited imaging spectroscopy and spectropolarimetry. In this paper, we revisited the quiet Sun disk center to study the propagation of AGWs using both Doppler velocity and intensity diagnostics, validating past results with current ones. We hope that the inclusion of the intensity diagnostics alongside the velocity diagnostics motivates new 3D MHD simulations to better understand how to interpret the I - I and I - V phase difference spectra and the effects of radiative damping. A detailed observational characterization of the properties and propagation of AGWs requires a thorough investigation into their behavior in the presence of magnetic fields and into their horizontal propagation characteristics. Our main goal is to study AGWs at various viewing angles on the Sun for a greater understanding of their properties and behavior. In future papers, we plan to fill this gap in the knowledge of AGWs by studying their behavior when viewed obliquely near the solar limb, in order to investigate their horizontal properties, and in the vicinity of active regions, to explore the effect of the magnetic field strength and orientation. We will use the analysis presented in this paper as a reference to better understand how these different environments influence the behavior of AGWs.
Data in this publication were obtained with the Dunn Solar Telescope facility, which is operated by New Mexico State University with funding support from the National Science Foundation and the state of New Mexico. We would like to thank the DST staff, in particular Doug Gilliam, for all of their help in taking these observations. The HMI and AIA data used in this publication are courtesy of NASA's SDO and the HMI and AIA science teams. O.V. and J.J. acknowledge support from NASA under grant 80NSSC18K0672. O.V. is also supported under NSF grant 1936336.

Facilities: Dunn (IBIS), SDO (HMI, AIA)

Software: CMasher <cit.>

§ ADDITIONAL PHASE DIFFERENCE SPECTRA
A unified treatment of mean-field dynamo and angular-momentum transport in magnetorotational instability-driven turbulence

Tushar Mondal (tushar.mondal@icts.res.in) and Pallavi Bhat (pallavi.bhat@icts.res.in)
International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bangalore 560089, India
arXiv:2307.01281 [astro-ph.HE, astro-ph.SR, physics.plasm-ph]

Magnetorotational instability (MRI)-driven turbulence and dynamo phenomena are analyzed using direct statistical simulations. Our approach begins by developing a unified mean-field model that combines the traditionally decoupled problems of the large-scale dynamo and angular-momentum transport in accretion disks. The model consists of a hierarchical set of equations, capturing up to the second-order cumulants, while a statistical closure approximation is employed to model the three-point correlators. We highlight the web of interactions that connect different components of stress tensors—Maxwell, Reynolds, and Faraday—through shear, rotation, correlators associated with mean fields, and nonlinear terms. We determine the dominant interactions crucial for the development and sustenance of MRI turbulence. Our general mean field model for the MRI-driven system allows for a self-consistent construction of the electromotive force, inclusive of inhomogeneities and anisotropies. Within the realm of large-scale magnetic field dynamo, we identify two key mechanisms—the rotation-shear-current effect and the rotation-shear-vorticity effect—that are responsible for generating the radial and vertical magnetic fields, respectively. We provide the explicit (nonperturbative) form of the transport coefficients associated with each of these dynamo effects. Notably, both of these mechanisms rely on the intrinsic presence of large-scale vorticity dynamo within MRI turbulence.

§ INTRODUCTION

The origin of angular momentum transport is a central problem in accretion disk theory. It is now widely accepted that magnetorotational instability (MRI) <cit.> is responsible for generating turbulent motions and facilitating the outward transport of angular momentum in accretion disks. For MRI to manifest, the disk must possess sufficient ionization levels to allow for effective coupling with magnetic field lines. In its original form, it appears as a linear instability in differentially rotating flows threaded by vertical magnetic fields. However, a purely toroidal field is also capable of initiating an instability <cit.>. The MRI continues to operate in the nonlinear regime, and eventually leads to a fully nonlinear, turbulent state <cit.>. In general, MRI-driven magnetohydrodynamic (MHD) turbulence requires sufficiently coherent magnetic fields for sustenance <cit.>. For a given initial magnetic field configuration, the MRI can be initiated locally, but it has the opportunity to dissipate the large-scale fields via the generated turbulence, which then affects the ability of MRI to further sustain the turbulence. To perpetuate the turbulent motions, one needs to regenerate and sustain large-scale magnetic fields against dissipation through a dynamo mechanism <cit.>. Since the discovery of MRI, numerous direct numerical simulations (DNS), both local <cit.> and global <cit.>, have confirmed the sustenance of MRI turbulence along with the coexistence of large-scale magnetic fields.
These studies have demonstrated that turbulent angular momentum transport is primarily driven by the correlated magnetic fluctuations (Maxwell stress) rather than their kinetic counterpart (Reynolds stress). Various physically motivated models have been developed to describe the mechanism of angular momentum transport in accretion disks. Notably, Kato and Yoshizawa <cit.> and Ogilvie <cit.> derived a set of closed dynamical equations describing the evolution of the mean Reynolds and Maxwell stress tensors in a rotating shear flow, assuming the absence of any mean magnetic fields. For the MRI to be operative, Pessah, Chan and Psaltis <cit.> developed a local model for the dynamical evolution of the Reynolds and Maxwell tensors in a differentially rotating flow, threaded by a mean vertical magnetic field. All of these models successfully capture the initial exponential growth and subsequent saturation of the Reynolds and Maxwell stresses. However, an important limitation of these models is the assumption that the Faraday tensor, denoted as F̅_ij≡⟨ u_i b_j ⟩, vanishes, thereby resulting in the absence of mean magnetic field generation. Here, u and b represent the fluctuating components of velocity and magnetic field, respectively. While these studies represent valuable foundations for understanding the MRI mechanism, particularly the angular momentum transport phenomena, they fall short of capturing the practical aspects of MRI-driven turbulence, primarily due to the neglect of large-scale dynamos. In principle, large-scale magnetic fields are expected to emerge through the stretching and twisting of field lines by small-scale turbulence. A commonly adopted framework to study large-scale dynamos is mean-field electrodynamics <cit.>. Within this framework, the evolution of mean magnetic fields is described in terms of transport coefficients derived from statistically averaged properties of small-scale velocity and magnetic fields. A prominent mechanism responsible for the amplification of large-scale magnetic fields is known as the α-effect <cit.>, where the small-scale turbulence generates an electromotive force (EMF, represented by ℰ̅ ) that is directly proportional to large-scale magnetic fields, ℰ̅_i = α_ijB̅_j. For the α-effect to operate effectively, the turbulence must break statistical symmetry in some way, either through the presence of a net helicity or through stratification and rotation. An important part of the dynamo mechanism is the Ω-effect, which arises from the presence of large-scale velocity shear commonly found in astrophysical systems under the influence of gravitational forces. In the Ω-effect, shear stretches the mean magnetic fields, facilitating their amplification and evolution. Specifically, the Ω-effect converts a radial field component into a toroidal one. However, one of the most challenging aspects of mean-field theory is to close the dynamo cycle through the sustained generation of poloidal fields (both radial and vertical components) by mechanisms that require a detailed understanding. In this context, the traditional α-effect has demonstrated great success in flows that lack reflectional symmetry, and its existence has been well established through numerical simulations of helically forced turbulence. While stratified shearing-box simulations of MRI-driven turbulence also show some support for an α-Ω dynamo, it is possible that a different mechanism is more fundamental to the evolution of large-scale magnetic fields in accretion disks <cit.>. 
In fact, studies conducted on unstratified, zero-net flux simulations of MRI turbulence have revealed the presence of large-scale dynamo action in the absence of an α-effect <cit.>. Moreover, in different contexts, it has been demonstrated that the combined effects of shear and turbulent rotating convection can give rise to large-scale dynamo action, where the driving mechanism is different from the classical α-effect <cit.>. In the context of the shear dynamo, Yousef et al. <cit.> showed that forced small-scale nonhelical turbulence in non-rotating linear shear flows can lead to the exponential growth of large-scale magnetic fields. These findings highlight the importance of considering alternative dynamo mechanisms beyond the traditional α-effect in systems characterized by shear and turbulence, providing insights into the diverse range of processes contributing to the generation and amplification of large-scale magnetic fields. The underlying process of dynamo generation is multifaceted, as several potential mechanisms have been proposed to explain the generation of large-scale fields without a net α-effect. One such possibility is the `stochastic α-effect' in turbulent flows, where the mean α coefficients are zero. In this approach, sufficiently strong fluctuations of α in interaction with shear can lead to the growth of mean magnetic fields <cit.>. Another explanation lies in the `shear-current effect' <cit.> and the `magnetic shear-current effect' <cit.>, emerging from the off-diagonal turbulent resistivity in the presence of large-scale velocity shear. In the case of the magnetic shear-current effect, magnetic fluctuations arising from small-scale dynamo action can generate large-scale magnetic fields. A third possibility is the `cross-helicity effect' <cit.>. This mechanism involves augmenting the induction equation for the mean magnetic field with an inhomogeneous term proportional to the product of cross-helicity and mean vorticity. The interplay of cross-helicity and vorticity provides an additional avenue for the generation and evolution of large-scale magnetic fields. It has to be noted that stochastic α-effect and the original shear-current effect are kinematic in nature and the MRI-driven dynamo is expected to be intrinsically nonlinear. But also in all of the above mentioned mechanisms, the role of rotation has not been considered actively. In differentially rotating turbulent flows, an EMF proportional to Ω× (∇× B) can drive dynamo action, referred to as the Ω× J or Rädler-effect <cit.>. In theoretical investigations of the MRI-driven system, it was found that turbulence is not required for large-scale dynamo action <cit.> and the non-normality in the system allows for self-coupling of non-axisymmetric modes <cit.> leading to a nontrivial electromotive force on a quasi-linear analysis <cit.>. More recently, a calculation of the triple correlation term in the small scale magnetic helicity equation has indicated the possibility of helicity fluxes leading to localization of helicity in space (rather than spectrally) and large-scale vorticity features prominently in the arising new helicity flux <cit.>. Thus far, the mean-field dynamo and angular momentum transport problems have been approached independently in a decoupled manner. The mean-field dynamo theory has traditionally disregarded the transport dynamics, while angular momentum transport theory has overlooked the evolution of large-scale magnetic fields <cit.>. 
However, the large-scale behavior of velocity and magnetic fields depends on the interaction at smaller scales. Moreover, it is crucial to account for the back reaction of the large-scale dynamics on the small-scale environments. This inherent complexity renders both problems highly nonlinear and poses a substantial challenge in formulating a comprehensive coupled theory for MRI-driven MHD turbulence. Another route to investigating the dynamo problem in the MRI-driven system (in both stratified and unstratified domains) is via the measurement of turbulent transport coefficients in mean-field theory <cit.>. However, many of these studies use mean-field models based on homogeneous and isotropic small-scale turbulence, which is not justified for an MRI-driven system. Others, which use a more general mean-field model, employ inversion methods that are either unsuitable for nonlinear systems, set some of the coefficients to zero (which is somewhat questionable), or have to contend with the complexity of correlated noise, nonlocality, degeneracies, overconstraining, etc. By using direct statistical simulations with a general model, we have been able to overcome most of these issues and to directly determine the terms and transport coefficients (and their exact expressions) central to MRI dynamo action. In this paper, we construct a unified mean-field model that combines dynamo and transport phenomena self-consistently, and perform direct statistical simulations (DSS) in a zero net-flux unstratified shearing box. Our methodology begins by developing a mean-field model that consists of a hierarchical set of equations, capturing up to the second-order cumulants. To close these hierarchical equations, we express the third-order cumulants in terms of second-order cumulants using the CE2.5 statistical closure model, which lies between the second-order (CE2) and third-order cumulant expansion (CE3) methods <cit.>. We also apply the two-scale approach <cit.> to model second-order correlators involving the spatial gradient of a fluctuating field. The general nature of our model allows us to capture the effects of inhomogeneities and anisotropies in the system. Further, it has been useful for directly determining the dominant terms and effects responsible for dynamo and transport in tandem. This path allows us to explore the possibilities of developing sub-grid models which can be used in global simulations and possibly also in general relativistic MHD simulations of accretion disks aimed towards understanding data from the Event Horizon Telescope <cit.>. In order to numerically solve this model consisting of coupled equations for the mean field and stress tensors, we develop a special module within the Pencil-Code <cit.> framework. Our unified mean-field model addresses two key challenges: (a) disentangling the diverse physical processes involved in sustaining MRI turbulence and dynamo activity in accretion disks, and (b) identifying the dominant mechanisms responsible for generating large-scale magnetic fields. We present a comprehensive framework that elucidates the intricate network of interactions connecting different mean fields and stress components through shear, rotation, correlators associated with mean fields, and nonlinear terms. By studying the induction equation, we investigate the role of different EMFs in the evolution of large-scale magnetic fields. Our DSS results demonstrate good agreement with those obtained from DNS <cit.>.
Specifically, the radial EMF has a resistive effect, reducing the energy of the azimuthal field. The azimuthal EMF, ℰ̅_y, generates a radial field, which, in turn, drives the azimuthal field through the Ω-effect. Notably, ℰ̅_y is also responsible for generating the vertical magnetic field. Next, with our model, we construct the EMF for an MRI driven system. We find that the constructed EMF is a linear combination of not only the usual terms proportional to mean magnetic fields and the gradient of mean magnetic fields, but also the gradient of mean velocity fields, and a nonlinear term. Thus, our EMF is not just an ansatz but an expression that naturally arises out of our model. The proportionality coefficients depend on shear, rotation, and statistical correlators associated with fluctuating fields. In our search for large-scale magnetic field dynamo, we identify two crucial mechanisms—the “rotation-shear-current effect” and the “rotation-shear-vorticity effect”—that are responsible for generating the radial and vertical magnetic fields, respectively. Remarkably, both of these mechanisms rely on the presence of large-scale velocity dynamo in the self-sustaining MRI-driven turbulence. The paper is organized as follows. In Section <ref> and its subsections, we present our unified mean-field model for MRI-driven turbulence and dynamo. Section <ref> discusses the requirements of a high-order closure model and provides a statistical closure model to facilitate our analysis. The numerical simulation set-up is described in Section <ref>. In Section <ref> and its subsections, we present the results obtained from the simulations. Section <ref> focuses on phenomena associated with the outward transport of angular momentum and the interplay of various Maxwell and Reynolds stresses. The role of large-scale magnetic fields in turbulent transport is also highlighted. Section <ref> addresses the large-scale dynamo associated with magnetic fields. Different planar-averaged large-scale fields are distinguished, and the role of various terms in generating mean magnetic fields is analyzed using planar-averaged induction equations. The construction of the EMF for an MRI-driven system is discussed in Section <ref>, followed by a detailed analysis of the dynamo mechanisms responsible for generating radial and vertical large-scale magnetic fields in Sections <ref> and <ref>, respectively. In Section <ref>, we discuss our newly discovered dynamo mechanisms, namely the rotation-shear-current effect and the rotation-shear-vorticity effect, and provide a comprehensive discussion comparing them with existing dynamo mechanisms. Finally, the paper concludes with a summary in Section <ref>. § MODEL We adopt the local shearing box model to investigate MRI turbulence and dynamo in a three-dimensional, zero net-flux configuration, employing novel DSS methods. In order to simplify the analysis, we assume an isothermal, unstratified, and weakly compressible fluid. The DSS method has proven to be a valuable computational technique and has shown promising results. However, applying the DSS method to study self-sustaining MRI-driven turbulence presents a significant challenge compared to forced turbulence, primarily due to the complexity of turbulent flows. The presence of an unlimited number of statistical properties that cannot be directly calculated from first principles complicates the analysis. Furthermore, a closure model is necessary to handle the high-order nonlinear terms of the statistically-averaged equations. 
Our investigation begins with a statistical averaging approach applied to the standard MHD equations, which describe the mean flow and flow statistics in an accretion disk. First, we write down the standard MHD equations in a shearing background in the rotating frame, as given by 𝒟 A/𝒟 t = -SA_y x̂ + U× B - ημ_0 J , 𝒟 U/𝒟 t = - ( U ·∇ ) U -SU_x ŷ - 1/ρ∇ P + 1/ρ J × B - 2 Ω× U + 1/ρ∇· 2νρ𝐒 , 𝒟lnρ/𝒟 t = - ( U·∇) lnρ - ∇· U . Here, 𝒟/𝒟t ≡∂ / ∂ t - qΩ x ∂ / ∂ y includes the advective transport by a uniform shear flow, U^0 = -qΩ x ŷ. Ω= Ωẑ is the background rotational velocity. The constant q ≡ - dlnΩ / dln R; for a Keplerian disk q=3/2. The magnetic field B is related to the magnetic vector potential A by B = ∇× A, and J = ∇× B/ μ_0 is the current density, where μ_0 is the vacuum permeability. The other quantities have their usual meanings: U is the velocity, P the pressure, ρ the density, η the magnetic diffusivity, ν the microscopic viscosity, and 𝐒_ij=1/2( U_i,j+ U_j,i - 2/3δ_ij∇· U ) the rate of strain tensor. We use an isothermal equation of state P = ρ c_s^2, characterized by a constant sound speed, c_s. In the conventional mean-field theory, one solves the Reynolds averaged equations. We thus consider a Reynolds decomposition of the dynamical flow variables, expressing them as the sum of a mean component (denoted by over-bars) and a fluctuating component (represented by small letters): U = U̅ + u , A = A̅ + a, and so on. It satisfies the Reynolds averaging rules, i.e., a̅ = 0, A̅̅̅ = A̅. Here, we consider the ensemble averaging to derive the cumulant equations. For weakly compressible fluids, where the density remains approximately constant, the mean-field equations in ensemble averaging can be written as: 𝒟_t A̅_i = S̅_i^A + ϵ_ijkU̅_j B̅_k + ℰ̅_i - ηJ̅_i , 𝒟_t U̅_i = - U̅_j∂_j U̅_i + S̅_i^U - 2ϵ_ijkΩ_jU̅_k -1/ρ∂_i P̅ + 1/ρϵ_ijkJ̅_j B̅_k + 1/ρ∂_j(M̅_ij-R̅_ij) - 1/2ρ∂_i M̅ + ν∂_jjU̅_i , 𝒟lnρ/𝒟 t = - (U̅·∇) lnρ - ∇·U̅ . Here, S̅^A = (-S A̅_y,0,0), and S̅^U = (0,-S U̅_x, 0). ℰ̅_i = <u× b >_i = ϵ_ijkF̅_jk is the mean electromotive force. M_ij=b_i b_j / μ_0, R_ij=ρ u_i u_j, and F_ij=u_i b_j are the Maxwell, Reynolds, and Faraday tensors, respectively. The effect of turbulence on the mean-field evolution is captured through the mean stress tensors M̅_ij, R̅_ij, and F̅_ij. We require knowledge of the evolution of such stress tensors to close the mean-field equations (<ref>) and (<ref>). By subtracting the ensemble-averaged equation from the total equation, we derive the evolution equations for the fluctuating velocity and magnetic fields: 𝒟_t u_i = -U̅_j∂_j u_i -u_j∂_j U̅_i - 2ϵ_ijkΩ_j u_k + 1/μ_0 ρ(B̅_j∂_j b_i + b_j∂_jB̅_i) - 1/ρ∂_i Π' + 1/ρ∂_j(M_ij-R_ij-M̅_ij+R̅_ij) + ν∂_jj u_i , 𝒟_t b_i = B̅_j∂_j u_i + b_j∂_jU̅_i -U̅_j∂_j b_i - u_j∂_jB̅_i + ∂_j (F_ij-F_ji-F̅_ij+F̅_ji) +η∂_jj b_i . We combine different fluctuating equations and apply Reynolds average rule to construct the governing equations for the M̅_ij, R̅_ij, and F̅_ij. 
The resulting equations for the mean Maxwell, Reynolds, and Faraday tensors are, respectively, 𝒟_t M̅_ij + U̅_k∂_k M̅_ij - M̅_ik∂_kU̅_j - M̅_jk∂_kU̅_i + S̅_ij^M + 1/μ_0(F̅_ki∂_k B̅_j + F̅_kj∂_k B̅_i ) = 1/μ_0[ B̅_k ⟨ b_i∂_k u_j+b_j∂_k u_i ⟩ + 𝒯̅^M_ij + η⟨ b_i∂_kkb_j+b_j∂_kkb_i⟩] , 𝒟_t R̅_ij + U̅_k∂_k R̅_ij + R̅_ik∂_kU̅_j + R̅_jk∂_kU̅_i + S̅_ij^R + 2ϵ_jklΩ_kR̅_il + 2ϵ_iklΩ_kR̅_jl - 1/μ_0(F̅_ik∂_k B̅_j + F̅_jk∂_k B̅_i ) = - ⟨ u_i∂_j Π' + u_j∂_i Π' ⟩ +1/μ_0[ B̅_k ⟨ u_i∂_k b_j+u_j∂_k b_i ⟩] + 𝒯̅^R_ij + ρν⟨ u_i∂_kku_j+u_j∂_kku_i⟩ , 𝒟_t F̅_ij + U̅_k∂_k F̅_ij - F̅_ik∂_kU̅_j + F̅_kj∂_kU̅_i + 2ϵ_iklΩ_kF̅_lj + S̅_ij^F - 1/ρ( M̅_jk∂_kB̅_i-R̅_ik∂_kB̅_j) = -1/ρ⟨ b_j∂_iΠ'⟩ + B̅_k ⟨ u_i∂_k u_j ⟩ + B̅_k/μ_0 ρ⟨ b_j∂_k b_i ⟩ + 𝒯̅^F_ij + η⟨ u_i∂_kkb_j ⟩ + ν⟨ b_j∂_kku_i⟩. The left-hand side of these equations describes the linear dynamics of the respective stress tensors. The terms S̅_ij^M, S̅_ij^R, and S̅_ij^F represent how the Maxwell, Reynolds, and Faraday tensors are `stretched' by the gradients of the background shear flow, U^0, respectively. They are expressed as S̅_ij^M = - M̅_ik∂_kU̅^0_j - M̅_jk∂_kU̅^0_i, S̅_ij^R = R̅_ik∂_kU̅^0_j + R̅_jk∂_kU̅^0_i, and S̅_ij^F = - F̅_ik∂_kU̅^0_j + F̅_kj∂_kU̅^0_i. Note that different stress tensors interact with the background velocity gradient in distinct ways. The terms 𝒯̅^M_ij, 𝒯̅^R_ij, and 𝒯̅^F_ij represent the nonlinear three-point terms that appear in the evolution equations for the Maxwell, Reynolds, and Faraday tensors, respectively. Mathematically, they are given by 𝒯̅^M_ij = ⟨ b_i b_k ∂_k u_j + b_j b_k ∂_k u_i - u_k ∂_k M_ij⟩ , 𝒯̅^R_ij = ⟨ u_i b_k ∂_k b_j + u_j b_k ∂_k b_i - u_k ∂_k R_ij⟩ , 𝒯̅^F_ij = ⟨ u_i b_k ∂_k u_j + b_j b_k ∂_k b_i - u_k ∂_k F_ij⟩ . The right-hand side of the stress equations (<ref>–<ref>) poses significant challenges due to the presence of four distinct types of terms. These terms include (a) the triple correlation of fluctuating quantities, (b) second-order correlations involving the spatial gradient of a fluctuating field, (c) pressure-strain correlators in the evaluation equations for R̅_ij and F̅_ij, and (d) terms associated with the microscopic diffusion process. It is essential to develop closure models for each of these challenging terms to make progress in our analysis. §.§ The Closure Model In the framework of a cumulant hierarchy, the expansion of the MHD equations (<ref>–<ref>) results in an infinite set of coupled partial differential equations. Due to the quadratic nonlinearities present in the standard MHD equations, the first-order cumulant equations for the coherent components (Eqs. <ref>–<ref>) involve terms that are second-order, such as the Maxwell, Reynolds, and Faraday tensors. Similarly, the second-order cumulant equations (Eqs. <ref>–<ref>) contain terms up to third order, and so on. Therefore, in order to make progress in the analysis, it is necessary to select an appropriate statistical closure that truncates the cumulant expansion at the lowest feasible order. Among the well-studied formalisms in DSS, the truncation of the cumulant hierarchy at second order (CE2) stands out as a simple yet effective approach <cit.>. In CE2, all statistics of order greater than two are zero. This truncation scheme selectively preserves the mean-eddy interactions in the eddy (or fluctuation) equations and the eddy-eddy interactions in the mean equations, while disregarding the eddy-eddy interactions in the eddy equations. Consequently, CE2 is considered to be weakly nonlinear or quasilinear. 
From a theoretical perspective, CE2 can be interpreted as the exact solution of a linear model driven by stochastic forces. This method has been successfully applied to study MRI turbulence and dynamo in the zero net-flux unstratified shearing box <cit.>. In this approach, the mean fields are assumed to depend solely on the vertical coordinate, thereby simplifying the system representation. The nonlinearity that is neglected in CE2 is approximated by incorporating white-in-time driving noise, allowing for the exploration of essential aspects of MRI turbulence and dynamo effects. The third-order cumulant expansion (CE3) includes the eddy-eddy interactions in the eddy equations. However, extending the analysis to third order and beyond presents technical challenges in deriving and solving the DSS system as it involves numerous interactions. To address this complexity, a simplified model called the CE2.5 approximation has been proposed as a practical alternative. The CE2.5 approximation makes several key assumptions to simplify the analysis. First, it sets all time derivatives for the third cumulants to zero, assuming that the third cumulant evolves more rapidly compared to the first and second cumulants. Second, the CE2.5 approximation neglects all terms in the equations for the third cumulant that involve the first-order cumulants. Finally, the fourth-order cumulants are replaced by an eddy-damping parameter or a diffusion process. In our statistical closure model for the three-point interactions, we employ an approach inspired by the CE2.5 approximation. The nonlinear three-point terms (Eqs. <ref>–<ref>) can be expressed as (see Appendix <ref>): 𝒯̅^M_ij = 1/L[ 2c_1 √(M̅)R̅_ij - 2c_2 √(M̅)M̅_ij - 2c_3 √(R̅)M̅_ij - c_4 √(F̅)( F̅_ij+F̅_ji)] , 𝒯̅^R_ij = 1/L[ 2c_2 √(M̅)M̅_ij - 2c_1 √(M̅)R̅_ij - 2c_5 √(R̅)R̅_ij - c_4 √(F̅)( F̅_ij+ F̅_ji) - c_6 √(R̅)(R̅_ij - 1/3R̅δ_ij) ] , 𝒯̅^F_ij = 1/L[ 2c_7 √(M̅)F̅_ji - 2c_8 √(M̅)F̅_ij - 2c_9 √(R̅)F̅_ij - c_10√(F̅)( M̅_ij+R̅_ij) - c_11/2√(F̅)(F̅_ij +F̅_ji - 2/3F̅δ_ij) ] , where, c_1, …, c_11 are positive dimensionless constants of the order of unity, and L represents a vertical characteristic length (such as the disk thickness or the height of the simulation box). Throughout our computations, we have set c_1 = ⋯ = c_11 =1. The quantities M̅ and R̅ denote the traces of the Maxwell and Reynolds tensors, respectively, while F̅ = (F̅_xx^2 + F̅_yy^2 + F̅_zz^2)^1/2. It is important to highlight that the last term in the 𝒯̅^R_ij equation (Eq. <ref>), involving the constant c_6, and the last term in the 𝒯̅^F_ij equation (Eq. <ref>), involving the constant c_11, correspond to the isotropization terms arising from the pressure-strain nonlinearity <cit.>. The pressure-strain correlation is a third-order quantity, appearing in eq:Rij_exacteq:Fij_exact, and necessitates a non-deductive closure. Henceforth, we incorporate them into the three-point correlators, 𝒯̅_ij. Furthermore, various other terms arising from the three-point correlators have significant implications. The terms c_3, c_5, and c_9 correspond to the turbulent dissipation of the Maxwell, Reynolds, and Faraday tensors, respectively. The terms c_1 and c_2 represent the interaction between the Maxwell and Reynolds stresses. In terms of energy transfer, the net rate of transfer from turbulent kinetic energy to magnetic energy can be expressed as L^-1√(M̅) (c_1 R̅ - c_2 M̅). The sign of this expression determines whether the kinetic or magnetic energy dominates the energy transfer process. 
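To make the structure of this closure concrete, the following minimal Python sketch (our illustration only, not the actual Pencil Code module) evaluates the three closure expressions at a single point for given mean tensors, with all constants c_1, …, c_11 set to unity as in our computations; the variable names are ours.

```python
import numpy as np

def ce25_three_point_terms(M, R, F, L=1.0, c=np.ones(12)):
    """CE2.5 closure for the nonlinear three-point terms.

    M, R, F : 3x3 arrays holding the mean Maxwell, Reynolds, and Faraday
    tensors at one point; L is the vertical characteristic length and
    c[1]..c[11] are the order-unity closure constants (c[0] is unused).
    """
    I = np.eye(3)
    trM, trR = np.trace(M), np.trace(R)           # Mbar and Rbar (traces)
    Fbar = np.sqrt(F[0, 0]**2 + F[1, 1]**2 + F[2, 2]**2)
    sqM, sqR, sqF = np.sqrt(trM), np.sqrt(trR), np.sqrt(Fbar)
    Fs = F + F.T                                   # F_ij + F_ji

    T_M = (2*c[1]*sqM*R - 2*c[2]*sqM*M - 2*c[3]*sqR*M - c[4]*sqF*Fs) / L
    T_R = (2*c[2]*sqM*M - 2*c[1]*sqM*R - 2*c[5]*sqR*R - c[4]*sqF*Fs
           - c[6]*sqR*(R - trR*I/3.0)) / L
    T_F = (2*c[7]*sqM*F.T - 2*c[8]*sqM*F - 2*c[9]*sqR*F
           - c[10]*sqF*(M + R)
           - 0.5*c[11]*sqF*(Fs - 2.0*Fbar*I/3.0)) / L
    return T_M, T_R, T_F
```

With all c_i = 1, the Maxwell–Reynolds exchange terms in this sketch reproduce the net kinetic-to-magnetic transfer rate L^-1√(M̅)(c_1 R̅ - c_2 M̅) quoted above.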
Similarly, the terms c_7 and c_8 describe the interaction between the Faraday tensor and its transpose. Finally, the Faraday tensor interacts with the Maxwell and Reynolds stresses via c_4 and c_10 terms. In addition to the three-point correlators, the right-hand side of the stress equations (Eqs. <ref>–<ref>) contains terms that are directly proportional to B̅. These proportionality coefficients correspond to second-order correlators involving the spatial gradient of a fluctuating field. To proceed with our analysis, it is necessary to close these terms. To accomplish this, we employ a two-scale approach <cit.> to determine the second-order correlators associated with the spatial gradient. These correlators can be expressed as (see Appendix <ref>): B̅_m ⟨ u_i ∂_m b_j ⟩ = - Tr(B̅) l^-1F̅_ij + 1/2 (B̅·∇) F̅_ij, B̅_m ⟨ u_j ∂_m b_i ⟩ = - Tr(B̅) l^-1F̅_ji + 1/2 (B̅·∇) F̅_ji, B̅_m ⟨ b_i ∂_m u_j ⟩ = Tr(B̅) l^-1F̅_ji + 1/2 (B̅·∇) F̅_ji, B̅_m ⟨ b_j ∂_m u_i ⟩ = Tr(B̅) l^-1F̅_ij + 1/2 (B̅·∇) F̅_ij, B̅_m ⟨ u_i ∂_m u_j ⟩ = - Tr(B̅) l^-1R̅_ij + 1/2 (B̅·∇) R̅_ij, B̅_m ⟨ b_j ∂_m b_i ⟩ = Tr(B̅) l^-1M̅_ij + 1/2 (B̅·∇) M̅_ij. Note that eq: ui_dm_ujeq: bj_dm_bi hold for i ≠ j. Here, we introduce the inverse of length scale as l^-1 = s ( Ω / √(B̅^2 / μ_0 ρ)), where s is a constant. In our computations, we have set s = 0.25, as physical solutions are obtained for s < 0.3. However, further studies are required to determine a unique value of s based on DNS results. Similarly, we utilize the two-scale approach to determine terms associated with the microscopic diffusion process present in the stress equations (Eqs. <ref>–<ref>). §.§ Simulation Set-up We have developed a special module within the framework of the Pencil Code <cit.>, which is a high-order (sixth order in space and third order in time) finite-difference code, to numerically solve the model described by Eqs. (<ref>)–(<ref>), and (<ref>)–(<ref>). This model consists of a set of 28 coupled partial differential equations, involving variables such as A̅_i, U̅_i, ρ, M̅_ij, R̅_ij, and F̅_ij. The numerical simulations are performed on a Cartesian grid with dimensions N_x × N_y × N_z, and a size of L_x, L_y, and L_z along the three Cartesian directions. For our simulations, we have employed an aspect ratio of (L_x: L_y: L_z) = (L: L: L), with a resolution of 256^3. The boundary conditions are periodic in the azimuthal (y) and vertical (z) directions, while being shearing-periodic in the radial (x) direction. In the code, all quantities are expressed in dimensionless units, where length is scaled by L, velocity by the isothermal sound speed c_s, density by the initial value ρ_0, magnetic field by (μ_0 ρ_0 c_s^2)^1/2, and so on. For convenience, we have set the reference values as L = ρ_0 = c_s = μ_0 = 1. The mean velocity field is initialized with Gaussian random noise, with an amplitude of 10^-4. Similarly, the initial conditions for the stress tensors, namely the Maxwell stress, Reynolds stress, and Faraday stress, are also set as Gaussian random noise with an amplitude of 10^-4, except for the diagonal components of the Maxwell and Reynolds stresses. To preserve the positive definiteness of M̅_ii and R̅_ii, we initialize these stress components with positive random noise of amplitude 10^-4. The set-up we have adopted is similar to the one used in Ref. <cit.>. 
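As an illustration of the set-up just described (a sketch in Python only, not the actual Pencil Code initialization), the initial fields could be constructed as follows; the reduced 64^3 grid and the variable names are our own choices.

```python
import numpy as np

def initialize_fields(n=64, amp=1e-4, seed=0):
    """Initial conditions as described above, in code units L = rho_0 = c_s = mu_0 = 1.

    Returns the mean velocity and the stress tensors on an n^3 grid
    (n = 64 here for illustration; the production runs use 256^3).
    """
    rng = np.random.default_rng(seed)

    Ubar = amp * rng.standard_normal((3, n, n, n))       # mean velocity: Gaussian noise
    Fbar = amp * rng.standard_normal((3, 3, n, n, n))    # Faraday tensor: Gaussian noise
    Mbar = amp * rng.standard_normal((3, 3, n, n, n))    # Maxwell tensor
    Rbar = amp * rng.standard_normal((3, 3, n, n, n))    # Reynolds tensor

    for i in range(3):
        # keep the diagonal (energy-like) components positive definite
        Mbar[i, i] = amp * rng.random((n, n, n))
        Rbar[i, i] = amp * rng.random((n, n, n))
    return Ubar, Mbar, Rbar, Fbar
```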
The initial magnetic field configuration is given by B̅ = B̅_0 sin (k_x x) ẑ, which can be written in terms of the vector potential as A̅ = A̅_0 cos (k_x x) ŷ, so that the magnitude of the magnetic field is related to the vector potential through |B̅_0| = k_x A̅_0, where k_x = 2π / L_x. We choose a rotation rate of Ω = 1 and A̅_0 = 0.005, resulting in k_max / k_1 = √(15/16) (Ω / U̅_A,0) / k_1 ≈ 5. Here, U̅_A,0 = B̅_0 / √(μ_0 ρ_0) represents the initial Alfven velocity, k_max corresponds to the wavenumber associated with the maximum growth rate predicted by linear MRI analysis, and k_1 = 2π / L is the wavenumber associated with the box size L. These choices ensure that the most unstable mode of the MRI, k_max, is well resolved by the numerical grid. Additionally, the initial conditions satisfy the condition for the onset of MRI, namely β > 1, where β = 2μ_0 P / B̅_0^2 is the ratio of thermal to magnetic pressure. In our case, β≃ 1014 for the maximum values of the initial magnetic field. With these parameters, the resulting steady-state turbulence driven by the MRI exhibits a characteristic root mean square velocity of U̅_rms∼ 0.1 c_s. Consequently, the Mach number remains of the order of 0.1, ensuring that compressibility effects are negligible. The fluid and magnetic Reynolds number are defined as Re≡U̅_rms L / ν and Rm≡U̅_rms L / η, respectively, where ν and η represent the microscopic viscosity and resistivity. In our study, we utilize values of ν = 3.2× 10^-4 and η = 8.0 × 10^-5, yielding a magnetic Reynolds number of Rm = 1250 and a magnetic Prandtl number of Pm≡Rm / Re = 4. § RESULTS We discuss the results from our fiducial statistical simulation of the local shearing box MRI system. We provide an exposition on the problems of turbulent transport and turbulent large-scale dynamo in different subsections below. In each subsection, we first provide the time evolution of the relevant quantities. Then we set out to investigate the sources and sinks involved in the evolution of the transport terms or the large-scale fields. We show how the terms in the statistical equations compare with each other allowing us to deduce the dominant effects. In this manner, we establish connections between the mean fields and the cumulants. In the fiducial simulation used to draw inferences from, the linear stage is up to t/T_orb∼ 5. The total length of the simulation is about t/T_orb∼ 150, which includes cyclic patterns in the evolution of the mean fields and cumulants. However, in this work, we do not address the cyclic behaviour as we focus on uncovering the main effects responsible for driving the MRI transport and dynamo. §.§ Turbulent Angular Momentum Transport We consider the problem of turbulent transport first to demonstrate that the results from our statistical simulations display the standard behaviors that agree with the theory or direct numerical simulations of MRI turbulence. In the latter part of this subsection, we present our findings related to the generation of the Reynolds and Maxwell stresses, previously unexplained. It is worth noting that existing local models that address the generation process of these stresses have either neglected mean magnetic fields <cit.> or considered only constant vertical magnetic fields <cit.>, thereby disregarding several significant interactions. Consider the evolution of volume-averaged components of the Maxwell (M̅_ij) and Reynolds (R̅_ij) tensors in the left and right panels of Fig. <ref>, respectively. 
The xy-components of the stress tensors are mainly responsible for the (radially) outward angular momentum transport. For the matter in accretion disks to accrete, i.e., to lose angular momentum, the sign of the mean total stress, W̅_xy = R̅_xy - M̅_xy, must be positive. This can be inferred straightforwardly from the radial component of the angular momentum flux, -∂_j (R̅_yj - M̅_yj), in eq:meanU with i=y and j=x. From fig:ts_Mij_Rij, we see that the components of the Maxwell and Reynolds stresses responsible for the outward angular momentum transport are always negative and positive, respectively, i.e., M̅_xy<0 and R̅_xy>0. This naturally leads to a net (radially) outward angular momentum flux mediated by a positive total mean stress, W̅_xy = R̅_xy - M̅_xy >0. Furthermore, the dominant contribution to the total stress arises from the correlated magnetic fluctuations, rather than from their kinetic counterpart, i.e., -M̅_xy > R̅_xy, as expected. Note that the vertically outward angular momentum transport through W̅_yz = R̅_yz - M̅_yz is smaller. In Fig. <ref>, we also highlight the turbulent energy densities along the three directions. The diagonal components (xx, yy, and zz) of the Maxwell and Reynolds stresses indicate twice the turbulent magnetic and kinetic energy densities, respectively. The total turbulent energy is (M̅ + R̅)/2, where M̅ = M̅_ii and R̅ = R̅_ii are the traces of the Maxwell and Reynolds tensors, respectively. As expected, the turbulent magnetic energy dominates over its kinetic counterpart. In the magnetic part of the total energy, the azimuthal component is the most significant, followed by the radial and vertical contributions, i.e., M̅_yy > M̅_xx > M̅_zz. In the kinetic part of the total energy, the radial component is the dominant one, followed by the azimuthal and vertical contributions, i.e., R̅_xx > R̅_yy > R̅_zz. All of these features are in agreement with existing local models <cit.>. Thus, we are reassured that our statistical simulations are reliable for investigations of turbulent transport. Next, we describe the generation mechanism of the different components of the stress tensors to understand the turbulent transport in more detail. Below, we map out the comprehensive web by which the stress components connect to each other through (i) shear, (ii) rotation, (iii) mean fields, (iv) other small-scale correlators, and/or (v) nonlinear three-point terms. In Fig. <ref>, we plot the volume-averaged terms that appear in the equations for the Maxwell (eq:Mij_exact) and Reynolds (eq:Rij_exact) stresses. In the next few paragraphs, we compare the amplitude and phase of the various terms in these equations to work out the chain of production leading to efficient turbulent transport. Readers interested only in the final summary can skip directly to the last paragraph in this subsection and/or to the summary schematic in fig:mri_transport. To understand the process of outward angular momentum transport, we examine the time evolution of the xx-, xy-, and yy-components of the stress tensors. This is because the xy-components of the stress tensors are directly connected to the xx- and yy-components via shear and/or rotation (more specifically, the Coriolis force appears in the Reynolds stress equations). Since M̅_xx is positive throughout the evolution, a positive term in ∂_t M̅_xx acts as a source, whereas a negative term behaves like a sink.
The same is true for all the stress tensors that are positive throughout their evolution. For negative M̅_xy, the roles of the different terms are opposite: a positive term in ∂_t M̅_xy acts as a sink, whereas a negative term behaves like a source. The upper panels of Fig. <ref> are for the Maxwell stress: (a) M̅_xx, (b) M̅_xy, and (c) M̅_yy. In the evolution of the latter two, M̅_xy and M̅_yy, the shear terms (solid blue line) act as the dominant source, whereas the nonlinear three-point terms (dashed red line) act as the sink. Here, the “stretching” of the positive stress component M̅_xx via the shear produces M̅_xy at a rate of -qΩ. This renders M̅_xy negative. Similarly, shear acts on M̅_xy to produce the positive stress component M̅_yy at a rate of -2qΩ. There is no shear term in the time evolution of M̅_xx. Thus, it is evident that the turbulent transport via -M̅_xy cannot work with Keplerian shear alone; the M̅_xx component is needed for the Keplerian shear to act on. The generation mechanism of M̅_xx is therefore critically important here. For M̅_xx, the dominant source term is B̅_k ⟨ b_x ∂_k u_x ⟩ (dash-dotted green line) with k=y (left panel of Fig. <ref>). The nonlinear three-point term (dashed red line) acts as the dominant sink. The other two subdominant terms, one proportional to ∂_k U̅_x (dotted olive line) and the other proportional to ∂_k B̅_x (dotted magenta line), behave like a source and a sink, respectively. The lower panels of Fig. <ref> correspond to the Reynolds stress: (d) R̅_xx, (e) R̅_xy, and (f) R̅_yy. For the Reynolds stress, shear acts similarly to the Maxwell-stress case, but with the opposite sign. In addition, the Coriolis force plays a significant role in the evolution of the Reynolds stresses. The “stretching” of the positive stress component R̅_xx produces R̅_xy via shear at a rate of qΩ. However, the Coriolis force makes the positive stresses, R̅_xx and R̅_yy, act oppositely in the evolution of R̅_xy with the same weighting factor of 2Ω. Since q<2, the combined effects of the shear and the Coriolis force make the term with R̅_xx (dash-dotted cyan line) behave as a sink in the evolution of R̅_xy. The nonlinear three-point term (dashed red line) acts as a sink here as well. Hence, the term with R̅_yy is the only source term (solid blue line) in the R̅_xy evolution, via the Coriolis force. This finding is illustrated in fig:terms_delt_Mij_Rij(e). In fig:terms_delt_Mij_Rij(d), for R̅_xx, the source term is 4ΩR̅_xy (solid blue line), arising through the Coriolis force, whereas the nonlinear three-point term (dashed red line) acts as a sink. Finally, in fig:terms_delt_Mij_Rij(f), we see that the shear acting on R̅_xy produces R̅_yy at a rate of 2qΩ. However, the term with R̅_xy, overall, acts as a sink in the evolution of R̅_yy: since q<2, the combined effects of the shear and the Coriolis force make the term with R̅_xy behave as a sink (solid blue line). Consequently, the question that arises is how R̅_yy is generated. We find that the dominant source terms in the R̅_yy evolution are the nonlinear three-point term (dashed red line) and the term associated with the gradient of the azimuthal magnetic field, 2F̅_yk∂_k B̅_y (dotted magenta line) with k=x (right panel of fig:terms_delt_Mxx_Ryy). The other two subdominant terms, one proportional to B̅_k (dash-dotted green line) and the other proportional to ∂_k U̅_y (dotted olive line), behave like a source and a sink, respectively. The overall findings associated with the turbulent transport are summarized schematically in Fig. <ref>.
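The linear part of this production chain can be written compactly. The sketch below is our own schematic: it keeps only the shear and Coriolis source terms identified above (with the quoted rates combined into (q-2)Ω, 2Ω, 4Ω, and 2(q-2)Ω factors) and drops every other contribution, including the nonlinear three-point sinks.

```python
def linear_production_rates(M, R, q=1.5, Omega=1.0):
    """Shear + Coriolis source terms of the stress equations (schematic only).

    M and R are dicts of volume-averaged components, e.g. M = {'xx': ..., 'xy': ...}
    and R = {'xx': ..., 'xy': ..., 'yy': ...}; the returned dicts hold the
    corresponding contributions to d(stress)/dt.
    """
    dM = {
        'xy': -q*Omega*M['xx'],          # shear stretches Mxx into Mxy (rate -q*Omega)
        'yy': -2*q*Omega*M['xy'],        # shear stretches Mxy into Myy (rate -2*q*Omega)
    }
    dR = {
        'xx': 4*Omega*R['xy'],                           # Coriolis twists Rxy into Rxx
        'xy': (q - 2)*Omega*R['xx'] + 2*Omega*R['yy'],   # Rxx term is a sink for q<2,
                                                         # Ryy (Coriolis) is the source
        'yy': 2*(q - 2)*Omega*R['xy'],                   # net sink for q<2
    }
    return dM, dR

# Example: with Mxx > 0 the shear drives Mxy < 0, which in turn drives Myy > 0.
dM, dR = linear_production_rates(M={'xx': 1.0, 'xy': -0.4},
                                 R={'xx': 0.5, 'xy': 0.2, 'yy': 0.3})
```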
We remind the reader that we are able to delineate this chain of production because of the structure of our model being used in this statistical simulation which helps in making the connections directly between mean fields and the cumulants. The stretching of M̅_xx via shear produces M̅_xy, whose stretching by shear further produces M̅_yy. The large-scale field (here, B̅_y) acts in conjunction with ⟨ b_x ∂_k u_x ⟩ to generate M̅_xx (this can be interpreted essentially as tangling of the mean magnetic field leading to the generation of small-scale fields). For the Reynolds stress, the Coriolis force is responsible for generating R̅_xx from R̅_xy, and R̅_xy from R̅_yy. The outcome of nonlinear interactions between M̅_yy and R̅_yy via the three-point term is the formation of R̅_yy from M̅_yy. The other dominant source term for R̅_yy is the term proportional to the radial gradient of the mean azimuthal magnetic field. Hence, turbulent transport is not possible without large-scale fields, i.e., the mean-field dynamo mechanism is necessary. §.§ Large-scale Dynamo We begin by presenting the overall evolution of the relevant quantities, namely the mean magnetic and velocity fields, both as volume averages (of the energy) and planar-averages. Then we compare the evolution of the different terms in the dynamical equations for the planar-averaged large-scale magnetic fields, to determine which components of the EMF are important for the MRI large-scale dynamo. Thereafter, we specify how we can recover a general expression for the y and z-components of the EMF from our model equations in Section <ref>. We find that a given component of the EMF is a linear combination of terms proportional to mean magnetic fields, the gradient of mean magnetic fields, the gradient of mean velocity fields, and a nonlinear term. With expressions for the EMFs in hand, we set out to investigate the contribution of the various terms to determine the dominant dynamo effects. To do so, we first examine volume-averages of the terms in time windows from both linear and nonlinear regimes, to get a global picture. Next, we examine the planar average of various terms to study the behaviour locally in space. In the latter analysis, we uncover a more sophisticated behaviour of the large-scale dynamo. But overall, we find both types of analysis lead to the same conclusions. The EMF analysis for radial large-scale field generation is in Section <ref> and for vertical large-scale field generation is in Section <ref>. §.§.§ Volume averaged large-scale or mean field energies Consider the evolution of volume-averaged large-scale magnetic and velocity fields. fig:ts_Brms shows the time evolution of the root-mean-square (rms) velocity (U̅_rms) and magnetic (B̅_rms) fields. We find that the MRI-driven turbulence hosts both the large-scale dynamo of velocity and magnetic fields. The amplitude of B̅_rms dominates over that of U̅_rms throughout the entire duration of our longest simulation run, spanning approximately 150 orbits. The zoomed-in view of the initial growth to the early saturation phase of the large-scale fields is also depicted in fig:ts_Brms. The initial growth phase of both fields is observed up to a time of t/T_orb∼ 5.5. Following this initial growth phase, the fields settle into a steady state, indicating the saturation regime. Next, we consider the volume averaged energy density associated with large-scale velocity and magnetic fields, 1/2⟨ρU̅^2 ⟩ and 1/2⟨B̅^2 ⟩, respectively. Fig. 
<ref> shows the time evolution of the large-scale kinetic (upper panel) and magnetic (lower panel) energy densities, multiplied by a factor of two. Fig. <ref> also shows the contributions from the three components of the fields. It is important to note that the large-scale magnetic energy dominates over the kinetic one, indicating that the MRI dynamo in accretion disks is characterized by super-equipartition of magnetic energy relative to kinetic energy. Most of the contribution to the large-scale magnetic energy arises from the toroidal mean magnetic field, B̅_y, while the radial and vertical components of the mean magnetic field are of similar magnitude. In the large-scale velocity field, all three components have similar magnitudes. Below, we explore the generation mechanisms of the different large-scale magnetic field components extensively. It may be interesting in future work to examine the generation process of U̅ (i.e., the vorticity dynamo) in more detail. §.§.§ Planar averaged large-scale or mean fields In order to understand the behaviour of the mean fields locally in space, we perform planar averages. We consider three different planar averages, x–y, y–z, and x–z averaging, to determine the mean magnetic fields B̅(z) or ⟨B⟩_(x,y), B̅(x) or ⟨B⟩_(y,z), and B̅(y) or ⟨B⟩_(x,z), respectively. In the x–y averaged case, the generated field components are B̅_x(z) and B̅_y(z), while B̅_z(z) vanishes to maintain the divergence-free condition. Similarly, B̅_x(x) in y–z averaging and B̅_y(y) in x–z averaging are zero. In fig:Brms_plane_ave, we present the time evolution of the root-mean-square planar-averaged fields. To obtain these quantities, we first square the planar-averaged fields, then average over the remaining third direction, and finally take the square root. The most prominent large-scale field observed is B̅_y (solid orange line), resulting from the x–y averaging. Notably, in the saturation regime, the rms value of B̅_y is around four times greater than that of B̅_x (solid blue line). The y–z averaging reveals a significant B̅_y as well, represented by the dashed green line. The y–z averaged B̅_z (dashed red line) at the beginning of the growth phase reflects the initial condition B̅_z = B_0 sin (k_x x). During the saturation stage, the y–z averaged fields show that |B̅_y| is approximately one order of magnitude stronger than |B̅_z|. Conversely, when applying the x–z averaging (represented by the dotted lines), the resulting mean fields B̅_x and B̅_z do not exhibit dynamo growth and instead decay to small values. The weakness of the x–z averaged fields renders the MRI-generated large-scale fields largely axisymmetric. We then shift our focus to the most-studied large-scale magnetic fields, B̅_x(z) and B̅_y(z), obtained from x–y averaging. fig:B_slice illustrates their temporal evolution along the abscissa and spatial variation along the ordinate, with the color scale representing the field strengths. We identify three distinct stages in this evolution: (a) the initial growth phase extending up to t/T_orb∼ 5.5, (b) the intermediate or initial saturation phase from t/T_orb∼ 5.5 to t/T_orb∼ 25, and (c) the fully nonlinear saturation phase after t/T_orb≳ 25. Notably, the appearance of short dynamo cycles during the intermediate phase indicates a quasi-linear nature of the dynamo. To understand the MRI dynamo mechanism, we require both x–y and y–z averaging (given the significant mean fields arising from both of these planar averages).
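The planar averages and the rms values plotted in fig:Brms_plane_ave can be formed as in the short sketch below (our own illustration; the axis ordering and names are assumptions): square the planar-averaged field, average over the remaining direction, and take the square root.

```python
import numpy as np

def planar_average(B, plane):
    """Planar average of a field with shape (3, nx, ny, nz).

    plane='xy' -> Bbar(z), plane='yz' -> Bbar(x), plane='xz' -> Bbar(y).
    """
    axes = {'xy': (1, 2), 'yz': (2, 3), 'xz': (1, 3)}[plane]
    return B.mean(axis=axes)

def rms_of_planar_average(B, plane):
    """rms of the planar-averaged field: square, average over the remaining
    direction, then take the square root (one value per vector component)."""
    Bbar = planar_average(B, plane)            # shape (3, n_remaining)
    return np.sqrt((Bbar**2).mean(axis=1))

# Example with random data standing in for a snapshot:
B = np.random.standard_normal((3, 32, 32, 32))
print(rms_of_planar_average(B, 'xy'))   # rms of Bbar_x(z), Bbar_y(z), Bbar_z(z)
```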
Here, we present the evolution equations for the mean fields in both kinds of averaging. The mean-field equations in x–y averaging are given by ∂_t B̅_x = - ∂_z ℰ̅_y - U̅_z∂_zB̅_x + B̅_z∂_zU̅_x , ∂_t B̅_y = - qΩB̅_x + ∂_z ℰ̅_x - U̅_z∂_zB̅_y + B̅_z∂_zU̅_y . The mean-field equations in y–z averaging are given by ∂_t B̅_y = - ∂_x ℰ̅_z - U̅_x∂_xB̅_y + B̅_x∂_xU̅_y , ∂_t B̅_z = ∂_x ℰ̅_y - U̅_x∂_xB̅_z + B̅_x∂_xU̅_z . Here, the main contributions arise from the shear term (-qΩB̅_x), the advection term (U̅·∇B̅), the stretching term (B̅·∇U̅), and the different components of the EMF: ℰ̅_x = (F̅_yz - F̅_zy), ℰ̅_y = (F̅_zx - F̅_xz), and ℰ̅_z = (F̅_xy - F̅_yx). The shear term appears only in the x–y averaged azimuthal field evolution equation. It does not operate in the y–z averaged azimuthal field equation because of the divergence-free magnetic field condition, i.e., ⟨ B_x ⟩_(y,z)≃ 0. To study the contribution of each term to the evolution of the mean magnetic fields, we multiply B̅_i on both sides of the ∂_t B̅_i equations. The resulting individual terms of the equations for B̅_i∂_t B̅_i are shown in Figs. <ref> and <ref>. Here, we use three different labels to describe the role of each term: a `source' term has positive contributions throughout, a `sink' term contributes negatively, and a `dual' term can be either positive or negative with time. In fig:Bi_delt_Bi, we examine the behaviour of the individual terms in eq:meanB_xy–eq:meanB_yz in time, for both x–y (top panels) and y–z averaging (bottom panels). These terms are evaluated at z=-0.15 (top panels) and at x=-0.15 (bottom panels), for x–y and y–z averaging, respectively. For all the cases, the corresponding advection and stretching terms are negligible. The x–y averaged mean field B̅_x(z) (top left panel of Fig. <ref>) arises entirely from the vertical variation of the azimuthal EMF, ℰ̅_y. On the other hand, the field B̅_y(z) (top right panel of Fig. <ref>) results from a combination of the shear term and the vertical variation of the radial EMF, ℰ̅_x. The shear term acts as a source (traditionally known as the Ω-effect), whereas the radial EMF has a sink effect. In the y–z averaged analysis, both B̅_y(x) and B̅_z(x) arise due to their respective EMF terms in the induction equation: B̅_y(x) arises from the radial variation of ℰ̅_z (bottom left panel of Fig. <ref>), whereas B̅_z(x) arises from the radial variation of ℰ̅_y (bottom right panel of Fig. <ref>). The sharp decay of B̅_z(x) at the beginning indicates the destruction of the initial field configuration B̅_z = B_0 sin (k_x x). Next, we examine the behaviour of the terms in eq:meanB_xy–eq:meanB_yz locally in space instead. In Figs. <ref> and <ref>, we show the individual terms in eq:meanB_xy–eq:meanB_yz, for both x–y (top panels) and y–z averaging (bottom panels), as functions of z and x, respectively. Fig. <ref> corresponds to the MRI growth phase, evaluated at t/T_orb∼ 5, and Fig. <ref> corresponds to the fully nonlinear saturation regime, evaluated at t/T_orb∼ 97. We use the same line style and color for the individual terms as in Fig. <ref>. The overall conclusions remain the same as discussed in the previous paragraph. The azimuthal EMF, ℰ̅_y, generates the field B̅_x(z) (top left panel of Fig. <ref>), which in turn drives B̅_y(z) (top right panel of Fig. <ref>) through the Ω-effect. The radial EMF, ℰ̅_x, has a sink effect, reducing the energy of B̅_y(z).
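For reference, the x–y averaged budget above can be evaluated term by term as in the following sketch (ours, using simple centered differences on a periodic z grid); multiplying each term by B̅_i then yields the source/sink curves discussed here.

```python
import numpy as np

def ddz(f, dz):
    """Centered derivative on a periodic z grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0*dz)

def xy_averaged_budget(Bx, By, Bz, Ux, Uy, Uz, Ex, Ey, dz, q=1.5, Omega=1.0):
    """Terms of the x-y averaged induction equations (1D arrays in z).

    Ex and Ey are the radial and azimuthal EMFs built from the Faraday tensor,
    Ex = Fbar_yz - Fbar_zy and Ey = Fbar_zx - Fbar_xz.
    """
    dBx_dt = {'emf':     -ddz(Ey, dz),
              'advect':  -Uz*ddz(Bx, dz),
              'stretch':  Bz*ddz(Ux, dz)}
    dBy_dt = {'shear':   -q*Omega*Bx,
              'emf':      ddz(Ex, dz),
              'advect':  -Uz*ddz(By, dz),
              'stretch':  Bz*ddz(Uy, dz)}
    # source/sink diagnostics: multiply each term by the corresponding mean field
    diag_x = {k: Bx*v for k, v in dBx_dt.items()}
    diag_y = {k: By*v for k, v in dBy_dt.items()}
    return diag_x, diag_y
```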
In the nonlinear regime, the radial EMF is seen to dominate over the shear term at the given instance in time, and hence the field B̅_y(z) decays at that instant (top right panel of Fig. <ref>). In y–z averaging, both B̅_y(x) (bottom left panel of Figs <ref> and <ref>) and B̅_z(x) (bottom right panel of Figs <ref> and <ref>) arise due to their respective EMF terms in the induction equation. The contribution from the advection and stretching terms involving only mean fields are negligible for all the cases. Thus, we find that the behaviour of all the terms (in Eqs. <ref> and <ref>) locally in time is consistent with that locally in space. In summary, the EMFs ℰ̅_y and ℰ̅_z play significant roles in dynamo, whereas ℰ̅_x acts like a sink on B̅_y(z). To find the solution to the MRI dynamo problem, we have to formulate the key components of the EMF: ℰ̅_y and ℰ̅_z. §.§ Construction of the Electromotive Force In traditional mean-field dynamo theory, the turbulent electromotive force (EMF) is commonly expressed as a linear combination of the mean magnetic field and its derivatives: ℰ̅_i = α_ijB̅_j + β_ijkB̅_j,k , where the tensor components α_ij and β_ijk are known as turbulent transport coefficients. However, this assumption of expansion solely with respect to the mean magnetic field may not be sufficient <cit.>, as the form of the EMF directly emerges from the assumption of U̅ = 0, disregarding the influence of mean velocity fields. In the context of MRI-driven turbulence, both the large-scale vorticity dynamo and the large-scale magnetic field dynamo are integral components of the overall turbulent behavior <cit.>. Therefore, in constructing the EMF, it is essential to account for the effects of mean velocity fields alongside the mean magnetic field. Another challenge in mean-field dynamo theory is determining the numerous unknown transport coefficients involved in the mean EMF. Extracting data from simulations, specifically B̅ and ℰ̅, allows for the estimation of these coefficients. However, measurement results often suffer from high levels of noise. To improve the signal and reduce the noise, certain coefficients are typically assumed to be negligible <cit.>. However, the appropriateness of such fitting assumptions has been a subject of debate <cit.>. To overcome both limitations, we propose a novel approach that constructs the key components of the EMF in a self-consistent manner without making any assumptions. We utilize the interaction terms arising from the Coriolis force and background shear in the evolution equations for the Faraday tensors to construct the EMF. More detailed information can be found in Appendix <ref>. Specifically, the azimuthal EMF, ℰ̅_y, can be expressed as ℰ̅_y = -1/q(2-q)Ω[ { - q 𝒟_t F̅_yz + (2-q) 𝒟_t F̅_zy} + { qF̅_yk +(2-q)F̅_ky}∂_k U̅_z - { qF̅_kz +(2-q)F̅_zk}∂_k U̅_y + 1/ρ{qM̅_zk +(2-q)R̅_zk}∂_k B̅_y - 1/ρ{qR̅_yk +(2-q)M̅_yk}∂_k B̅_z + B̅_k { q (⟨ u_y ∂_k u_z ⟩ + ⟨ b_z ∂_k b_y ⟩/μ_0 ρ) - (2-q) (⟨ u_z ∂_k u_y ⟩ + ⟨ b_y ∂_k b_z ⟩/μ_0 ρ) } + 𝒯̅_y] . It is worth noting that the EMF consists of terms that are proportional to: (a) mean magnetic fields, (b) gradient of mean magnetic fields, (c) gradient of mean velocity fields, and (d) nonlinear three-point terms (𝒯). The proportionality coefficients depend on factors such as rotation, shear rate, and the correlators associated with different fluctuating fields. Similarly, the vertical component of the EMF, ℰ̅_z, can be derived, and its mathematical expression is available in Appendix <ref>. 
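As a transcription check, ℰ̅_y can be assembled directly from the ingredients listed above. The following sketch (ours; all correlators, gradients, the advective time derivatives, and the nonlinear contribution 𝒯̅_y are assumed to be supplied as precomputed quantities) mirrors the expression term by term, with indices 0, 1, 2 standing for x, y, z.

```python
X, Y, Z = 0, 1, 2  # index convention

def emf_y(q, Omega, rho, mu0, F, M, R, dU, dB, Bbar,
          dtF_yz, dtF_zy, ugradu, bgradb, Ty):
    """Azimuthal EMF assembled term by term from its closed-form expression.

    F, M, R        : 3x3 Faraday, Maxwell, and Reynolds tensors
    dU[k][j]       : d Ubar_j / d x_k ;  dB[k][j] : d Bbar_j / d x_k
    Bbar[k]        : mean magnetic field components
    dtF_yz, dtF_zy : advective time derivatives of Fbar_yz and Fbar_zy
    ugradu[i][k][j] = <u_i d_k u_j>,  bgradb[i][k][j] = <b_i d_k b_j>
    Ty             : nonlinear three-point contribution
    """
    p = 2.0 - q
    total = -q*dtF_yz + p*dtF_zy
    for k in range(3):
        total += (q*F[Y][k] + p*F[k][Y]) * dU[k][Z]
        total -= (q*F[k][Z] + p*F[Z][k]) * dU[k][Y]
        total += (q*M[Z][k] + p*R[Z][k]) * dB[k][Y] / rho
        total -= (q*R[Y][k] + p*M[Y][k]) * dB[k][Z] / rho
        total += Bbar[k] * (q*(ugradu[Y][k][Z] + bgradb[Z][k][Y]/(mu0*rho))
                            - p*(ugradu[Z][k][Y] + bgradb[Y][k][Z]/(mu0*rho)))
    total += Ty
    return -total / (q*p*Omega)
```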
For a comprehensive understanding, we present the individual components of the EMF obtained from the volume-averaged analysis, as depicted in fig:ts_emf. It is evident that the EMF components ℰ̅_x and ℰ̅_y exhibit a cyclic pattern over time, with alternating positive and negative values. In contrast, the EMF component ℰ̅_z consistently remains negative throughout the entire duration of the analysis. §.§ Generation of Radial Magnetic Fields We have seen that the x–y averaged mean field B̅_x(z) is solely determined by the vertical variation of the azimuthal EMF, ℰ̅_y. The main challenge in mean-field dynamo theory is to identify the term responsible for generating B̅_x via ℰ̅_y. In fig:BxdEydz_xy_t, we present individual terms of ℰ̅_y (Eq. <ref>) in the MRI growth to the early saturation phase. We compute these terms at z=-0.15. To assess the contribution of each term in the evolution of B̅_x (Eq. <ref>), we multiply -B̅_x on both sides of the equation for ∂_z ℰ̅_y. Two crucial curves that aid in determining whether the magnetic field is growing or decaying with time are the EMF term, depicted as the dashed black line with star markers, and B̅_x ∂_t B̅_x, illustrated as the dash-dotted red line with tri-down markers. We observe that the field B̅_x(z), represented by the dotted blue line, undergoes amplification during the growth phase (with negative growth and in opposite phase to B̅_y(z), shown as the dotted orange line) in the range of t/T_orb≈ 5 to 5.6. Subsequently, it decays, leading to saturation. The primary driver for the growth of B̅_x is the term proportional to ∂_k B̅_y (depicted by the solid grey line with circle markers). In particular, the k=z component of this term plays a crucial role, as illustrated in fig:BxdEydz_xy_tb. During the growth phase, there are two additional source terms. One originates from the term proportional to ∂_k B̅_z (illustrated by the solid blue line) with k=x (see fig:BxdEydz_xy_tc), while the other arises from the term proportional to ∂_k U̅_z (represented by the solid green line) with k=x (fig:BxdEydz_xy_td). In the initial decay phase (around t/T_orb≈ 5.6 to 7), the most significant role is played by the nonlinear three-point term (depicted by the solid olive line with triangle markers). The term proportional to ∂_kU̅_z with k=x (fig:BxdEydz_xy_td), which previously acted as a source during the growth phase, now acts as a sink. These contributions collectively lead to the saturation of B̅_x. Notably, the terms proportional to B̅_i are negligible in both the growth and initial decay phases. Next, we take time averages from t/T_orb = 5→ 5.5, of all the terms considered in the previous figure (fig:BxdEydz_xy_t), in the MRI growth phase and show their behaviour locally in space. Such a study can explain how the x–y averaged field B̅_x(z) is generated in detail. In Fig. <ref>, the individual terms of ∂_z ℰ̅_y vary in z, and we use the same line style and color for each term as that in Fig. <ref>. We again multiply -B̅_x(z) on both sides of the ∂_z ℰ̅_y equation to understand the contribution of each term to the evolution of B̅_x, following eq:meanBx_xy. We find again that the dominant source term is the term proportional to ∂_k B̅_y with k=z (top middle panel of Fig. <ref>). Some contributions from the terms proportional to ∂_k B̅_z with k=x (bottom middle panel of Fig. <ref>) and ∂_k U̅_z with k=x (bottom right panel of Fig. <ref>) also arise in the growth of B̅_x(z), but they are not acting as sources throughout z. 
Next, we study the dynamo in the nonlinear regime. Using Fig. <ref>, we describe the mechanism by which the field B̅_x(z) alternately grows and decays via the vertical variation of ℰ̅_y. We have seen that the azimuthal EMF, ℰ̅_y, maintains a cyclic nature, i.e., ℰ̅_y can be either positive or negative at different instants of time (e.g., Fig. <ref>). Here, we perform the x–y averaged analysis at different times when ℰ̅_y is either positive or negative, to obtain the overall behaviour of both the growth and the decay of B̅_x(z) locally in space. In particular, we would like to know whether the terms which were responsible for the growth of the large-scale fields continue to operate in the nonlinear regime. The computations are performed at t/T_orb∼ 80 (top left), t/T_orb∼ 97 (top right), t/T_orb∼ 43 (bottom left), and t/T_orb∼ 107 (bottom right). The top two panels of Fig. <ref> correspond to negative ℰ̅_y, whereas the bottom two panels are for positive ℰ̅_y. In the top left panel of Fig. <ref>, we see that the field B̅_x(z) grows for z>0. The term proportional to ∂_k U̅_z (solid green curve) with k=x (not shown here) is the dominant term responsible for the growth of B̅_x(z). The other two source terms are those proportional to ∂_k B̅_y (solid grey curve) with k=z (not shown here) and ∂_k U̅_y (solid orange curve) with k=x (not shown here). The nonlinear three-point term (solid yellow/light-green curve) and the term proportional to ∂_k B̅_z (solid blue curve) with k=x (not shown here) behave like sinks. The terms proportional to B̅_i (solid purple, brown, and pink curves for i=x, y, and z, respectively) are negligible. In the top right panel of Fig. <ref>, the field B̅_x(z) is seen to be decaying at all z. The nonlinear three-point term plays a significant role in reducing the energy of B̅_x. The term proportional to ∂_k U̅_y (solid orange curve) with k=x (not shown here) acts like a sink here as well. The terms proportional to B̅ are negligible. In the bottom two panels of Fig. <ref>, we see that the field B̅_x(z) grows and decays cyclically in z. In both cases, the overall behaviour of the different terms remains the same as before. Again, the term proportional to ∂_k B̅_y (solid grey curve) with k=z (not shown here) acts like a source throughout z, whereas the terms associated with ∂_k B̅_z (solid blue curve) with k=x (not shown here) and with the time derivative (solid red curve) mostly have sink effects. There are two significant behaviours in these two cases. (a) The terms proportional to B̅_i (solid purple, brown, and pink curves for i=x, y, and z, respectively) are not negligible here, unlike in the previous cases. The term proportional to B̅_y (solid brown curve) appears to follow the signal, i.e., the term B̅_x ∂_t B̅_x. On the other hand, the term proportional to B̅_x (solid purple curve) appears opposite to the signal. (b) The nonlinear three-point term (solid olive line) behaves like either a source or a sink. Similar to the term proportional to B̅_y (solid brown curve), the nonlinear term also follows the pattern of B̅_x ∂_t B̅_x, with a much higher amplitude. In summary, the growth and the nonlinear saturation of the x–y averaged field B̅_x(z) arise through the vertical variation of the azimuthal EMF, i.e., ∂_z ℰ̅_y. The EMF ℰ̅_y consists of four different types of terms: terms proportional to the mean magnetic fields, terms proportional to the gradients of the mean magnetic fields, terms proportional to the gradients of the mean velocity fields, and a nonlinear three-point term.
The proportionality coefficients are functions of the shear rate, rotation, and correlators associated with the different fluctuating fields. The term proportional to ∂_z B̅_y plays a significant role in the growth of B̅_x in both the growth and saturation regimes. In the MRI growth regime, the term proportional to ∂_x B̅_z also contributes to the growth of B̅_x(z). The decay of B̅_x is primarily due to the three-point term, which is the reason for the nonlinear saturation of B̅_x. The roles of certain terms depend on the sign of ℰ̅_y. In the MRI nonlinear regime, the term proportional to ∂_x U̅_z helps the growth of B̅_x for ℰ̅_y < 0, whereas it has a negligible sink effect for ℰ̅_y > 0. The term proportional to B̅_y is negligible for ℰ̅_y < 0, whereas it has a dual effect (i.e., both source and sink at the same time but at different points in space) following the pattern of the signal, i.e., the term B̅_x ∂_t B̅_x, for ℰ̅_y > 0. The nonlinear three-point term acts like a turbulent resistivity for ℰ̅_y < 0, whereas it has a dual effect, acting as both source and sink, for ℰ̅_y > 0. Next, we explore the term proportional to ∂_z B̅_y of the EMF ℰ̅_y (see Eq. <ref>) in more detail. The proportionality coefficient carries physical insight into the B̅_x(z) generation mechanism. As the coefficient is a function of the shear rate, rotation, and correlators associated with kinetic and magnetic fluctuations (more specifically, R̅_zz and M̅_zz, respectively), the mechanism is named the `rotation-shear-current effect.' rotation-shear-current effect: The dynamo mechanism responsible for generating B̅_x(z) from B̅_y(z) through the rotation-shear-current effect relies on the presence of the correlators M̅_zz and R̅_zz. Understanding the formation process of these correlators is crucial for establishing the connections between the dynamo process and the angular momentum transport in the system. To investigate this, we examine the individual terms appearing in the evolution equations for M̅_zz (Eq. <ref>) and R̅_zz (Eq. <ref>), which are displayed in the upper panels of fig:Mzz_Rzz_t5 and fig:Mzz_Rzz_t20. Specifically, fig:Mzz_Rzz_t5 corresponds to the MRI growth phase, obtained through time averaging from t/T_orb = 5 → 5.5, while fig:Mzz_Rzz_t20 represents the nonlinear phase, evaluated at t/T_orb = 20. Physically, M̅_zz and R̅_zz represent twice the turbulent magnetic and kinetic energy densities in the vertical components of the fields, respectively. Consequently, both M̅_zz and R̅_zz remain positive throughout. This makes any positive term in ∂_t M̅_zz a source, whereas a negative term behaves like a sink. The same holds for the terms in ∂_t R̅_zz. By identifying the dominant source terms from the upper panels of fig:Mzz_Rzz_t5 and fig:Mzz_Rzz_t20, we examine their components in the lower panels of the same figures. We see that the dominant source term for M̅_zz is the stretching term, 2M̅_zk∂_k U̅_z (blue dotted line in Figs. <ref>a and <ref>a) with k=x (Figs. <ref>c and <ref>c), whereas the nonlinear three-point term (red dashed line in Figs. <ref>a and <ref>a) behaves as the dominant sink. This behavior remains consistent in both the growth and nonlinear phases of turbulence. Similar processes are seen for the R̅_zz evolution: the stretching term, -2R̅_zk∂_k U̅_z (blue dotted line in Figs. <ref>b and <ref>b) with k=x (Figs. <ref>d and <ref>d), acts as a source, whereas the nonlinear three-point term (red dashed line in Figs. <ref>b and <ref>b) turns out to be the sink, as usual.
Thus, the presence of a mean (vertical) velocity field is necessary for the operation of the rotation-shear-current effect. In other words, mean magnetic field dynamo is rendered inoperative without mean velocity field dynamo. Further exploration of the generation process of M̅_xz and R̅_xz is needed for a comprehensive understanding. In fig:terms_delt_Mxz, we show the individual terms that appear in the dynamical equation for M̅_xz. Since M̅_xz changes sign with spatial and temporal coordinates, we multiply M̅_xz on both sides of the equation for ∂_t M̅_xz to understand the contribution of each term. The resultant individual terms of equation for M̅_xz∂_t M̅_xz are shown in the left panel of fig:terms_delt_Mxz. Once we identify the dominant source terms, we further demonstrate the components of such specific source terms in the middle and right panels of fig:terms_delt_Mxz. We see that the dominant source terms for M̅_xz are the stretching terms: M̅_xk∂_k U̅_z and M̅_zk∂_k U̅_x (shown in dotted light green/yellow). The most significant contribution in the correlator M̅_xk∂_k U̅_z arises from k=x (middle panel). For the correlator M̅_zk∂_k U̅_x, the dominant contribution arises from k=z (rightmost panel). The nonlinear three-point term and the terms associated with the spatial gradient of mean magnetic fields act like a sink here. In summary, M̅_xx and M̅_zz act in conjunction with ∂_x U̅_z and ∂_z U̅_x respectively to produce M̅_xz. Next, we analyse the generation of R̅_xz (z), so we plot the individual terms that appear in R̅_xz∂_t R̅_xz equation in the MRI growth and nonlinear regimes as shown in fig:terms_delt_Rxz. Three different panels of fig:terms_delt_Rxz correspond to the three different instants of time at which the computations are performed: t/T_orb≃ 5 (left panel), t/T_orb≃ 50 (middle panel), and t/T_orb≃ 100 (right panel). It is difficult to identify any specific source term for R̅_xz at the initial phase from the left panel of fig:terms_delt_Rxz. We see that the dominant source term for R̅_xz in the nonlinear regime (middle and right panels) is the Coriolis force term: 2ΩR̅_yz (solid blue line). Hence, the twisting of R̅_yz via the Coriolis force produces R̅_xz at a rate of 2Ω. The nonlinear three-point term (dashed red line) behaves as the dominant sink for R̅_xz. To complete the dynamo cycle, we describe below how R̅_yz is produced. Finally, in fig:terms_delt_Ryz, we perform the x–y average analysis at three different instants of time: t/T_orb≃ 5 (left panel), t/T_orb≃ 50 (middle panel), and t/T_orb≃ 100 (right panel), to examine the generation of R̅_yz. Similar to R̅_xz, it is difficult to identify any specific source term for R̅_yz at the initial phase (left panel). However, in the nonlinear regime (middle and right panels), it is apparent that the dominant source term for R̅_yz is the term with mean magnetic field gradients (dotted magenta line), and once again the nonlinear three-point term (dashed red line) acts as a sink. Mathematically, the complete source term for R̅_yz is expressed as (F̅_yk∂_k B̅_z + F̅_zk∂_k B̅_y). However, the contribution from the term F̅_zk∂_k B̅_y is found to be negligible. Instead, the sole contribution arises from F̅_yk∂_k B̅_z with k=y. This finding is illustrated in fig:source_terms_delt_Ryz for two different times, t/T_orb≃ 50 (left panel) and t/T_orb≃ 100 (right panel). In fig:mri_rsc, we provide a schematic which summarizes the chain of production leading to the rotation-shear-current effect. 
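In code form, the chain of dominant couplings summarized in fig:mri_rsc reads as follows (a schematic of the dominant source terms only, as identified above; all other terms and the nonlinear sinks are omitted, and code units with μ_0 = 1 are assumed).

```python
def rsc_chain_sources(M, R, F, dU, dB, Omega=1.0):
    """Dominant source terms behind the rotation-shear-current effect (schematic).

    M, R, F : dicts of the relevant stress components (e.g. M['zx'] = Mbar_zx);
    dU, dB  : dicts of mean-field gradients, e.g. dU['x', 'z'] = d Ubar_z / d x.
    Only the couplings identified in the text are kept; mu_0 = 1.
    """
    return {
        # vertical turbulent energies that enter the rotation-shear-current effect
        'Mzz':  2.0*M['zx']*dU['x', 'z'],                   # stretching by d_x Ubar_z
        'Rzz': -2.0*R['zx']*dU['x', 'z'],
        # correlators feeding the above
        'Mxz':  M['xx']*dU['x', 'z'] + M['zz']*dU['z', 'x'],
        'Rxz':  2.0*Omega*R['yz'],                          # Coriolis twisting of Ryz
        'Ryz':  F['yy']*dB['y', 'z'],                       # tangling of the mean field
    }
```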
At the magnetic end of the chain, the term involving the azimuthal mean field, B̅_y, leads to the eventual production of M̅_zz, which is one part of the rotation-shear-current effect. At the kinetic end of the chain, the vertical mean field is involved in the production of R̅_zz, which is the other part of the rotation-shear-current effect. Thus, a self-sustaining dynamo cycle is established, connecting both the azimuthal and vertical mean magnetic fields to the correlators that are responsible for the production of the radial mean magnetic field. In this picture, the production of the vertical mean field has not yet been addressed; we do so in the next section. §.§ Generation of Vertical Magnetic Fields We have seen that the vertical mean field, B̅_z(x), arises in the y–z averaged analysis due to the radial variation of the azimuthal EMF, ∂_x ℰ̅_y. Here, we discuss the terms responsible for generating B̅_z via ℰ̅_y. In the left panels of fig:BzdEydx_yz_t, we show the individual terms of ℰ̅_y from the MRI growth to the early saturation phase. To enhance visual clarity, we have distributed the numerous terms of the EMF across the two left panels. The middle and right panels of fig:BzdEydx_yz_t illustrate the contributions from the different components associated with the terms proportional to the respective field gradients. To understand the contribution of each term to the evolution of B̅_z (as described in Eq. <ref>), we multiply B̅_z on both sides of the ∂_x ℰ̅_y equation (i.e., after taking the radial gradient of Eq. <ref>). We evaluate these terms at x=-0.15. The same color and line style are used for each term, as indicated in Fig. <ref>. Notably, we have utilized markers only for the most significant curves. The two crucial curves that determine the growth or decay of the magnetic field over time are the EMF term (represented by a dashed black line with star markers) and B̅_z ∂_t B̅_z (shown as a dash-dotted red line with tri-down markers). Positive values of these terms indicate the growing phase of B̅_z(x), while negative values suggest a decaying phase. The dominant term responsible for the growth of B̅_z is the one proportional to ∂_k U̅_z (depicted by a solid green line with circle markers), where k=x (refer to the bottom right panel of fig:BzdEydx_yz_t). Conversely, three terms act as sinks in the evolution of B̅_z(x): the term proportional to ∂_k B̅_z (shown as a solid blue line with triangle markers) with k=x (see top right panel of fig:BzdEydx_yz_t), the term proportional to ∂_k U̅_y (illustrated by a solid orange line with square markers) with k=x (refer to the bottom middle panel of fig:BzdEydx_yz_t), and the nonlinear three-point term (displayed as a solid olive line). Consequently, these terms contribute to the energy reduction of B̅_z(x). It is worth noting that the terms proportional to B̅_i have a negligible role in the evolution of B̅_z(x). Next, we investigate the local behavior of the terms appearing in the azimuthal EMF during the MRI growth phase by taking time averages from t/T_orb = 5 → 5.2. Such a study can explain how the y–z averaged field B̅_z(x) is generated locally via the radial variation of ℰ̅_y. To understand the individual contributions of these terms to the evolution of B̅_z(x), we again multiply B̅_z(x) on both sides of the ∂_x ℰ̅_y equation (i.e., after taking the radial gradient of Eq. <ref>), as described in Eq. <ref>. In the left panels of fig:BzdEydx_yz_t5, we present the variations of the individual terms of B̅_z ∂_x ℰ̅_y as a function of x.
Similar to fig:BzdEydx_yz_t, we distribute the numerous terms of the EMF across two left panels, while maintaining consistent line styles and colors for each term. We find that the field B̅_z(x) experiences growth in the regions x ≃ -0.4 → -0.25 and x ≃ 0.1 → 0.3. Again, the dominant term responsible for the growth of B̅_z(x) is the one proportional to ∂_k U̅_z (depicted by a solid green line with circle markers), where k=x (see the bottom right panel of fig:BzdEydx_yz_t5). Conversely, the term proportional to ∂_k U̅_y (illustrated by a solid orange line with square markers) with k=x (refer to the bottom middle panel of fig:BzdEydx_yz_t5) acts as the dominant sink term. Finally, we investigate the dynamo mechanism underlying the generation of y–z-averaged fields B̅_z(x) in the nonlinear regime of MRI. Our focus is to determine whether the terms that were responsible for the growth of large-scale fields continue to play a role in the nonlinear regime. In fig:BzdEydx_yz_t600, we perform a y–z-averaged analysis at t/t_orb≃ 97 to examine the behavior of individual terms in the B̅_z ∂_x ℰ̅_y equation as a function of x. To maintain consistency, we use the same line styles and colors for each term as indicated in fig:BzdEydx_yz_t5. The two overlapping curves in the left panel of fig:BzdEydx_yz_t600, one from the EMF term and the other from B̅_z ∂_t B̅_z, provide insights into the growth or decay of the field B̅_z(x) with respect to x. Remarkably, the overall results remain consistent in the nonlinear regime. The term proportional to ∂_k U̅_z (depicted by a solid green line with circle markers) with k=x (see the bottom right panel of fig:BzdEydx_yz_t600) continues to be the dominant term responsible for the growth of B̅_z(x). Conversely, the decay of B̅_z(x) is primarily attributed to the term proportional to ∂_k B̅_z (shown as a solid blue line with triangle markers) with k=x (see the top right panel of fig:BzdEydx_yz_t600). In summary, the growth and the nonlinear saturation of the y–z-averaged field B̅_z(x) are driven by the radial variation of the azimuthal EMF, ∂_x ℰ̅_y. In particular, the term proportional to ∂_x U̅_z of the EMF ℰ̅_y plays a dominant role in generating B̅_z(x). The proportionality coefficient of this term depends on the shear rate, rotation, and specific components of the Faraday tensor, as given by (see Eq. <ref>) - 1/Ω[ 1/(2-q)F̅_yx + 1/qF̅_xy] . We refer to this dynamo mechanism for the generation of B̅_z (x) as the “rotation-shear-vorticity effect." It is important to note that this mechanism is fundamentally distinct from the traditional cross-helicity effect <cit.>, where the turbulent cross-helicity (defined as the cross-correlation between the turbulent velocity and magnetic field, ⟨ u · b⟩) serves as the transport coefficient coupling with the large-scale vorticity. In the rotation-shear-vorticity effect, the off-diagonal components of the Faraday tensor, specifically F̅_xy and F̅_yx, play a primary role. § DISCUSSION The primary objective of this work is to gain a better understanding of the physical processes involved in sustaining MRI turbulence and dynamo in accretion disks. Despite many theoretical and computational studies, the fundamental principles behind these phenomena remain unclear. One of the main reasons for this is that the mean-field dynamo and angular momentum transport problems have traditionally been treated independently <cit.>. 
The transport theory for angular momentum has not taken into account the evolution of large-scale magnetic fields <cit.>, while the mean-field dynamo theory has not considered transport dynamics <cit.>. In addition, both theories have ignored the feedback from the evolution of mean velocity fields. However, direct numerical simulations have shown the existence of a large-scale dynamo associated simultaneously with velocity and magnetic fields in MRI-driven turbulence <cit.>. To better understand the exact nature of these interactions, one needs to develop a unified mean-field theory for MRI. With this aim, we construct a single coupled model for turbulent accretion disks and perform direct statistical simulations in a zero net-flux unstratified shearing box using statistical closure approximations. Mean-field dynamo theory is a widely used framework for examining the in situ origin of large-scale magnetic field growth and saturation. The electromotive force, a correlation between fluctuating velocity and magnetic fields, is responsible for dynamo action. In mean-field theories, the EMF is typically assumed to be a linear function of the mean magnetic field and its spatial derivatives, with the proportionality coefficients usually treated as tensors. However, this assumption may not be sufficient to fully capture the complex physical processes involved in magnetic field generation and sustenance. Several studies have shown that an additional term proportional to the spatial derivative of the mean velocity field enters the EMF equation, which can lead to rapid growth of mean magnetic fields <cit.>. Similarly, whenever an additional term participates in the EMF equation, the dynamics of magnetic field growth and saturation can change dramatically. Therefore, it is crucial to properly account for the effect of all contributions in the EMF equation to fully understand the physics of magnetic field generation and sustenance. Here, we identify a novel possibility for large-scale magnetic field generation in unstratified MRI-driven turbulent plasmas: the rotation-shear-current (RSC) effect. The mechanism arises through an off-diagonal turbulent resistivity η_yx, which has a favorable negative sign to cause mean-field dynamo action, rather than the positive sign expected for turbulent diffusion. The basic idea is that in the presence of shear and rotation, small-scale kinetic and magnetic fluctuations produce η_yx in the following form (the coefficient of ∂_z B̅_y in Eq. <ref>) η_yx = - 1/(ρΩ)[1/(2-q) M̅_zz + 1/q R̅_zz]. This is the first time the exact expression for η_yx has been identified. The respective correlators associated with magnetic and kinetic fluctuations are M̅_zz and R̅_zz, which are always positive. The factor of two arises due to rotation via the Coriolis force. Hence, for a Keplerian shear flow (i.e., q=1.5), both the magnetic (η_yx^b) and kinetic (η_yx^u) contributions to the RSC effect have a favorable negative sign. It is important to note that the term associated with the RSC effect is distinct from the Ω× J or Rädler effect <cit.>. Also, the RSC effect differs fundamentally from the traditional shear-current (SC) effect, in which rotation is absent <cit.>. The SC effect has been controversial, with mutual and separate disagreements among theories and simulations. Below, we discuss the various conflicts associated with the SC effect.
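To make the sign of these coefficients concrete, the following minimal sketch evaluates the RSC coefficient above, together with the rotation-shear-vorticity coefficient of the previous section, for hypothetical positive correlator values; all numbers are purely illustrative.

```python
def eta_yx_rsc(M_zz, R_zz, q, Omega, rho):
    """Off-diagonal turbulent resistivity of the rotation-shear-current effect:
    eta_yx = -(1/(rho*Omega)) * [M_zz/(2-q) + R_zz/q]  (see the equation above)."""
    return -(M_zz / (2.0 - q) + R_zz / q) / (rho * Omega)

def rsv_coefficient(F_yx, F_xy, q, Omega):
    """Coefficient of the ∂_x U̅_z term in the azimuthal EMF (rotation-shear-
    vorticity effect, previous section): -(1/Omega) * [F_yx/(2-q) + F_xy/q]."""
    return -(F_yx / (2.0 - q) + F_xy / q) / Omega

# Keplerian shear (q = 1.5) with positive correlators M_zz, R_zz > 0:
q, Omega, rho = 1.5, 1.0, 1.0
print(eta_yx_rsc(M_zz=0.1, R_zz=0.05, q=q, Omega=Omega, rho=rho))  # negative
# A negative eta_yx is the favorable sign for coherent dynamo action; for q = 1.5
# both the magnetic (M_zz) and kinetic (R_zz) contributions carry that sign.
```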
The traditional SC effect is a potential nonhelical large-scale dynamo driven by the off-diagonal turbulent resistivity η_yx in the presence of a large-scale velocity shear without any rotation. A negative sign of η_yx is necessary for coherent dynamo action by the SC effect. However, it remains a matter of debate whether the contributions from the turbulent kinetic and magnetic parts to η_yx have a preferred sign or not, and which one dominates. Among analytical works, those employing a spectral-τ closure found that both η_yx^b and η_yx^u have favorable negative signs to cause dynamo action <cit.>. In contrast, the second-order correlation approximation <cit.> and quasi-linear calculations <cit.> disagreed with the existence of the kinetic SC effect. For the magnetic shear-current (MSC) effect, analytical calculations using the second-order correlation approximation agree with previous spectral-τ calculations that η_yx^b has a favorable negative sign, and that the magnetic part substantially dominates over the kinetic part <cit.>. Zhou and Blackman <cit.> resolved some of these theoretical discrepancies (at least at low to moderate Re ∼ 10) by showing that the kinetic contribution η_yx^u is sensitive to the kinetic energy spectral index and can transition from positive to negative values with increasing Re, whereas the magnetic contribution η_yx^b always remains negative. However, numerical simulations do not fully agree with theory, and sometimes contradict one another. There are broadly two methods employed to determine the turbulent transport coefficients from simulations: the test-field method and the projection method. In kinetically forced quasi-linear simulations using the projection method, it has been found that η_yx^u is positive with only shear, and negative when a Keplerian rotation is added <cit.>. Conversely, the nonlinear test-field method in MHD burgulence (i.e., ignoring the thermal pressure gradient) with kinetic forcing reported a negative η_yx^u for the non-rotating case, but did not explore the case including rotation <cit.>. For the magnetic contributions, magnetically forced quasi-linear simulations using the projection method found that η_yx^b < 0 either with or without Keplerian rotation <cit.>. Unfortunately, the nonlinear test-field method in MHD burgulence with magnetic forcing found that η_yx^b > 0 for non-rotating shearing cases <cit.>. To resolve the above-mentioned discrepancies, we provide here the exact expression for η_yx (Eq. <ref>), which describes the role of the rotation and shear parameters in the kinetic and magnetic contributions. As we have already mentioned, for a differentially rotating Keplerian flow (as in the case of MRI turbulence), both η_yx^b and η_yx^u have favorable negative signs to cause dynamo action. Now, in the absence of rotation, relevant to the traditional SC effect and the MSC effect, η_yx reduces to the form η_yx = - (R̅_zz - M̅_zz)/(qρΩ). We see that the kinetic contribution η_yx^u has a favorable negative sign, whereas the magnetic contribution has the wrong sign for dynamo action. Interestingly, they will exactly cancel each other in the limit M̅_zz∼R̅_zz, which makes the SC effect inoperative (i.e., η_yx≃ 0). This supports the conclusions of Ref. <cit.> that there is no evidence for an MSC-effect-driven dynamo in magnetically forced, non-rotating MHD burgulence, but that the kinetic SC effect has a favorable negative sign when the turbulence is forced kinetically.
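A small numerical check of this shear-only limit makes the sign argument concrete; all values below are hypothetical and purely illustrative (the combination qΩ is the imposed background shear rate of the shearing box).

```python
def eta_yx_shear_only(M_zz, R_zz, q, Omega, rho):
    """Non-rotating (shear-only) limit quoted above:
    eta_yx = -(R_zz - M_zz) / (q * rho * Omega)."""
    return -(R_zz - M_zz) / (q * rho * Omega)

q, Omega, rho = 1.5, 1.0, 1.0
# Kinetic part alone (M_zz = 0): favorable negative sign.
print(eta_yx_shear_only(M_zz=0.0, R_zz=0.05, q=q, Omega=Omega, rho=rho))
# Magnetic part alone (R_zz = 0): unfavorable positive sign.
print(eta_yx_shear_only(M_zz=0.05, R_zz=0.0, q=q, Omega=Omega, rho=rho))
# Near-equipartition fluctuations: the two contributions nearly cancel,
# leaving eta_yx ~ 0 and the shear-current effect inoperative.
print(eta_yx_shear_only(M_zz=0.05, R_zz=0.05, q=q, Omega=Omega, rho=rho))
```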
This cancellation also explains the results of non-rotating, unstratified, compressible MHD simulations of driven turbulence using a compressible test-field method, which found η_yx to be slightly negative or positive but statistically indistinguishable from zero, concluding that there is no evidence of a coherent SC effect <cit.>. In addition to forced turbulence, there is growing evidence for the presence of the RSC effect in unstratified, zero net-flux shearing-box simulations of MRI-driven turbulence. Both finite-volume <cit.> and moving-mesh <cit.> simulations have observed a large-scale dynamo with a negative value of η_yx. These findings contrast with the results of Ref. <cit.>, who used a smoothed particle hydrodynamics code and observed slightly positive or nearly zero values of η_yx in their zero net-flux, unstratified simulations. The discrepancy can be attributed to the significantly weaker mean fields in their simulations, which can impact the manifestation of the RSC effect. Next, we delve into the mechanisms by which the correlators associated with fluctuations drive the RSC effect. As discussed earlier, the correlators involved in the RSC effect are M̅_zz and R̅_zz, which correspond to the magnetic and kinetic aspects, respectively. We have discussed how these correlators interact with shear and rotation to produce an off-diagonal turbulent resistivity η_yx with the appropriate sign for the large-scale dynamo. Understanding the generation of these correlators in the context of self-sustained MRI-driven turbulence is crucial. We find that the presence of a large-scale vorticity dynamo is essential for their production. Notably, the dominant contribution arises from the mean vertical velocity fields. Moreover, at the magnetic end of the chain, the term involving the azimuthal mean magnetic fields plays a significant role in generating M̅_zz, the magnetic part of the RSC effect, while at the kinetic end of the chain, the term involving the vertical mean magnetic fields is responsible for producing R̅_zz, the kinetic part of the RSC effect (see fig:mri_rsc for a more comprehensive depiction). Consequently, a self-sustaining dynamo cycle is established, linking the azimuthal and vertical magnetic fields to the correlators that give rise to radial magnetic fields through the RSC effect. Finally, we address the generation of vertical magnetic fields arising in the y–z-averaged analysis. For a given initial vertical field, the MRI can be initiated locally. However, in the absence of any large-scale dynamo action, the resulting MRI turbulence tends to disrupt the original vertical field, potentially leading to the cessation of the MRI. While considerable research on MRI dynamo mechanisms has focused on the generation of horizontally (x–y) averaged fields, the persistence of large-scale vertical magnetic fields in MRI-driven turbulence remains an intriguing question. Our findings from the y–z-averaged analysis are consistent with results from global cylindrical MRI simulations <cit.> and local shearing box simulations <cit.>, where the large-scale fields arise entirely from the EMF. In particular, the vertical mean field is driven by the radial variation of the azimuthal EMF. By formulating a general expression for the EMF, we have identified a novel dynamo mechanism responsible for the generation of large-scale vertical magnetic fields, referred to as the rotation-shear-vorticity effect. This mechanism critically depends on the presence of a large-scale vorticity dynamo.
Specifically, the azimuthal EMF contains a term proportional to the radial gradient of the vertical mean velocity field, which drives this dynamo mechanism. The exact form of the proportionality coefficient is given in eq:rotating-shear-vorticity coefficient. This coefficient arises from the interaction of the xy- and yx-components of the Faraday tensor with rotation and shear. Overall, these new findings open up exciting avenues for the advancement of mean-field dynamo theory and invite further exploration. § CONCLUSIONS In this paper, we investigated the phenomena of MRI turbulence and dynamo in a zero net-flux unstratified shearing box, employing novel DSS methods. Our main objective was to develop a unified mean-field model that combines the traditionally decoupled problems of angular-momentum transport and the large-scale dynamo in MRI-driven turbulent flows, with specific emphasis on standard Keplerian accretion disks. We consider the dynamics of turbulent stresses, including the Maxwell, Reynolds, and Faraday tensors, together with the behavior of large-scale velocity and magnetic fields, in order to understand the sustaining mechanisms of MRI turbulence without any external driving force. To accomplish this, we employ various high-order closure schemes. The three-point correlators are closed using a statistical closure model inspired by the CE2.5 approximation, while a two-scale approach is utilized to model second-order correlators involving the spatial gradient of a fluctuating field. Our principal findings can be summarized as follows: (1) The outward transport of angular momentum occurs through a positive total stress, W̅_xy = R̅_xy - M̅_xy, where M̅_xy<0 and R̅_xy>0. The dominant contribution to the total stress arises from the correlated magnetic fluctuations, rather than from their kinetic counterparts, i.e., -M̅_xy > R̅_xy, as expected. The generation process of these stresses involves intricate interactions among shear, rotation, correlators associated with mean fields, and nonlinear terms. A schematic overview of the findings is summarized in fig:mri_transport. The stretching of M̅_xx through shear gives rise to M̅_xy, which is further stretched by shear to produce M̅_yy. The large-scale magnetic field, predominantly B̅_y, acts in conjunction with the correlator ⟨ b_x ∂_y u_x ⟩, leading to the generation of M̅_xx (essentially the tangling of the large-scale field into small-scale fields). Regarding the Reynolds stress, the Coriolis force is responsible for generating R̅_xx from R̅_xy, and R̅_xy from R̅_yy. Interestingly, the nonlinear interactions between M̅_yy and R̅_yy via three-point terms contribute to the formation of R̅_yy from M̅_yy. Another significant source term for R̅_yy is the term proportional to the radial gradient of the mean azimuthal magnetic field. Therefore, the turbulent transport critically depends on the presence of large-scale magnetic fields. (2) For the large-scale magnetic field dynamo, we analyzed the individual terms in the mean-field induction equation using both x–y and y–z averaging and determined their contributions to the generation of mean fields. Our findings align well with those obtained from DNS <cit.>. With x–y averaging, the azimuthal EMF generates the radial field, which, in turn, drives the azimuthal field through the Ω-effect. The radial EMF exhibits a sink effect, resulting in a decrease in the energy of the azimuthal field.
In the case of y–z averaging, the large-scale fields originate entirely from the respective EMFs. Specifically, the azimuthal field arises from the radial variation of the vertical EMF, while the vertical field emerges from the radial variation of the azimuthal EMF. (3) To identify the relevant dynamo mechanisms, we constructed the EMF for an MRI-driven system. The EMF is expressed as a linear combination of terms proportional to the mean magnetic fields, the gradient of the mean magnetic fields, the gradient of the mean velocity fields, and a nonlinear term. The proportionality coefficients depend on shear, rotation, and statistical correlators associated with the fluctuating fields. Importantly, this EMF expression arises naturally from our model rather than being an ansatz. By analyzing the general EMF expression, we identify two crucial dynamo mechanisms, the rotation-shear-current effect and the rotation-shear-vorticity effect, which are responsible for generating the radial and vertical magnetic fields, respectively. We have provided explicit expressions for the corresponding turbulent transport coefficients in the nonperturbative limit. Notably, both mechanisms rely on the presence of a large-scale vorticity dynamo. It is important to note that both the kinetic and magnetic components of the rotation-shear-current effect have favorable signs for driving a dynamo mechanism. A schematic overview of the rotation-shear-current effect is presented in fig:mri_rsc. § ACKNOWLEDGEMENTS TM and PB gratefully acknowledge Dr. Matthias Rheinhardt for his valuable guidance and support during the initial stages of developing a special module in the framework of the Pencil Code. We are also thankful to Prof. Kandaswamy Subramanian for his insightful comments and suggestions. TM would like to thank Sahel Dey for providing an overview of the Pencil Code. The simulations were performed on the ICTS HPC clusters Contra and Sonic, and we express our gratitude to Dr. Prayush Kumar for granting access to the Sonic cluster. We acknowledge project RTI4001 of the Dept. of Atomic Energy, Govt. of India. § THE ELECTROMOTIVE FORCE In this section, we provide a comprehensive derivation of the electromotive force (EMF).
We start by presenting the evolution equations for all the components of the Faraday tensor: 𝒟_t F̅_xx = 2ΩF̅_yx + ( F̅_xk∂_kU̅_x - F̅_kx∂_kU̅_x ) + B̅_k [ ⟨ u_x∂_k u_x ⟩ + ⟨ b_x∂_k b_x ⟩/μ_0 ρ] + 1/ρ( M̅_xk∂_kB̅_x-R̅_xk∂_kB̅_x ) + 𝒯̅^F_xx, 𝒟_t F̅_xy = - qΩF̅_xx + 2ΩF̅_yy + (F̅_xk∂_kU̅_y - F̅_ky∂_kU̅_x ) + B̅_k [ ⟨ u_x∂_k u_y ⟩ + ⟨ b_y∂_k b_x ⟩/μ_0 ρ] + 1/ρ( M̅_yk∂_kB̅_x-R̅_xk∂_kB̅_y ) + 𝒯̅^F_xy, 𝒟_t F̅_xz = 2ΩF̅_yz + (F̅_xk∂_kU̅_z - F̅_kz∂_kU̅_x ) + B̅_k [ ⟨ u_x∂_k u_z ⟩ + ⟨ b_z∂_k b_x ⟩/μ_0 ρ] + 1/ρ( M̅_zk∂_kB̅_x-R̅_xk∂_kB̅_z ) + 𝒯̅^F_xz, 𝒟_t F̅_yx = -(2-q) ΩF̅_xx + (F̅_yk∂_kU̅_x - F̅_kx∂_kU̅_y ) + B̅_k [ ⟨ u_y∂_k u_x ⟩ + ⟨ b_x∂_k b_y ⟩/μ_0 ρ] + 1/ρ( M̅_xk∂_kB̅_y-R̅_yk∂_kB̅_x ) + 𝒯̅^F_yx , 𝒟_t F̅_yy = -(2-q) ΩF̅_xy - qΩF̅_yx + (F̅_yk∂_kU̅_y - F̅_ky∂_kU̅_y ) + B̅_k [ ⟨ u_y∂_k u_y ⟩ + ⟨ b_y∂_k b_y ⟩/μ_0 ρ] + 1/ρ( M̅_yk∂_kB̅_y-R̅_yk∂_kB̅_y ) + 𝒯̅^F_yy, 𝒟_t F̅_yz = -(2-q)ΩF̅_xz + (F̅_yk∂_kU̅_z - F̅_kz∂_kU̅_y ) + B̅_k [ ⟨ u_y∂_k u_z ⟩ + ⟨ b_z∂_k b_y ⟩/μ_0 ρ] + 1/ρ( M̅_zk∂_kB̅_y-R̅_yk∂_kB̅_z ) + 𝒯̅^F_yz, 𝒟_t F̅_zx = (F̅_zk∂_kU̅_x - F̅_kx∂_kU̅_z ) + B̅_k [ ⟨ u_z∂_k u_x ⟩ + ⟨ b_x∂_k b_z ⟩/μ_0 ρ] + 1/ρ( M̅_xk∂_kB̅_z-R̅_zk∂_kB̅_x ) + 𝒯̅^F_zx , 𝒟_t F̅_zy = - qΩF̅_zx + (F̅_zk∂_kU̅_y - F̅_ky∂_kU̅_z ) + B̅_k [ ⟨ u_z∂_k u_y ⟩ + ⟨ b_y∂_k b_z ⟩/μ_0 ρ] + 1/ρ( M̅_yk∂_kB̅_z-R̅_zk∂_kB̅_y ) + 𝒯̅^F_zy , 𝒟_t F̅_zz = (F̅_zk∂_kU̅_z - F̅_kz∂_kU̅_z ) + B̅_k [ ⟨ u_z∂_k u_z ⟩ + ⟨ b_z∂_k b_z ⟩/μ_0 ρ] + 1/ρ( M̅_zk∂_kB̅_z-R̅_zk∂_kB̅_z ) + 𝒯̅^F_zz, where, we have absorbed the advection terms within the operator 𝒟_t ≡∂_t - qΩ x ∂_y + U̅_k ∂_k. The right-hand side of the equations consists of five different types of terms: those proportional to the gradients of the mean velocity (∂_k U̅_i), terms proportional to the mean magnetic field (B̅_i), terms proportional to the gradients of the mean magnetic field (∂_k B̅_i), nonlinear three-point terms (𝒯̅^F_ij), and interaction terms arising from the Coriolis force and background shear. It is worth noting that our approach deviates from existing studies as we utilize such interaction terms to construct the EMF. Our primary focus is on determining the azimuthal component of the EMF, denoted as ℰ̅_y = (F̅_zx - F̅_xz). To derive ℰ̅_y, we multiply eq:F_yz by q and eq:F_zy by (q-2), and subsequently combine them. After conducting some algebra, we arrive at the following resulting equation: q 𝒟_t F̅_yz + (q-2) 𝒟_t F̅_zy = q(2-q) Ω (F̅_zx - F̅_xz) + { qF̅_yk +(2-q)F̅_ky}∂_k U̅_z - { qF̅_kz +(2-q)F̅_zk}∂_k U̅_y + 1/ρ{qM̅_zk +(2-q)R̅_zk}∂_k B̅_y - 1/ρ{qR̅_yk +(2-q)M̅_yk}∂_k B̅_z + B̅_k { q (⟨ u_y ∂_k u_z ⟩ + ⟨ b_z ∂_k b_y ⟩/μ_0 ρ) - (2-q) (⟨ u_z ∂_k u_y ⟩ + ⟨ b_y ∂_k b_z ⟩/μ_0 ρ) } + 𝒯̅_y , where, 𝒯̅_y = q 𝒯̅^F_yz - (2-q) 𝒯̅^F_zy represents the contribution arising from the three-point terms. By further algebraic manipulation, we obtain the expression for ℰ̅_y as follows: (F̅_zx - F̅_xz) = -1/q(2-q)Ω[ { - q 𝒟_t F̅_yz + (2-q) 𝒟_t F̅_zy} + { qF̅_yk +(2-q)F̅_ky}∂_k U̅_z - { qF̅_kz +(2-q)F̅_zk}∂_k U̅_y + 1/ρ{qM̅_zk +(2-q)R̅_zk}∂_k B̅_y - 1/ρ{qR̅_yk +(2-q)M̅_yk}∂_k B̅_z + B̅_k { q (⟨ u_y ∂_k u_z ⟩ + ⟨ b_z ∂_k b_y ⟩/μ_0 ρ) - (2-q) (⟨ u_z ∂_k u_y ⟩ + ⟨ b_y ∂_k b_z ⟩/μ_0 ρ) } + 𝒯̅_y] . Similarly, to derive ℰ̅_z, we combine eq:F_xx and eq:F_yy. 
After conducting some algebra, we arrive at the following resulting equation: ℰ̅_z = (F̅_xy - F̅_yx) = 1/(2-q)Ω[ - (𝒟_t F̅_xx + 𝒟_t F̅_yy) + ( F̅_xk - F̅_kx) ∂_k U̅_x + ( F̅_yk - F̅_ky) ∂_k U̅_y + 1/ρ( M̅_xk - R̅_xk) ∂_k B̅_x + 1/ρ( M̅_yk - R̅_yk) ∂_k B̅_y + B̅_k {⟨ u_x ∂_k u_x ⟩ + ⟨ u_y ∂_k u_y ⟩ + 1/μ_0 ρ(⟨ b_x ∂_k b_x ⟩ + ⟨ b_y ∂_k b_y ⟩) } + 𝒯̅_z ] , where, 𝒯̅_z = 𝒯̅^F_xx + 𝒯̅^F_yy represents the contribution arising from the three-point terms. § THE STATISTICAL CLOSURE MODEL FOR THREE-POINT CORRELATIONS The three-point correlation term that appeared in the evolution equation for the Maxwell stress is given by 𝒯̅^M_ij = ⟨ b_i b_k ∂_k u_j + b_j b_k ∂_k u_i - u_k ∂_k (b_i b_j) ⟩ . Due to the presence of several correlations between three fluctuating quantities and the involvement of spatial derivatives, applying the CE2.5 closure model to this term becomes extremely challenging. Therefore, we adopt a similar approach to the CE2.5 model, along with the mixing length concept, to express the three-point correlators in terms of two-point correlators. The procedure involves the following steps: First, we neglect terms involving mean quantities in the equations for the fluctuating velocity and magnetic fields. It is important to note that the pressure fluctuation is also neglected in this specific analysis, and the pressure-strain nonlinearity is treated separately. Thus, the contribution of the nonlinear terms to the generation of fluctuating velocity and magnetic fields can be estimated as: u_i ≃τ∂_j (M_ij - R_ij) = τ( b_j ∂_j b_i - u_j ∂_j u_i ), and, b_i ≃τ∂_j (F_ij - F_ji) = τ( b_j ∂_j u_i - u_j ∂_j b_i ) , where τ represents the correlation time scale of the turbulence, and we consider it to be ∼ 1/Ω in the context of rotating disk turbulence. Second, by selectively substituting these fluctuating quantities, we express the three-point correlators in terms of four-point terms: 𝒯̅^M_ij = τ⟨ (b_m ∂_m u_i - u_m ∂_m b_i ) b_k ∂_k u_j + (b_m ∂_m u_j - u_m ∂_m b_j) b_k ∂_k u_i - (b_m ∂_m b_k - u_m ∂_m u_k) ∂_k (b_i b_j) ⟩ . To simplify our analysis, we introduce length scales to replace two spatial derivatives present in the fourth-order correlator. One derivative, originally appearing in the exact expression for the three-point term (Eq. <ref>), is replaced by the length scale L, which represents either the vertical length of the simulation box or the disk scale height, typically of the order c_s / Ω. The other spatial derivative arises from the fluctuating fields (Eqs. <ref> and <ref>). To replace this derivative, we utilize a correlation length scale that accounts for the distance an eddy can traverse during the correlation time τ∼ 1/Ω. We adopt three different correlation lengths for the gas, magnetic, and cross fields, denoted as l_G ∼√(R)/ Ω, l_M ∼√(M)/ Ω, and l_F ∼√(MR)/ Ω, respectively. By assuming approximate randomness, a fourth-order correlator can be reduced to a product of second-order terms based on the contraction of indices with those of the derivative indices. This reduction allows us to rewrite eq:threep_M_step1 in a simplified form, yielding: 𝒯̅^M_ij = τ/L[ ( ⟨ b_m b_m ⟩/l_M⟨ u_i u_j ⟩ - ⟨ u_m b_m ⟩/l_F⟨ b_i u_j ⟩) + ( ⟨ b_m b_m ⟩/l_M⟨ u_j u_i ⟩ - ⟨ u_m b_m ⟩/l_F⟨ b_j u_i ⟩) - (⟨ b_m b_m ⟩/l_M⟨ b_i b_j + b_j b_i ⟩ - ⟨ u_m u_m ⟩/l_R⟨ b_i b_j + b_j b_i ⟩) ] . Alternatively, we can express it as: 𝒯̅^M_ij = τ/L[ 2⟨ b_m b_m ⟩/l_M⟨ u_i u_j ⟩ - 2⟨ b_m b_m ⟩/l_M⟨ b_i b_j ⟩ + 2⟨ u_m u_m ⟩/l_R⟨ b_i b_j ⟩ - ⟨ u_m b_m ⟩/l_F⟨ u_i b_j + u_j b_i ⟩] . 
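Before fixing the proportionality constants, it may help to see the reduction above in code. The sketch below is a minimal illustration under stated assumptions, not the production closure module: it identifies R_ij, M_ij, and F_ij with the one-point correlators ⟨u_i u_j⟩, ⟨b_i b_j⟩, and ⟨u_i b_j⟩, treats the correlation time and lengths as given parameters, and omits the order-unity constants c_i introduced next.

```python
import numpy as np

def threep_closure_M(R, M, F, tau, L, l_M, l_R, l_F):
    """Reduced three-point term T^M_ij from the expression above:
    (tau/L) [ 2<b_m b_m>/l_M R_ij - 2<b_m b_m>/l_M M_ij
              + 2<u_m u_m>/l_R M_ij - <u_m b_m>/l_F (F_ij + F_ji) ],
    assuming R_ij = <u_i u_j>, M_ij = <b_i b_j>, F_ij = <u_i b_j>."""
    b2 = np.trace(M)   # <b_m b_m>
    u2 = np.trace(R)   # <u_m u_m>
    ub = np.trace(F)   # <u_m b_m>
    return (tau / L) * (2.0 * b2 / l_M * R
                        - 2.0 * b2 / l_M * M
                        + 2.0 * u2 / l_R * M
                        - ub / l_F * (F + F.T))

# Illustrative call with random symmetric positive-definite stand-ins:
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); R = A @ A.T
B = rng.normal(size=(3, 3)); M = B @ B.T
F = rng.normal(size=(3, 3))            # the cross correlator need not be symmetric
T_M = threep_closure_M(R, M, F, tau=1.0, L=1.0, l_M=1.0, l_R=1.0, l_F=1.0)
print(T_M.shape)  # (3, 3)
```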
Finally, we introduce positive dimensionless constants c_i to account for the propotionality constant in the previous approximations. These constants are typically of the order of unity. The final expression for the nonlinear three-point term becomes: 𝒯̅^M_ij = 1/L[2c_1 √(M̅)R̅_ij - 2c_2 √(M̅)M̅_ij - 2c_3 √(R̅)M̅_ij - c_4 √(F̅)( F̅_ij+F̅_ji)] . Note that, in the derivation from eq:threep_M_step2 to eq:threep_M_step3, the sign of the term associated with the constant c_3 has been reversed based on the physical arguments discussed in the main text. We follow similar procedures to derive closure models for the nonlinear three-point terms 𝒯̅^R_ij and 𝒯̅^F_ij. § CLOSURE APPROXIMATION FOR SECOND-ORDER CORRELATORS INVOLVING THE SPATIAL GRADIENT OF A FLUCTUATING FIELD: A TWO-SCALE APPROACH In this section, we employ the two-scale approach <cit.> to determine the second-order correlators which involve the spatial gradient of a fluctuating field. These terms appear on the right-hand side of the stress equations (Eqs. <ref>–<ref>). Specifically, we focus on terms such as B̅_m ⟨ u_i ∂_m u_j ⟩, B̅_m ⟨ u_i ∂_m b_j ⟩, amongst others. To make progress in our analysis, it is necessary to find a way to estimate or “close" these terms. To begin, let us derive Y̅_ij = B̅_m ⟨ u_i ∂_m u_j ⟩. We consider the correlation tensor ⟨ u_i ( x_1 ) u_j ( x_2) ⟩ of two vector fields u_i and u_j, where x_1 and x_2 denote two points in space but both fields are taken at the same time. By employing Fourier transformation and the two-scale approach, we can express the correlation function as follows: ⟨ u_i ( x_1 ) u_j ( x_2) ⟩ = ∫⟨û_i ( k_1) û_j ( k_2) ⟩exp{i( k_1 · x_1 + k_2 · x_2) } d^3 k_1 d^3 k_2 = ∫ R_ij( k, X ) exp(i k· x) d^3 k , where, R_ij( k, X ) = ∫⟨û_i ( k + K / 2 ) û_j( - k + K / 2 ) ⟩exp(i K· X) d^3 K , and X = ( x_1 + x_2) / 2 , x = x_1 - x_2, K = k_1 + k_2, k = ( k_1 - k_2) / 2. Here, X and K correspond to the large scales, while x and k correspond to the small scales. We introduce the correlation tensors for velocity and magnetic fluctuations, R_ij( k, X), M_ij( k, X), and F_ij( k, X), defined as: R_ij( k, X ) = Φ ( û_i, û_j; k, X ) = ∫⟨û_i ( k + K / 2 ) û_j( - k + K / 2 ) ⟩exp(i K· X) d^3 K , M_ij( k, X) = Φ ( b̂_i, b̂_j; k, X ) = ∫⟨b̂_i ( k + K / 2) b̂_j( - k + K / 2 ) ⟩exp(i K· X) d^3 K , F_ij( k, X) = Φ ( û_i, b̂_j; k, X ) = ∫⟨û_i ( k + K / 2) b̂_j( - k + K / 2 ) ⟩exp(i K· X) d^3 K . We want to compute Y̅_ij (x=0) = ∫ Y_ij( k, X) d^3 k. From the definition of Fourier integrals, we can write the (B̅·∇) u term in Fourier space, as Ŝ_i( u,B̅; k) = i k_p∫û_i ( k- Q) B̂̅̂_p ( Q) d^3 Q . ∴ Y_ij( k, X) = ∫⟨û_i( k + K / 2 ) Ŝ_j( u, B̅; - k + K/2) ⟩exp(i K · X) d^3 K = i ∫ (-k_p + K_p / 2) ⟨û_i ( k + K/2 ) û_j(- k + K/ 2- Q) ⟩B̂̅̂_p( Q) exp(i K · X) d^3 K d^3 Q . Now, we change the integration variable K into K- Q , denoted by K'. In this way, and using Q_p B̂̅̂_p = 0 , we obtain Y_ij( k, X) = i ∫ (-k_p + K'_p / 2) ⟨û_i ( k + Q/ 2 + K'/ 2) û_j(- k - Q/ 2 + K'/ 2) ⟩B̂̅̂_p( Q) exp[i( K' + Q) · X] d^3 K' d^3 Q . Using the definition of R_ij(k, X), given in equations (<ref>), we have Y_ij( k, X) = ∫[-i k_p R_ij( k + Q/2, X) + 1/2( ∂ R_ij( k + Q / 2, X) ∂ X_p) ] B̂̅̂_p( Q) exp(i Q · X) d^3 Q . The Taylor expansion (since | Q|<< | k|) gives R_ij( k + Q/ 2, X) ≃ R_ij( k, X) + 1/2(∂ R_ij( k, X) ∂ k_l) Q_l + O( Q^2) . This yields Y_ij( k, X) = ∫[-i k_p{ R_ij( k, X) + 1/2(∂ R_ij( k, X) ∂ k_l) Q_l } + 1/2( ∂ R_ij( k, X) ∂ X_p) ] B̂̅̂_p( Q) exp(i Q · X) d^3 Q . 
∴ Y_ij( k, X) ≃[-i( k ·B̅) + 1/2 (B̅·∇) ] R_ij( k, X) - k_p R_ijl ( k, X) B̅_p,l , where R_ijl = 1/2∂ R_ij / ∂ k_l , B̅_i,j = ∇_jB̅_i , with ∇ standing for ∂ / ∂ X. Finally, we can write Y̅_ij (x=0) = ∫ Y_ij( k, X) d^3 k ≃ - B̅_m ∫ ik_m R_ij(k) d^3 k + 1/2 (B̅·∇) R̅_ij. The calculation can be simplified by excluding B̅ from the Fourier integrals. Consider the computation of W̅_ij = B̅_m ⟨ u_i ∂_m b_j ⟩ = B̅_m W̅_ijm, where W̅_ijm = ⟨ u_i ∂_m b_j ⟩. So, we have W_ijm( k, X) = i ∫ (-k_m + K_m / 2) ⟨û_i ( k + K/2 ) b̂_j(- k + K/ 2) ⟩exp(i K · X) d^3 K , = -ik_m F_ij( k, X) + 1/2∇_m F_ij( k, X). ∴W̅_ij (x=0) = B̅_m ∫ W_ijm( k, X) d^3 k = - B̅_m ∫ ik_m F_ij(k) d^3 k + 1/2 (B̅·∇) F̅_ij. Approximating the first term on the right-hand side of eq:Wij as -Tr(B̅) l^-1F̅_ij, with l^-1 = s (Ω / √(B̅^2 / μ_0 ρ)) and s being a constant, we arrive at: W̅_ij = B̅_m ⟨ u_i ∂_m b_j ⟩ = - Tr(B̅) l^-1F̅_ij + 1/2 (B̅·∇) F̅_ij.
§ REFERENCES
S. A. Balbus and J. F. Hawley, Instability, turbulence, and enhanced transport in accretion disks, Rev. Mod. Phys. 70, 1 (1998).
S. A. Balbus and J. F. Hawley, A Powerful Local Shear Instability in Weakly Magnetized Disks. IV. Nonaxisymmetric Perturbations, Astrophys. J. 400, 610 (1992).
J. F. Hawley, C. F. Gammie, and S. A. Balbus, Local Three-dimensional Magnetohydrodynamic Simulations of Accretion Disks, Astrophys. J. 440, 742 (1995).
P. Bhat, F. Ebrahimi, E. G. Blackman, and K. Subramanian, Evolution of the magnetorotational instability on initially tangled magnetic fields, Mon. Not. R. Astron. Soc. 472, 2569 (2017).
A. Brandenburg, A. Nordlund, R. F. Stein, and U. Torkelsson, Dynamo-generated Turbulence and Large-Scale Magnetic Fields in a Keplerian Shear Flow, Astrophys. J. 446, 741 (1995).
J. F. Hawley, C. F. Gammie, and S. A. Balbus, Local Three-dimensional Simulations of an Accretion Disk Hydromagnetic Dynamo, Astrophys. J. 464, 690 (1996).
G. Lesur and G. I. Ogilvie, On self-sustained dynamo cycles in accretion discs, Astron. Astrophys. 488, 451 (2008).
O. Gressel, A mean-field approach to the propagation of field patterns in stratified magnetorotational turbulence, Mon. Not. R. Astron. Soc. 405, 41 (2010).
S. W. Davis, J. M. Stone, and M. E. Pessah, Sustained Magnetorotational Turbulence in Local Simulations of Stratified Disks with Zero Net Magnetic Flux, Astrophys. J. 713, 52 (2010).
J.-M. Shi, J. M. Stone, and C. X. Huang, Saturation of the magnetorotational instability in the unstratified shearing box with zero net flux: convergence in taller boxes, Mon. Not. R. Astron. Soc. 456, 2273 (2016).
P. Bhat, F. Ebrahimi, and E. G. Blackman, Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability, Mon. Not. R. Astron. Soc. 462, 818 (2016).
O. Zier and V. Springel, Simulating the magnetorotational instability on a moving mesh with the shearing box approximation, Mon. Not. R. Astron. Soc. 517, 2639 (2022).
S. M. O'Neill, C. S. Reynolds, M. C. Miller, and K. A. Sorathia, Low-frequency Oscillations in Global Simulations of Black Hole Accretion, Astrophys. J. 736, 107 (2011).
K. Beckwith, P. J. Armitage, and J. B. Simon, Turbulence in global simulations of magnetized thin accretion discs, Mon. Not. R. Astron. Soc. 416, 361 (2011).
J. F. Hawley, S. A. Richers, X. Guan, and J. H. Krolik, Testing Convergence for Global Accretion Disks, Astrophys. J. 772, 102 (2013).
E. R. Parkin and G. V. Bicknell, Global simulations of magnetorotational turbulence - I. Convergence and the quasi-steady state, Mon. Not. R. Astron. Soc. 435, 2281 (2013).
J. D. Hogg and C. S. Reynolds, The Influence of Accretion Disk Thickness on the Large-scale Magnetic Dynamo, Astrophys. J. 861, 24 (2018).
P. Dhang, A. Bendre, P. Sharma, and K. Subramanian, Characterizing the dynamo in a radiatively inefficient accretion flow, Mon. Not. R. Astron. Soc. 494, 4854 (2020).
S. Kato and A. Yoshizawa, A Model of Hydromagnetic Turbulence in Accretion Disks. II, Publ. Astron. Soc. Jpn. 47, 629 (1995).
G. I. Ogilvie, On the dynamics of magnetorotational turbulent stresses, Mon. Not. R. Astron. Soc. 340, 969 (2003).
M. E. Pessah, C.-K. Chan, and D. Psaltis, Local Model for Angular-Momentum Transport in Accretion Disks Driven by the Magnetorotational Instability, Phys. Rev. Lett. 97, 221103 (2006).
H. K. Moffatt, Magnetic field generation in electrically conducting fluids (Cambridge University Press, Cambridge, England, 1978).
F. Krause and K. H. Raedler, Mean-field magnetohydrodynamics and dynamo theory (Pergamon Press, Oxford, 1980).
A. Brandenburg and K. Subramanian, Astrophysical magnetic fields and nonlinear dynamo theory, Phys. Rep. 417, 1 (2005).
E. N. Parker, Cosmical magnetic fields. Their origin and their activity (Clarendon Press, Oxford, 1979).
O. Gressel and M. E. Pessah, Characterizing the Mean-field Dynamo in Turbulent Accretion Disks, Astrophys. J. 810, 59 (2015).
R. Wissing, S. Shen, J. Wadsley, and T. Quinn, Magnetorotational instability with smoothed particle hydrodynamics, Astron. Astrophys. 659, A91 (2022).
D. W. Hughes and M. R. E. Proctor, Large-Scale Dynamo Action Driven by Velocity Shear and Rotating Convection, Phys. Rev. Lett. 102, 044501 (2009).
T. A. Yousef, T. Heinemann, A. A. Schekochihin, N. Kleeorin, I. Rogachevskii, A. B. Iskakov, S. C. Cowley, and J. C. McWilliams, Generation of Magnetic Field by Combined Action of Turbulence and Shear, Phys. Rev. Lett. 100, 184501 (2008).
E. T. Vishniac and A. Brandenburg, An Incoherent α-Ω Dynamo in Accretion Disks, Astrophys. J. 475, 263 (1997).
N. A. Silant'ev, Magnetic dynamo due to turbulent helicity fluctuations, Astron. Astrophys. 364, 339 (2000).
M. R. E. Proctor, Effects of fluctuation on dynamo models, Mon. Not. R. Astron. Soc. 382, L39 (2007).
T. Heinemann, J. C. McWilliams, and A. A. Schekochihin, Large-Scale Magnetic Field Generation by Randomly Forced Shearing Waves, Phys. Rev. Lett. 107, 255004 (2011).
D. Mitra and A. Brandenburg, Scaling and intermittency in incoherent α-shear dynamo, Mon. Not. R. Astron. Soc. 420, 2170 (2012).
S. Sridhar and N. K. Singh, Large-scale dynamo action due to fluctuations in a linear shear flow, Mon. Not. R. Astron. Soc. 445, 3770 (2014).
I. Rogachevskii and N. Kleeorin, Electromotive force and large-scale magnetic dynamo in a turbulent flow with a mean shear, Phys. Rev. E 68, 036301 (2003).
I. Rogachevskii and N. Kleeorin, Nonlinear theory of a “shear-current” effect and mean-field magnetic dynamos, Phys. Rev. E 70, 046310 (2004).
J. Squire and A. Bhattacharjee, Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows, Phys. Rev. Lett. 115, 175003 (2015).
A. Yoshizawa and N. Yokoi, Turbulent Magnetohydrodynamic Dynamo for Accretion Disks Using the Cross-Helicity Effect, Astrophys. J. 407, 540 (1993).
A. Brandenburg and V. Urpin, Magnetic fields in young galaxies due to the cross-helicity effect, Astron. Astrophys. 332, L41 (1998).
E. G. Blackman, Mean Magnetic Field Generation in Sheared Rotators, Astrophys. J. 529, 138 (2000).
N. Yokoi, Cross helicity and related dynamo, Geophys. Astrophys. Fluid Dyn. 107, 114 (2013).
K. H. Rädler, Investigations of spherical kinematic mean-field dynamo models, Astron. Nachr. 307, 89 (1986).
F. Rincon, G. I. Ogilvie, M. R. E. Proctor, and C. Cossu, Subcritical dynamos in shear flows, Astron. Nachr. 329, 750 (2008).
F. Rincon, G. I. Ogilvie, and C. Cossu, On self-sustaining processes in Rayleigh-stable rotating plane Couette flows and subcritical transition to turbulence in accretion disks, Astron. Astrophys. 463, 817 (2007).
F. Ebrahimi and E. G. Blackman, Radially dependent large-scale dynamos in global cylindrical shear flows and the local cartesian limit, Mon. Not. R. Astron. Soc. 459, 1422 (2016).
K. Gopalakrishnan and K. Subramanian, Magnetic Helicity Fluxes from Triple Correlators, Astrophys. J. 943, 66 (2023).
E. G. Blackman, Comparisons and connections between mean field dynamo theory and accretion disc theory, Astron. Nachr. 331, 101 (2010).
E. G. Blackman and F. Nauman, Motivation and challenge to capture both large-scale and local transport in next generation accretion theory, J. Plasma Phys. 81, 395810505 (2015).
M. J. Käpylä, M. Rheinhardt, and A. Brandenburg, Compressible Test-field Method and Its Application to Shear Dynamos, Astrophys. J. 932, 8 (2022).
A. B. Bendre and K. Subramanian, Non-locality of the turbulent electromotive force, Mon. Not. R. Astron. Soc. 511, 4454 (2022).
A. Brandenburg, K. H. Rädler, M. Rheinhardt, and P. J. Käpylä, Magnetic Diffusivity Tensor and Dynamo Effects in Rotating and Shearing Turbulence, Astrophys. J. 676, 740 (2008).
K. Li, J. B. Marston, and S. M. Tobias, Direct statistical simulation of low-order dynamo systems, Proc. R. Soc. London Ser. A 477, 20210427 (2021).
P. H. Roberts and A. M. Soward, A unified approach to mean field electrodynamics, Astron. Nachr. 296, 49 (1975).
E. H. T. Collaboration, First M87 Event Horizon Telescope Results. VIII. Magnetic Field Structure near The Event Horizon, Astrophys. J. 910, L13 (2021).
A. Brandenburg et al. (Pencil Code Collaboration), The Pencil Code, a modular MPI code for partial differential equations and particles: multipurpose and multiuser-maintained, J. Open Source Softw. 6, 2807 (2021).
J. B. Marston, E. Conover, and T. Schneider, Statistics of an Unstable Barotropic Jet from a Cumulant Expansion, J. Atmos. Sci. 65, 1955 (2008).
J. Squire and A. Bhattacharjee, Statistical Simulation of the Magnetorotational Dynamo, Phys. Rev. Lett. 114, 085002 (2015).
Y. Nakao, Effects of Mean Magnetic Fields on Turbulence in Accretion Disks, Publ. Astron. Soc. Jpn. 49, 659 (1997).
M. E. Pessah, C.-K. Chan, and D. Psaltis, The fundamental difference between shear alpha viscosity and turbulent magnetorotational stresses, Mon. Not. R. Astron. Soc. 383, 683 (2008).
A. J. Liljeström, M. J. Korpi, P. J. Käpylä, A. Brandenburg, and W. Lyra, Turbulent stresses as a function of shear rate in a local disk model, Astron. Nachr. 330, 92 (2009).
K. H. Rädler and A. Brandenburg, Mean electromotive force proportional to mean flow in MHD turbulence, Astron. Nachr. 331, 14 (2010).
A. Brandenburg, Advances in mean-field dynamo theory and applications to astrophysical turbulence, J. Plasma Phys. 84, 735840404 (2018).
K.-H. Rädler and R. Stepanov, Mean electromotive force due to turbulence of a conducting fluid in the presence of mean flow, Phys. Rev. E 73, 056311 (2006).
G. Rüdiger and L. L. Kitchatinov, Do mean-field dynamos in nonrotating turbulent shear-flows exist?, Astron. Nachr. 327, 298 (2006).
S. Sridhar and K. Subramanian, Nonperturbative quasilinear approach to the shear dynamo problem, Phys. Rev. E 80, 066315 (2009).
N. K. Singh and S. Sridhar, Transport coefficients for the shear dynamo problem at small Reynolds numbers, Phys. Rev. E 83, 056309 (2011).
J. Squire and A. Bhattacharjee, Electromotive force due to magnetohydrodynamic fluctuations in sheared rotating turbulence, Phys. Rev. E 92, 053101 (2015).
H. Zhou and E. G. Blackman, On the shear-current effect: toward understanding why theories and simulations have mutually and separately conflicted, Mon. Not. R. Astron. Soc. 507, 5732 (2021).
J. Squire and A. Bhattacharjee, Coherent Nonhelical Shear Dynamos Driven by Magnetic Fluctuations at Low Reynolds Numbers, Astrophys. J. 813, 52 (2015).
M. J. Käpylä, J. Á. Vizoso, M. Rheinhardt, A. Brandenburg, and N. K. Singh, On the Existence of Shear-current Effects in Magnetized Burgulence, Astrophys. J. 905, 179 (2020).
http://arxiv.org/abs/2307.01146v2
20230703163710
AVSegFormer: Audio-Visual Segmentation with Transformer
[ "Shengyi Gao", "Zhe Chen", "Guo Chen", "Wenhai Wang", "Tong Lu" ]
cs.CV
[ "cs.CV", "cs.LG", "cs.SD", "eess.AS" ]
^1Nanjing University ^2The Chinese University of Hong Kong The combination of audio and vision has long been a topic of interest in the multi-modal community. Recently, a new audio-visual segmentation (AVS) task has been introduced, aiming to locate and segment the sounding objects in a given video. This task demands audio-driven pixel-level scene understanding for the first time, posing significant challenges. In this paper, we propose AVSegFormer, a novel framework for AVS tasks that leverages the transformer architecture. Specifically, we introduce audio queries and learnable queries into the transformer decoder, enabling the network to selectively attend to the visual features of interest. Besides, we present an audio-visual mixer, which can dynamically adjust visual features by amplifying relevant and suppressing irrelevant spatial channels. Additionally, we devise an intermediate mask loss to enhance the supervision of the decoder, encouraging the network to produce more accurate intermediate predictions. Extensive experiments demonstrate that AVSegFormer achieves state-of-the-art results on the AVS benchmark. The code is available at https://github.com/vvvb-github/AVSegFormer. AVSegFormer: Audio-Visual Segmentation with Transformer Shengyi Gao^1, Zhe Chen^1, Guo Chen^1, Wenhai Wang^2, Tong Lu^1† August 1, 2023 ==================================================================== § INTRODUCTION Audio and vision are closely intertwined modalities that play crucial roles in the perception of the world. For example, we rely heavily on auditory and visual cues to comprehend and navigate our surroundings. These underlying connections have motivated many audio-visual tasks, such as audio-visual correspondence <cit.>, audio-visual event localization <cit.>, audio-visual video parsing <cit.>, and sound source localization <cit.>. However, due to the lack of pixel-level annotations for these tasks, they are often limited to the frame/temporal level, which eventually reduces them to audio-informed image classification problems. Recently, a novel audio-visual segmentation (AVS) <cit.> task has been proposed, which aims to segment sounding objects from video frames corresponding to a given audio. It includes three sub-tasks: single sound source segmentation (S4), multiple sound source segmentation (MS3), and audio-visual semantic segmentation (AVSS). Figure <ref> illustrates the objectives of the three sub-tasks. Compared to previous audio-visual tasks, the AVS task presents a greater challenge as it requires the network to not only locate the audible frames but also delineate the shape of the sounding objects <cit.>.
This demands the alignment of multiple modalities and necessitates a detailed understanding of the scenarios. These unique characteristics of the AVS task mean that many existing methods <cit.> designed for other audio-visual tasks may be sub-optimal on AVS. Therefore, designing new methods tailored for AVS becomes necessary. For the brand-new AVS task, AVSBench <cit.> designed a strong baseline method that achieves state-of-the-art audio-visual segmentation performance. Figure <ref>(a) illustrates its network architecture, which incorporates a modality fusion module before the convolution-based decoder (e.g., Semantic FPN <cit.>) to enable audio-visual segmentation. This design is simple yet effective, but some inherent flaws of convolution still limit it. (1) The effective receptive field of convolutions is relatively small. Even with a deep decoder, the audio feature still cannot capture long-range visual dependencies, which restricts its performance. (2) Convolution is an operator with static weights, which makes it difficult to provide different visual features conditioned on the input audio. To remedy these issues, we propose AVSegFormer, a novel framework for audio-visual segmentation with transformers. A brief overview of the architecture is shown in Figure <ref>(b). First, we introduce audio queries along with learnable queries into the segmentation decoder, enabling the network to selectively attend to the visual features of interest. Second, we present an audio-visual mixer, which can dynamically adjust visual features by amplifying relevant and suppressing irrelevant spatial channels, allowing visual features to adapt to diverse audio features. Third, an intermediate mask loss is designed to enhance the supervision of the decoder, which encourages the network to produce more accurate intermediate predictions and helps refine the final segmentation outputs. We evaluate AVSegFormer on the three sub-tasks of AVS, including S4, MS3, and AVSS, with the widely used backbones ResNet-50 <cit.> and PVTv2 <cit.>. Our experimental results demonstrate that AVSegFormer significantly outperforms existing state-of-the-art methods, such as LGVT <cit.>, SST <cit.>, iGAN <cit.>, and the AVSBench baseline <cit.>. Specifically, AVSegFormer-R50 achieves 76.45, 49.53, and 24.93 mIoU on S4, MS3, and AVSS, surpassing AVSBench-R50 by 3.66, 1.65, and 4.75 mIoU, respectively. Furthermore, using PVTv2 as the backbone, AVSegFormer yields consistently higher segmentation performance on all three sub-tasks, setting new state-of-the-art records of 82.06, 58.36, and 36.66 mIoU. Overall, our contributions in this work are four-fold. * We employ audio queries and learnable queries to selectively attend to relevant visual features, overcoming the limitations of previous convolution-based approaches. * We design an audio-visual mixer, which can amplify relevant visual features and suppress irrelevant visual features in response to audio cues. * We present an intermediate mask loss that provides additional supervision, encouraging more accurate intermediate results and improving the final prediction. * We conduct extensive experiments on the three sub-tasks of AVS. These results demonstrate that our method achieves state-of-the-art performance on the AVS benchmark. § RELATED WORKS §.§ Multi-Modal Tasks In recent years, multi-modal tasks have gained significant attention in the research community due to their potential for advancing the understanding of complex real-world scenarios.
These tasks aim to exploit the complementary information from different modalities, such as vision, audio, and text, to improve the overall performance of various applications. Among these, text-visual tasks have attracted considerable interest from researchers. Numerous works focus on related tasks, such as visual question answering <cit.> and visual grounding <cit.>. In addition to text-visual tasks, audio-visual tasks are emerging as hot spots in the multi-modal field. Several related tasks include audio-visual correspondence <cit.>, audio-visual event localization <cit.>, and sound source localization <cit.>. Concurrently, some works <cit.> have proposed unified architectures capable of handling nearly all types of modalities using a consistent format for embeddings. Most of these works are based on transformer architecture <cit.>, which demonstrates a strong cross-modal capability. Their achievements highlight the reliability of transformers in the multi-modal field. As a recently proposed multi-modal task, audio-visual segmentation (AVS) <cit.> shares many commonalities with the aforementioned tasks. The pioneering works in these areas have significantly inspired our research of AVSegFormer. §.§ Sound Source Localization Sound source localization (SSL) is an important problem in the audio-visual multi-modal community and is also the most related task to audio-visual segmentation. It aims to locate the regions for sound sources in a video sequence, but the results are usually represented as heat maps. The major challenge in the SSL problem is dealing with multiple sound sources. In prior arts, <cit.> divided audio-visual features into multiple clustering centers and used center distance as a supervised signal to locate paired audio-visual information. <cit.> trained an audio-visual correspondence model to extract coarse feature representations of audio and visual signals and used Grad-CAM <cit.> to locate specific categories of features. <cit.> adopted a two-stage approach by first learning audio-visual semantics under a single sound source and then using this knowledge to help locate multiple sound sources. Besides, <cit.> tackled the problem by unraveling the concept of categories in neural networks. Although these SSL methods indicate which areas in the image emit sound, they do not clearly depict the shape of the object, which is another challenge in the AVS task. Furthermore, the above methods all rely on unsupervised learning when capturing the shape of the detected object, which may result in inaccurate localization. Nevertheless, handling multiple sound sources in SSL methods offers valuable insights and references for our work. §.§ Vision Transformer During the past few years, Transformer <cit.> has experienced rapid development in natural language processing. Following this success, the Vision Transformer (ViT) <cit.> emerged, bringing the transformer into the realm of computer vision and yielding impressive results. Numerous works <cit.> have built upon ViT, leading to the maturation of vision transformers. As the performance of vision transformers continues to advance, they are increasingly replacing CNNs as the mainstream paradigm in the field of computer vision, especially in object detection and image segmentation tasks. <cit.> proposed the DETR model and designed a novel bipartite matching loss based on the transformer architecture, paving the way for new research directions in vision transformers. 
Subsequently, improved frameworks such as Deformable DETR <cit.> and DINO <cit.> are proposed, introducing mechanisms like deformable attention and denoise training. These arts take vision transformers to new heights. The remarkable performance of vision transformers has also inspired us to apply this paradigm to AVS tasks, anticipating further advancements in the field. §.§ Image Segmentation Image segmentation is a critical visual task that involves partitioning an image into distinct segments or regions. It includes three different tasks: instance segmentation <cit.>, semantic segmentation <cit.>, and panoptic segmentation <cit.>. Instance segmentation predicts the mask of each object instance and its corresponding category, while semantic segmentation needs to classify each pixel in the image into different semantic categories. Panoptic segmentation unifies instance and semantic segmentation tasks and predicts the mask of each object instance or background segment. Early research proposed specialized models for these tasks, such as Mask R-CNN <cit.> and HTC <cit.> for instance segmentation, or FCN <cit.> and U-Net <cit.> for semantic segmentation. After panoptic segmentation was proposed, some related research <cit.> were conducted and designed universal models for both tasks. The recent introduction of the transformer has led to the development of new models that can unify all the segmentation tasks. Mask2Former <cit.> is one such model that introduces mask attention into the transformer and improves MaskFormer <cit.>. Mask DINO <cit.> is a unified transformer-based framework for both detection and segmentation. Recently, OneFormer <cit.> presented a new multi-task universal image segmentation framework with transformers. These models have brought image segmentation to a new level, demonstrating the potential of transformer architecture in vision tasks. Considering that the AVS task involves segmentation, these methods have significantly contributed to our work. § METHOD §.§ Overall Architecture Figure <ref> illustrates the overall architecture of our method. In contrast to previous CNN-based segmentation methods <cit.>, we design a query-based framework to leverage the transformer architecture. Specifically, our model combines audio queries with learnable queries, allowing it to adjust its focus on visual features dynamically. Additionally, we design an audio-visual mixer and an intermediate mask loss ℒ_ inter as auxiliary components. The audio-visual mixer aids in amplifying relevant features and suppressing irrelevant ones, while the intermediate mask loss helps supervise intermediate predictions for enhancing performance. The pipeline of AVSegFormer consists of four stages. First, a visual backbone and an audio backbone are employed to extract features from video and audio frames, respectively. Second, the transformer encoder refines the visual features and generates an initial mask feature, which serves as the basis for predicting the final mask. Third, an audio-visual mixer is utilized to amplify feature channels relevant to sounding objects while suppressing those that are irrelevant. Lastly, the transformer decoder incorporates audio queries and learnable queries, capturing richer features about sounding objects and predicting the final mask. §.§ Multi-Modal Representation Visual encoder. We follow the feature extraction process adopted in previous methods <cit.>, which involves using a visual backbone and an audio backbone to extract video and audio features, respectively. 
For videos, the dataset provides pre-extracted frame images, making the process similar to image feature extraction. Specifically, the input video frames are denoted as x_ visual∈ℝ^T× 3× H× W, in which T denotes the number of frames. Then, we use a visual backbone (e.g., ResNet-50 <cit.>) to extract hierarchical visual features ℱ_ visual, which can be written as: ℱ_ visual={ℱ_1,ℱ_2,ℱ_3,ℱ_4}, in which ℱ_i∈ℝ^T× C_i×H/2^i+1×W/2^i+1 and i ∈{1, 2, 3, 4}. C_i represents the number of output channels of the i-th stage of the visual backbone. In other words, ℱ_ visual is a list of multi-scale features, where each feature map has half the resolution of the previous one. Audio encoder. The process of audio feature extraction follows the VGGish <cit.> method. Initially, the audio is resampled to 16 kHz mono audio x_ audio∈ℝ^N_ samples× 96× 64, where N_ samples is related to the audio duration. Then, we perform a short-time Fourier transform to obtain a mel spectrum, which is computed by mapping the spectrum onto a 64th-order mel filter bank; the result is subsequently fed into the VGGish model to obtain the audio features ℱ_ audio∈ℝ^T× d_ model, where T represents the number of frames and d_ model defaults to 128 in VGGish. Feature transformation. For convenience in subsequent stages, we use multiple linear layers to unify the number of channels of all features. Specifically, all features extracted by the visual backbone and the audio backbone are transformed into D dimensions, where D also equals the embedding dimension of the transformer encoder and decoder and is set to 256 by default. §.§ Transformer Encoder The transformer encoder is responsible for multi-scale feature fusion and mask feature generation. Specifically, we collect the visual features of three resolutions (i.e., 1/8, 1/16, and 1/32), and then flatten and concatenate them as the input for the transformer encoder. After that, the output features of the transformer encoder are reshaped to their original shapes, and the 1/8-scale features are taken out separately and 2× upsampled. Finally, we add the upsampled features to the 1/4-scale features from the visual backbone and obtain the initial mask features ℱ_ mask∈ℝ^T× D× h× w, where h=H/4, w=W/4, and D is the embedding dimension of the transformer encoder and decoder. §.§ Audio-Visual Mixer As illustrated in Figure <ref>, the segmentation mask is generated based on the mask feature, which plays a crucial role in the final prediction results. However, since the audio features can vary widely, a static network may not be able to capture all of the relevant information. This limitation may hinder the model's ability to identify potential sounding objects accurately. To address this issue, we propose an audio-visual mixer as shown in Figure <ref>(b). The design of this module is based on channel attention, which allows the model to selectively amplify or suppress different visual channels depending on the audio input, improving its ability to capture complex audio-visual relationships. Specifically, the mixer learns a set of weights ω through audio-visual cross-attention and applies them to highlight the relevant channels. The whole process can be represented as follows: ω = ℱ_ audioℱ_ mask^T/√(D/n_ head)ℱ_ mask, ℱ̂_ mask=ℱ_ mask+ℱ_ mask⊙ω. Here, ℱ_ audio and ℱ_ mask represent the input audio features and the initial mask features, and ℱ̂_ mask denotes the mixed mask features. In addition, n_ head denotes the number of attention heads, which is set to 8 by default following common practice.
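To make the channel re-weighting concrete, a PyTorch-style sketch of one plausible reading of the two equations above is given below. The tensor layout, the single-head treatment, and the sigmoid normalization of the weights are our own simplifying assumptions and are not taken from the released implementation.

```python
import torch
import torch.nn as nn


class AudioVisualMixer(nn.Module):
    """Sketch of the channel-attention mixer: the audio feature re-weights visual channels."""

    def __init__(self, dim: int = 256, n_head: int = 8):
        super().__init__()
        self.scale = (dim / n_head) ** -0.5      # 1 / sqrt(D / n_head), as in the scaling above

    def forward(self, f_audio: torch.Tensor, f_mask: torch.Tensor) -> torch.Tensor:
        # f_audio: (T, 1, D) audio feature; f_mask: (T, D, h, w) initial mask feature
        T, D, h, w = f_mask.shape
        f_flat = f_mask.flatten(2).transpose(1, 2)                          # (T, h*w, D)
        # spatial scores of the audio query against every pixel embedding
        scores = torch.bmm(f_audio, f_flat.transpose(1, 2)) * self.scale    # (T, 1, h*w)
        # pool the pixel embeddings back into one weight per channel
        omega = torch.sigmoid(torch.bmm(scores, f_flat))                    # (T, 1, D); sigmoid is our own choice
        omega = omega.transpose(1, 2).unsqueeze(-1)                         # (T, D, 1, 1)
        return f_mask + f_mask * omega                                      # residual channel re-weighting
```

In this reading, the audio query first scores every spatial position and the scores are then aggregated onto the D channels, so that each channel of the mask feature is amplified or suppressed as a whole.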
§.§ Transformer Decoder Audio query. The transformer decoder is designed to learn semantic-rich features of the sounding objects. We repeat the audio feature ℱ_ 𝒶𝓊𝒹𝒾ℴ∈ℝ^T× 1× D to the number of queries N_ query, and employ it as the audio queries Q_ audio∈ℝ^T× N_ query× D. As the decoding process continues, the queries continuously aggregate visual information from the encoder's outputs. The output queries Q_ output ultimately combine the auditory and visual modalities and contain richer features of the sounding objects. Learnable query. To enhance the model's ability to capture in-depth information about the sounding object, we add learnable queries Q_ learn∈ℝ^T× N_ query× D to the audio queries Q_ audio. The learnable queries enhance our model's adaptability for various AVS tasks and datasets. Specifically, it enables the model to learn dataset-level contextual information, and adjust the attention allocated to different target categories. Furthermore, learnable queries empower the model with a more robust capability to process target semantics, ultimately improving segmentation accuracy. Mask generation. To generate the segmentation masks, we multiply the mask feature ℱ̂_ 𝓂𝒶𝓈𝓀∈ℝ^T× D× h× w obtained from the audio-visual mixer with the output queries Q_ output from the decoder. Then, an MLP is used to integrate different channels. Additionally, we introduce a residual connection to ensure that the fusion of auditory information does not result in excessive loss of visual information. Finally, the model predicts the segmentation mask ℳ through a simple linear layer: ℳ= Linear(ℱ̂_ 𝓂𝒶𝓈𝓀+ MLP(ℱ̂_ 𝓂𝒶𝓈𝓀· Q_ output)). Here, MLP(·) represents the MLP process, and Linear(·) means the linear layer. The output ℳ∈ℝ^T× N_ class× h× w is the predicted segmentation mask, with the dimension N_ class denotes the number of semantic classes. §.§ Loss Function Intermediate mask loss. With the introduction of the deep transformer decoder, achieving satisfying performance only by supervising the final prediction becomes more challenging. To address this issue, we design an intermediate mask loss ℒ_ inter as the auxiliary loss, which supervises each layer of the transformer decoder. Specifically, the intermediate mask loss is based on the cross-attention operation of each decoder layer. First, we utilize the input queries Q∈ℝ^T× N_ query× D and keys K∈ℝ^T×Σ h_iw_i× D to compute the attention map 𝒜=QK^T/√(D/n_ head)∈ℝ^T× N_ query×Σ h_iw_i. Then, we multiply the attention map 𝒜 with the output queries of the current decoder layer and feed it through a simple FPN <cit.> to predict a mask ℳ̂∈ℝ^T× N_ class× h× w, as shown in Figure <ref>(c). Finally, we calculate the Dice loss <cit.> between this predicted mask and the ground truth to obtain the intermediate mask loss ℒ_ inter. Total loss. The total loss function comprises two parts: IoU loss and mask loss. As shown in Figure <ref>(a), the IoU loss ℒ_ IoU is calculated by comparing the final segmentation mask with the ground truth. Here, we also use Dice loss <cit.> for supervision. Considering that in AVS tasks, the proportion of segmented objects occupying the entire image is relatively small, the model can better focus on the foreground and reduce interference from the background by using Dice loss. Thus, the total loss of our method is as follows: ℒ=ℒ_ IoU+λ∑_i=1^N_ layerℒ_inter^i. Here, N_ layer represents the number of decoder layers, and λ is a coefficient that controls the effect of the auxiliary loss. We choose λ=0.1 in our experiments as it performs best. 
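Both the IoU term and the intermediate terms are realized with the Dice loss, so the total objective can be sketched in a few lines of PyTorch-style pseudocode. The sigmoid activation, the smoothing constant, and the function signatures below are our own assumptions, kept only to make the formula above executable.

```python
import torch


def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Soft Dice loss over flattened masks; `eps` smooths the ratio (our choice)."""
    pred = pred.sigmoid().flatten(1)
    target = target.flatten(1)
    inter = (pred * target).sum(-1)
    union = pred.sum(-1) + target.sum(-1)
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()


def total_loss(final_mask, inter_masks, gt, lam: float = 0.1) -> torch.Tensor:
    """L = L_IoU + lambda * sum_i L_inter^i, with both terms realized by the Dice loss."""
    loss = dice_loss(final_mask, gt)                                   # supervision on the final prediction
    loss = loss + lam * sum(dice_loss(m, gt) for m in inter_masks)     # one intermediate term per decoder layer
    return loss
```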
§ EXPERIMENTS §.§ Dataset AVSBench-Object <cit.> is an audio-visual dataset specifically designed for the audio-visual segmentation task, containing pixel-level annotations. The videos are downloaded from YouTube and cropped to 5 seconds, with one frame per second extracted for segmentation. The dataset includes two subsets: a semi-supervised single sound source subset for single sound source segmentation (S4), and a fully supervised multi-source subset for multiple sound source segmentation (MS3). S4 subset: The S4 subset contains a total of 4,932 videos, with 3,452 videos for training, 740 for validation, and 740 for testing. The target objects cover 23 different categories, including humans, animals, vehicles, and musical instruments. Besides, this subset is trained in a semi-supervised manner, where each video contains five frames, but only the first frame is annotated. MS3 subset: The MS3 subset includes 424 videos, with 286 training, 64 validation, and 64 testing videos, covering the same categories as the S4 subset. Additionally, unlike S4, the MS3 subset is full-supervised with all five frames annotated in training. AVSBench-Semantic <cit.> is an extension of the AVSBench-Object dataset, which offers additional semantic labels that are not available in the original AVSBench-Object dataset. It is designed for audio-visual semantic segmentation (AVSS). In addition, the videos in AVSBench-Semantic are longer, with a duration of 10 seconds, and 10 frames are extracted from each video for prediction. It combines semi-supervised and fully-supervised manners, with labels provided for the first and first five frames of videos inherited from the S4 and MS3 subsets, respectively. For the newly collected 7,000 videos, a complete 10-frame label is provided. Overall, the AVSBench-Semantic dataset has increased in size by approximately three times compared to the original AVSBench-Object dataset, with 8,498 training, 1,304 validation, and 1,554 test videos. Metric. These benchmark datasets employ mean intersection over union (mIoU) and F-score as the evaluation metrics. §.§ Implementation Details Our method is evaluated on three AVS sub-tasks, including single sound source segmentation (S4), multiple sound source segmentation (MS3), and audio-visual semantic segmentation (AVSS). Details. We train our AVSegFormer models for the three sub-tasks using an NVIDIA V100 GPU. Consistent with previous works <cit.>, we employ AdamW <cit.> as the optimizer, with a batch size of 4 and an initial learning rate of 2× 10^-5. All video frames are resized to 224 × 224 resolution. For the S4 and MS3 subsets, each video contains 5 frames, while each video in AVSS contains 10 frames. Since the MS3 subset is quite small, we train it for 60 epochs, while the S4 and AVSS subsets are trained for 30 epochs. The encoder and decoder in our AVSegFormer both are comprised of 6 layers with an embedding dimension of 256. We set the coefficient of the proposed intermediate mask loss to 0.1 for the best performance. More detailed training settings can be found in Table <ref>. §.§ Comparison with Prior Arts To verify the effectiveness of our method, we conducted a comprehensive comparison between our AVSegFormer and existing methods on the AVS benchmark <cit.>. For fairness, we employ the ImageNet-1K <cit.> pre-trained ResNet-50 <cit.> or PVTv2 <cit.> as the backbone to extract visual features, and the AudioSet <cit.> pre-trained VGGish <cit.> to extract audio features. Comparison with methods from related tasks. 
Firstly, we compare our AVSegFormer with state-of-the-art methods from three AVS-related tasks, including sound source localization (LVS <cit.> and MSSL <cit.>), video object segmentation (3DC <cit.>, SST <cit.> and AOT <cit.>), and salient object detection (iGAN <cit.> and LGVT <cit.>). These results are collected from the AVS benchmark <cit.>, which are transferred from the original tasks to the AVS tasks. As shown in Table <ref>, our AVSegFormer exceeds these methods by large margins. For instance, on the S4 subset, AVSegFormer-R50 achieves an impressive mIoU of 76.45, which is 1.51 points higher than the best LGVT. Although LGVT has a better Swin-T <cit.> backbone, our AVSegFormer with ResNet-50 backbone still performs better regarding mIoU. In addition, AVSegFormer-PVTv2 produces an outstanding mIoU of 82.06 and an F-score of 89.9 on this subset, which is 7.12 mIoU and 2.6 F-score higher than LGVT, respectively. On the MS3 subset, AVSegFormer-R50 outperforms the best iGAN with 6.64 mIoU and 8.4 F-score, while AVSegFormer-PVTv2 further raised the bar with an exceptional improvement of 15.47 mIoU and 14.9 F-score. On the AVSS subset, our AVSegFormer-R50 yields 24.93 mIoU and 29.3 F-score, which are slightly lower than the best method AOT. Nevertheless, AVSegFormer-PVTv2 obtains an impressive performance of 36.66 mIoU and 42.0 F-score, surpassing AOT by 11.26 mIoU and 11.0 F-score, respectively. Comparison with AVSBench baseline <cit.>. Then, we compare our AVSegFormer with the AVSBench baseline, which is the current state-of-the-art method for audio-visual segmentation. As reported in Table <ref>, on the S4 subset, AVSegFormer-R50 achieves 3.66 mIoU and 1.1 F-score improvements over AVSBench-R50, while AVSegFormer-PVTv2 surpasses AVSBench-PVTv2 by 3.32 mIoU and 2.0 F-score. On the MS3 subset, AVSegFormer-PVTv2 surpasses AVSBench-PVTv2 with a margin of 4.36 mIoU and 4.8 F-score. On the AVSS subset, AVSegFormer-R50 and AVSegFormer-PVTv2 achieve significant results with an mIoU improvement of 4.75 and 6.89, and a substantial F-score improvement of 4.1 and 6.8. These results demonstrate that AVSegFormer outperforms the AVSBench baseline on all sub-tasks, becoming a new state-of-the-art method for audio-visual segmentation. §.§ Ablation Study In this section, we conduct ablation experiments to verify the effectiveness of each key design in the proposed AVSegFormer. Specifically, we adopt PVTv2 <cit.> as the backbone and conduct extensive experiments on the S4 and MS3 sub-tasks. Other training settings are the same as in Section <ref>. Number of queries. To analyze the impact of the number of queries on the model's performance, we conducted experiments with varying numbers of queries for the decoder input, specifically 1, 100, 200, and 300. Our results reveal a positive correlation between the number of queries and the model performance, with the optimal performance obtained when the number of queries was set to 300. Table <ref> presents these findings. Effect of learnable queries. We further investigated the impact of learnable queries in the decoder inputs. As shown in Table <ref>, the improvement due to the learnable queries is relatively small in the single sound source task (S4), while it brings significant improvement in the multiple sound source task (MS3). This can be attributed to the complexity of sounding objects. In S4, since there is only one sounding object that remains unchanged, its auditory features are distinct enough. 
In contrast, in MS3, it becomes relatively difficult for the model to learn target features solely from audio features, due to the mixing and change of sound sources. The learnable queries partially compensate for this limitation. The results indicate that although the concept of learnable queries is simple, it can produce excellent results in complex scenarios. Effect of audio-visual mixer. We then studied the impact of the audio-visual mixer in our model. There are two versions designed for this module, as illustrated in Figure <ref>. The cross-attention mixer (CRA) utilizes visual features as queries and audio features as keys/values for cross-attention, aiming to bring audio information into the visual features at an early stage. The channel-attention mixer (CHA) introduces the mechanism of channel attention, with audio features as queries and visual features as keys/values. As presented in Table <ref>, the design of CHA brings a greater performance improvement compared to CRA. In addition, to validate our hypothesis that the audio-visual mixer can amplify relevant features while suppressing irrelevant ones, we also visualize the features before and after the audio-visual mixer along with their predicted masks. Figure <ref> presents the visualization results. We compare the original features ℱ_ mask with the mixed features ℱ̂_ mask generated by the audio-visual mixer and select 9 channels for rendering. It is evident that for the sounding objects (right girl and dog), the mixer effectively enhances their features. Meanwhile, the non-sounding objects (left girl, table/bed, or background) experience some degree of suppression. As a result, the predicted mask without the mixer may segment the wrong target (left girl), whereas the mixer leads to a more accurate prediction of the correct sounding object. These findings align with our hypothesis and further substantiate the effectiveness of the audio-visual mixer. Effect of intermediate mask loss. We finally conducted experiments to study the impact of the intermediate mask loss. Similarly, we designed two versions of the mask reformer module, as shown in Figure <ref>. In the attention-only reformer (AR), we directly feed the attention map of each cross-attention layer into an FPN network to generate intermediate masks. In the query-mixed reformer (QR), we multiply the attention map with the output queries of the corresponding decoder layer, and then feed the generated features into the FPN. The experimental results presented in Table <ref> demonstrate that both versions bring some performance improvement, with the adopted QR outperforming AR. Qualitative analysis. We also present the visualization results of AVSegFormer compared with those of AVSBench on the three audio-visual segmentation tasks in Figure <ref>. The top row of each task displays the raw images, and the second row displays the ground truth. The last two rows display the masks predicted by AVSBench and AVSegFormer, respectively. The visualization results clearly demonstrate that our method performs better than the previous method. It has a strong ability in target localization and semantic understanding, and can accurately segment and classify the sounding objects in each task. Furthermore, in scenes with multiple sound sources, the model can effectively identify the correct sound source and accurately segment the target object. These results highlight the effectiveness and robustness of our method.
§ CONCLUSION In this paper, we propose AVSegFormer, a novel framework that leverages the power of transformers to achieve leading performance in audio-visual segmentation tasks. First, our method introduces learnable and audio queries, enabling our network to dynamically attend to relevant visual features and significantly enhance segmentation performance. Second, we design an audio-visual mixer that selectively amplifies or suppresses different visual channels, making the visual features better adapt to diverse audio inputs. Additionally, we propose an intermediate mask loss to improve the effectiveness of the training process. Our experimental results demonstrate the superior performance of AVSegFormer compared to existing state-of-the-art methods, and a series of ablation studies validate the effectiveness of our proposed components. ACM-Reference-Format § APPENDIX We listed the detailed settings of our models in Table <ref>.
http://arxiv.org/abs/2307.02119v1
20230705084745
Physics-assisted Deep Learning for FMCW Radar Quantitative Imaging of Two-dimension Target
[ "Zhuoyang Liu", "Huilin Xu", "Feng Xu" ]
eess.SP
[ "eess.SP" ]
List of abbreviations: ULA – uniform linear array; LFM – linear frequency modulation; 2D – two-dimensional; SISO – single-input single-output; RCS – radar cross section; DOI – domain of interest; FMCW – frequency-modulated continuous-wave; FISTA – fast iterative shrinkage-thresholding algorithm; ResNet – residual neural network; AWGN – additive white Gaussian noise; ReLU – rectified linear unit; SNR – signal-to-noise ratio; MSE – mean square error; SSIM – structural similarity index measure; CS – compressed sensing. Physics-assisted Deep Learning for FMCW Radar Quantitative Imaging of Two-dimension Target Zhuoyang Liu^*, Graduate Student Member, IEEE, Huilin Xu^*, Graduate Student Member, IEEE, Feng Xu^*, Senior Member, IEEE ^* Key Lab for Information Science of Electromagnetic Wave (MoE), Fudan University, Shanghai 200433, China, Email:{liuzy20,fengxu}@fudan.edu.cn, hlxu21@m.fudan.edu.cn Radar imaging is crucial in remote sensing and has many applications in detection and autonomous driving. However, the received radar signal used for imaging is enormous and redundant, which slows down real-time radar quantitative imaging and creates obstacles for downstream applications. In this paper, we propose a physics-assisted deep learning method for radar quantitative imaging that exploits the advantages of compressed sensing (CS). Specifically, the signal model for FMCW radar imaging, which uses only four antennas and part of the frequency components, is formulated in terms of matrix multiplication. The learned fast iterative shrinkage-thresholding algorithm with residual neural network (L-FISTA-ResNet) is proposed for solving the quantitative imaging problem: the L-FISTA part is developed to ensure a basic solution, and the ResNet is attached to enhance the image quality. Simulation results show that our proposed method has higher reconstruction accuracy than the traditional optimization method and pure neural networks. The effectiveness and generalization performance of the proposed strategy are verified on unseen-target imaging, denoising, and frequency-migration tasks. Radar quantitative imaging, compressed sensing, physics-assisted deep learning, L-FISTA-ResNet. § INTRODUCTION Radar imaging has attracted considerable attention in remote sensing, detection, and autonomous driving <cit.>, due to its capability of penetrating clothing and clouds while producing strong reflections from metallic materials. Quantitative radar imaging plays a key role in accurately measuring and characterizing targets and enhances the understanding of the physical properties of objects and environments. However, high-resolution imaging requires a high sampling rate, which leads to huge memory consumption and is difficult to handle <cit.>. Compressed sensing has emerged as a promising approach for radar imaging, enabling the reconstruction of the target posture and the physical properties of materials <cit.>. A large body of research employs CS-based techniques with sparsity priors to recover high-resolution target images from limited radar data, which offer stability and good interpretability <cit.>.
Unlike the traditional imaging methods, the cs-based approach, such as fista<cit.>, first models the imaging process as an optimization problem and then utilizes the regularization technique to constrain the sparsity and reconstruct the image of the interested target <cit.>. However, the iterative nature of the fista <cit.> often leads to poor computational efficiency, particularly in scenarios involving large-scale and high-resolution imaging tasks. These iteration-based algorithms exhibit high sensitivity to hyperparameter selection and require significant effort for tuning. Moreover, they may encounter challenges in capturing nonlinear relationships behind complex radar data. In recent years, deep learning approaches have made remarkable achievements in various fields <cit.>. Naturally, neural networks, characterized by their powerful feature representation and parallel processing capabilities, have emerged as novel tools for radar imaging <cit.>. Generally, neural networks are employed to establish a mapping between raw data and imaging results, facilitating faster speed and more accurate physical properties reconstruction. Nevertheless, several challenges remain. Firstly, these models rely heavily on enormous training samples, which are often scarce in real-world scenarios. Moreover, as black-box models, they lack good interpretability. Therefore, several efforts have been made to integrate physics-assisted optimization algorithms with neural networks in order to balance imaging quality, computational efficiency, and model interpretability. Wang et al. propose LFISTA-Net, which incorporates fista <cit.> and a deep neural network for precise reconstruction in mmW 3-D holography, and achieves high speed and low computational cost <cit.>. Xiang et al. present FISTA-Net's outstanding performance for diverse imaging tasks, such as Electromagnetic Tomography (EMT) with strong generalization and noise robustness <cit.>. Inspired by previous works, we consider the quantitative imaging for the 2d target with a sparse sampling of the raw data. In particular, we seek to reconstruct the rcs map of the target in case of dealing with sparse sampled signals. In the following sections, we first introduce the general fmcw radar signal model and formulate the optimization problem for rcs reconstruction. To tackle the corresponding problem, we develop a physics-assisted deep learning method, that combines the benefits of fista and the deblurred mechanism of the resnn. Specifically, We describe the standard fista, which is extended into a learnable architecture, and present the overall architecture of the proposed L-fista-resnn. Finally, we provide extensive comparison experiments to evaluate the performance of our method on the synthesized dataset. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ General FMCW Signal Model We consider the fmcw radar quantitative imaging for the 2d scenario, as shown in Fig. <ref>. The 2d target is put in the center of the doi, and the ula consisting of K antennas is deployed at 2 m from the coordinate center, parallel to the x-axis. Specifically, we mesh the 2d target into M points and all antennas are siso and work in lfm mode. 
Thus, the received radar beat signal of the k=1,...,K antenna at time step t is given by s(t,k)=∑_m=1^Mϵ_mexp(-j2π (f_0+K_r t) · 2τ_k,m),  0≤ t ≤ T, where T is the chirp duration, ϵ_m is the rcs of the m-th point of the 2d target, τ_k,m is the time delay of the echo from the k-th antenna to the m-th point, f_0 is the starting frequency, and K_r is the rate of the frequency sweep of each chirp. Let B denote the bandwidth and N_f be the number of frequency grids, then (<ref>) can be further transformed into the frequency domain, expressed as s(n,k)= ∑_m=1^Mϵ_mexp(-j2π(f_0+B/N_fn) ·2R_k,m/c) , n=0,...,N_f-1 , where c is the speed of light, and time delay τ_k,m is represented as R_k,m/c with R_k,m being the distance between the m-th point and the k-th antenna. For analysis, we mesh the doi into P grids and each grid only contains one point of the 2d target. Therefore, the radar echo of each antenna is rewritten as s(n,k)= ∑_p=1^Pϵ_pexp(-j2π(f_0+B/N_fn) ·2R_k,p/c) , n=0,...,N_f-1 , where R_k,p is the distance between the p-th grid and the k-th antenna, and ϵ_p is the rcs of the p-th grid. Obviously, if the p-th grid contains the m-th point of the 2d target, it satisfies ϵ_p = ϵ_m, otherwise ϵ_p = 0. Next, we assemble the rcs corresponding to each grid of the doi into a vector ϵ=[ϵ_1,...,ϵ_P]^T∈R^P× 1, and collect the distance between each grid to the k antenna into r_k=[R_k,1,...,R_k,P]^T∈R^P× 1. For the k-th antenna, the sensing matrix A_k is denoted by A_k = exp(-j4πfr_k^H/c) where f=[f_1,..., f_N_f]^T∈R^N_f× 1 is the frequency sweep vector with f_n=f_0+B/N_f(n-1). Then, by concatenating the sensing matrix A_k∈C^N_f× P of each antenna into a big matrix A=[A_1^H,...,A_K^H]^H∈C^N_fK× P, the radar echo s∈C^N_fK× 1 in (<ref>) with awgn v is represented by matrix forms, s=Aϵ+v. The generated radar echo in (<ref>) is huge and has redundant information about the target in these K channels. Therefore, by utilizing the sparse sampling technique for antenna channels and wide-band frequency samples, we can reduce the original radar echo in (<ref>) into an affordable memory consumption. Particularly, the number of antennas is reduced to K=4, and the interval distance of two antennas is c/2f_0. The real-time sample rate for one chirp is defined as f_ADC=B/N_f with N_f=50 to ensure the bandwidth of the down-sampled signal is the same as the original one. §.§ Problem Formulation In radar quantitative imaging, the goal is to recover the rcs map from the received radar signal which is the superposition of the scatterings from the whole doi. To address this problem, we utilize optimization approaches to find the feasible solution of (<ref>). Obviously, the sensing matrix A satisfies N_fK<<P after down-sampling, so that ϵ reconstruction is an underdetermined problem that has multiple feasible solutions. However, the number of points for the 2d target M is much smaller than P due to the space sparsity of the target in the doi <cit.>. That is the underdetermined equation in (<ref>) can be solved by using cs. To this end, the energy function is the evaluated metric of the ϵ reconstruction, given by E(ϵ)≜1/2||s-Aϵ||_2^2+λ||ϵ||_1, where λ is the penalty part for the balance of the sparsity and fidelity to the measurement. This underdetermined equation in (<ref>) is always solved by rewriting it as the following optimization problem ϵ̂ = min_ϵ E(ϵ), where ϵ̂ is the reconstructed rcs map. 
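For concreteness, the forward model above — the per-antenna sensing matrices A_k stacked into A, and the noisy echo s = Aϵ + v — can be assembled with a short NumPy sketch. The carrier frequency, DOI extent, grid size, sparsity level, and noise level used here are illustrative assumptions only and are not the simulation parameters of Table <ref>.

```python
import numpy as np

c = 3e8                                   # speed of light (m/s)
f0, B, Nf, K = 77e9, 1e9, 50, 4           # start frequency, bandwidth, frequency bins, antennas (assumed values)
P_side = 32                               # DOI meshed into P = P_side**2 grids (assumption)

f = f0 + (B / Nf) * np.arange(Nf)         # frequency sweep vector f_n = f0 + (B/Nf) * n

# DOI grid coordinates: a 1 m x 1 m square centred at the origin (illustrative)
xs = np.linspace(-0.5, 0.5, P_side)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)          # (P, 2)

# ULA of K SISO antennas placed at y = 2 m, spaced c / (2 f0) apart
ant_x = (np.arange(K) - (K - 1) / 2) * c / (2 * f0)
antennas = np.stack([ant_x, np.full(K, 2.0)], axis=-1)                # (K, 2)

# per-antenna sensing matrices A_k = exp(-j 4 pi f r_k^T / c), stacked into A of size (Nf*K, P)
A = np.concatenate([
    np.exp(-1j * 4 * np.pi * np.outer(f, np.linalg.norm(grid - a, axis=1)) / c)
    for a in antennas
])

# sparse RCS map (M << P scatterers) and the corresponding noisy echo s = A eps + v
rng = np.random.default_rng(0)
eps = np.zeros(grid.shape[0])
eps[rng.choice(eps.size, 20, replace=False)] = 1.0
s = A @ eps + 0.01 * (rng.standard_normal(A.shape[0]) + 1j * rng.standard_normal(A.shape[0]))
```

Any candidate reconstruction ϵ̂ can then be scored by the energy function E(ϵ) defined above, which is exactly what the iterative and learned solvers of the next section minimize.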
The cs algorithm in (<ref>) for solving (<ref>) is to minimize the reconstructed error while imposing the space sparsity of the target. To be concrete, we use the L_2 norm to evaluate the quantitative imaging error and the L_1 norm for the sparsity constraint. Finally, we have the constrained minimization problem in (<ref>) and propose a physics-assisted deep learning method to address it, which will be detailed in the next section. § LEARN-FISTA-RESNET FOR QUANTITATIVE IMAGING In this section, we develop the physics-assisted deep learning method for solving the fmcw radar quantitative imaging. We begin with the traditional fista and then extend it to the learn-fista-resnn. §.§ The Preliminaries of the FISTA The quantitative imaging, modeled as a constrained minimization problem in (<ref>), is a linear inverse problem. Here we introduce the standard fixed-step fista <cit.> to address it, as shown in Algorithm. <ref>. To accomplish the basic gradient descent during iterations, we utilize the smallest Lipschitz constant Lips of the gradient ∇ (1/2||s-Aϵ||_2^2) to represent the basic shrinkage step μ with <cit.> 1/μ=Lips=A^HA/λ_max, where the shrinkage step μ depends on the sensing matrix A. Let x_i for i = 0,..., M-2 be defined as the estimated rcs map during the iterative loop with M-1 being the maximum iteration. To start fista, the inputs x_0 and x_1 are initialized as zero vectors; and other parameters t_0 and 𝒮_λμ(·) in Algorithm. <ref> are defined as those in <cit.>. However, it's challenging to find the best penalty parameter λ for the specific scenario, and the fixed-step fista always requires immense iterations for convergence. These obstacles motivate learnable fista and deep learning methods, which will be explained in the next section. §.§ L-FISTA-ResNet Architecture We convert the standard fixed-step fista into a learnable architecture L-fista with the following definitions: grad_t = t_i-1/t_i+1, grad_y =(I-μA^HA), grad_s =μA^H. Based on the expressions of gradients for different temporary functions, the pipeline of the L-fista is shown in Fig. <ref>(a). L-FISTA-BLOCK is designed as rewriting each iteration of the fista into the learnable form. Specifically, the shrinkage step μ and penalty parameter λ are learned by training. The non-differentiable shrinkage operator 𝓈_λμ is replaced by the relu function with the threshold λμ being learnable. In summary, the threshold of the relu function and two gradients grad_y, grad_x are learned automatically during training rather than by handcrafted design. In the forward process, the cascaded L-fista blocks perform multiple iterations and then the output is considered as the convergent solution. Since we use the down-sampled radar echo as the measured signal, the reconstructed rcs map will be blurred and diverged. Combining with the concept of the resnn <cit.>, the residual blocks can help with deblurred the damaged images. We then refine the initial result obtained from the cascaded L-fista blocks with stacked residual blocks to generate the final reconstructed images. Fig. <ref>(b) shows the implementation of the standard Res-BLOCK which is designed as the deblurring procedure. To sum up, we propose the L-fista-resnn to achieve the quantitative imaging, where the initial result is generated by L-fista and the improved image is obtained by the resnn, as shown in Fig. <ref>(c). 
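A compact PyTorch-style sketch of this architecture is given below. The initialization of the learnable gradients from the physical matrix A, the real-valued treatment of the data, the fully connected refinement stage, and all sizes are our own simplifying assumptions for illustration; they do not reproduce the authors' implementation, in which the refinement is performed by stacked convolutional Res-blocks.

```python
import torch
import torch.nn as nn


class LFistaBlock(nn.Module):
    """One unfolded FISTA iteration with learnable step, threshold, and momentum (sketch)."""

    def __init__(self, A: torch.Tensor, mu: float, lam: float):
        super().__init__()
        P = A.shape[1]
        self.grad_y = nn.Parameter(torch.eye(P) - mu * A.T @ A)   # initialized as I - mu A^T A, then learned
        self.grad_s = nn.Parameter(mu * A.T)                      # initialized as mu A^T, then learned
        self.thresh = nn.Parameter(torch.tensor(lam * mu))        # learnable threshold lambda * mu
        self.grad_t = nn.Parameter(torch.tensor(0.5))             # learnable momentum coefficient

    def forward(self, x_prev, x_curr, s):
        y = x_curr + self.grad_t * (x_curr - x_prev)              # momentum step
        z = y @ self.grad_y.T + s @ self.grad_s.T                 # gradient step
        x_next = torch.relu(z - self.thresh)                      # shrinkage replaced by a thresholded ReLU
        return x_curr, x_next


class LFistaResNet(nn.Module):
    """Cascaded L-FISTA blocks followed by a small residual refinement network (sketch)."""

    def __init__(self, A: torch.Tensor, n_blocks: int = 20, mu: float = 1e-3, lam: float = 1e-2):
        super().__init__()
        P = A.shape[1]
        self.blocks = nn.ModuleList(LFistaBlock(A, mu, lam) for _ in range(n_blocks))
        self.refine = nn.Sequential(nn.Linear(P, P), nn.ReLU(), nn.Linear(P, P))

    def forward(self, s):
        P = self.blocks[0].grad_s.shape[0]
        x_prev = x_curr = s.new_zeros(s.shape[0], P)              # zero initialization, as in Algorithm 1
        for block in self.blocks:
            x_prev, x_curr = block(x_prev, x_curr, s)
        return x_curr + self.refine(x_curr)                       # residual correction of the L-FISTA output
```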
From the perspective of physics-assisted deep learning, we develop the L-fista to ensure a basic solution of the radar quantitative imaging problem and leverage the resnn to enhance the imaging quality, weakening the impact of radar-signal down-sampling. §.§ Loss Function Based on the L-fista-resnn architecture, we introduce the loss function design in terms of the objective function in <ref>. We consider the following hybrid loss function: L=||ϵ-ϵ̂||_2^2+λ_1||ϵ-ϵ̂||_1+λ_2||s-Aϵ̂||_2^2, where the first term uses the L_2 norm to measure the reconstruction error between the ground-truth rcs map and the prediction, and the second term applies the L_1 norm to the same difference to promote the sparsity of the solution. The last term in (<ref>) represents the fidelity to the measurement, i.e., the error between the radar echo generated from the predicted rcs map and the measured received signal. To balance reconstruction accuracy, sparsity, and measurement fidelity, the penalty parameters are chosen empirically as λ_1 = 0.1 and λ_2 = 0.05. § EXPERIMENTS In this section, we begin with a brief introduction of the dataset, metrics, and implementation details. Then, quantitative and qualitative experimental results are presented to verify the effectiveness of the proposed method compared with several baselines. Finally, we evaluate the generalization ability of the methods on unseen samples with diverse object shapes, SNRs, and center frequencies. §.§ Experimental Setups Dataset and Metrics: We conducted experiments on the widely used MNIST handwritten digit dataset. Parameters of the radar echo generation are listed in Table <ref>. We randomly select 2000 samples, with 800 for training, 200 for validation, and 1000 for testing. Two widely used metrics, mse and ssim, are chosen to evaluate the imaging quality. Baselines: Several baselines are chosen to highlight key components of the proposed L-fista-resnn. * FISTA: fista <cit.> is a powerful optimization algorithm used for sparse signal recovery and image reconstruction, as shown in Algorithm <ref>. * FISTA-ResNet: Compared with L-fista-resnn, we discard the two learnable hyperparameters in the L-fista block and instead set them to fixed values. * DNN: We replace the stacked L-fista blocks in L-fista-resnn with simple fully connected layers. Implementation details: We train the models with the Adam optimizer, with a mini-batch size of 16, for 100 epochs on a single NVIDIA GeForce RTX 3090 GPU. Starting from an initial learning rate of 1e-2, we decay the learning rate by a factor of 0.1 when the validation loss stops decreasing for over 10 epochs. In particular, for L-fista-resnn, the number of L-fista blocks is set to 20. For fista, the maximum number of iterations is 2000. For fista and fista-resnn, the hyperparameter λ is selected from [0.001, 0.005, 0.01, 0.05, 0.1] and set to the value that achieves the best performance on the training set; thus λ is set to 0.001 and 0.01 for fista and fista-resnn, respectively. §.§ Result Analysis Performance comparisons of the four methods are listed in Table <ref>. Besides, we also report model parameters and inference time per sample to evaluate the computational efficiency. Our method outperforms the baseline methods by a large margin. On the one hand, due to parallel computation and the powerful fitting ability of neural networks, the proposed method has higher efficiency and better performance compared to the traditional fista algorithm. On the other hand, the learnable fista block effectively helps maintain great performance while alleviating the dependence on a large number of samples. Qualitative results are shown in Fig.
<ref>. To verify the generalization ability of the proposed model to unseen samples, we conduct a series of experiments. In the later experiments, the models are still trained on the MNIST dataset, but are tested on samples synthesized from targets of different shapes or with different simulation settings. We randomly selected several letter-shaped targets and targets consisting of classical shapes as test targets. Fig. <ref> shows the reconstruction results of different methods, the first column is ground truths, the second column is results, and the third column is the absolute error between reconstructions and ground truths. Although L-fista-resnn is trained only on the digit-shaped MNIST dataset, it still achieved satisfactory reconstruction performance on a variety of targets with different shapes. This phenomenon reflects that L-fista-resnn effectively models the physical mapping between the input echoes and the output rcs images. For the purpose of assessing the method's robustness to noise, we add additional white Gaussian noise to echoes with varying signal-to-noise ratios to the echo data and reconstruct the rcs map using the trained model above in the testing phase. It should be emphasized that the radar echoes used for training are perfect, i.e., noise-free, during the training process. Fig. <ref> depicts visualizations of reconstructions under different snr settings and Fig. <ref> displays the performance fluctuation when snr changes. The results demonstrate that the proposed L-fista-resnn has excellent noise robustness and maintains outstanding performance even under challenging low snr conditions. Besides, fista is sensitive to the hyperparameter λ. When λ decreases, the performance increases at high snr but decreases at low snr, i.e., the noise robustness worsens Further, we also tested the trained models on unseen echo samples synthesized at different center frequencies, as shown in Fig. <ref>. It can be seen that our model has great generalization ability even at different degrees of center frequency shifts. fista-resnn shows slightly better generalization ability because its hyperparameter μ is adaptively calculated with center frequency changes when μ is fixed in L-fista-resnn. Similarly, FISTA has similar performance at different center frequencies. Thus there is a compromise between performance, computational efficiency, and frequency generalization. § CONCLUSION In this work, we achieved the fmcw radar quantitative imaging for 2d targets. With the principle of cs, we characterized the quantitative imaging as a constrained minimization problem. To address the constrained problem, we proposed a physics-assisted deep learning approach that combined the advantages of traditional optimization methods and neural networks for fmcw radar quantitative imaging. The proposed L-fista-resnn consists of two key components, the L-fista block, and the Residual block. The fista component effectively alleviated the dependence of deep learning-based models on a large number of samples while the powerful fitting ability and parallelizable natures of neural networks greatly improved the quality and speed of quantitative imaging. Quantitative and qualitative experimental results showed that L-fista-resnn outperformed the traditional fista method in terms of imaging quality and computation time. Our method not only achieved a two-orders-of-magnitude acceleration in inference but also maintained high imaging results with the ssim of up to 0.94. 
Compared with pure neural networks and the standard fista, the proposed method achieved the best compromise between computational efficiency and image quality. Moreover, we compared the imaging performance with different datasets: unseen targets dataset, noised raw data with varying snr, and raw data performed for different center frequencies. The numerical results verified the robustness and generalization ability of the proposed model and demonstrated the capability of the L-FISTA-ResNet to learn the physical mechanism behind the electromagnetic data. IEEEtran
http://arxiv.org/abs/2307.00726v1
20230703031144
Numerical exploration of the Aging effects in spin systems
[ "Roberto da Silva", "Tânia Tomé", "Mário José de Oliveira" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Physics Letters A 1 - Instituto de Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil 2 - Instituto de Física, Universidade de São Paulo, São Paulo, São Paulo, Brazil An interesting concept that has been underexplored in the context of time-dependent simulations is the correlation of the total magnetization, C(t). One of its main advantages over directly studying the magnetization is that we do not need to meticulously prepare initial magnetizations, because the evolutions are computed from initial states with spins that are independent and completely random. In this paper, we take an important step in demonstrating that, even for time evolutions starting from other initial conditions, C(t_0,t), a suitable scaling can be performed to obtain universal power laws. We specifically consider the significant role played by the second moment of the magnetization. Additionally, we complement the study with a recently developed random-matrix analysis, which is applied to determine the critical properties of the system. Our results show that the aging in the time series of the magnetization influences the spectral properties of the matrices and their ability to determine the critical temperature of the system. § INTRODUCTION Which temporal phase of the spin-system evolution contains information regarding the criticality of a physical system? Furthermore, is it feasible to retrieve certain initial behaviors of such a system following a period of aging? Particularly, in the context of time-dependent Monte Carlo (MC) simulations, we are asking whether it is possible to observe temporal power-law behavior, as predicted by short-time dynamics theory <cit.>, at the critical temperature, even when the spatial correlation ⟨σ _iσ _j⟩≠ 0 (as it decays algebraically). This behavior is expected for initial conditions where ⟨σ _iσ _j⟩ =0. Let us consider the question from an even more specific point of view. Consider the Ising model on a d-dimensional lattice under an initial condition where the spins are randomly and equiprobably distributed, such that ⟨σ _iσ _j⟩ =0 and ⟨σ _i⟩ =0. After a certain time t_0, we observe that ⟨σ _iσ _j⟩≠ 0, but ⟨σ _i⟩ =0 still holds true. Therefore, if we initiate the simulations with this new initial condition, can we obtain the same temporal power laws with the same exponents? In other words, is aging an important factor? An interesting measure in the context of nonequilibrium time-dependent Monte Carlo simulations (TDMCS) is the spin-spin autocorrelation <cit.>. Let us consider its calculation for an arbitrary t_0: A(t,t_0)=1/N⟨∑_i=1^Nσ _i(t)σ _i(t_0)⟩, with the average taken over different time evolutions from different random initial configurations. In a highly informative and comprehensive reference by Henkel and Pleimling <cit.>, it has been demonstrated that when a system is prepared at a high temperature and suddenly quenched to the critical temperature, the evolution of A(t,t_0) in different spin systems suggests the presence of a dynamical scaling behavior underlying the aging process. The same behavior can be observed in <cit.>. This behavior can be described by the following equation: A(t,t_0)=t_0^-bf(t/t_0) Here, the parameter b is defined as b=(d-2+η )/z, where d represents the dimensionality of the system and η is a critical exponent. The function f(x) exhibits the property f(x)∼ x^-d/z as x approaches infinity.
An alternative approach to investigate the early stages of time evolution in spin systems is to examine the correlation of the total magnetization. This correlation is defined as: C(t)=1/N^2⟨∑_i=1^N∑_j=1^Nσ _i(t)σ _j(0)⟩ =⟨ m(t)m(0)⟩ Here, N represents the number of spins in the system, and σ _i(t) denotes the spin value of spin i at time t. The angular brackets ⟨·⟩ denote the average over different time evolutions and initial configurations. This correlation provides insights into the relationship between the magnetization at time t and the initial magnetization at time 0. Tome and Oliveira <cit.> proposed and demonstrated that the correlation C(t) follows a power-law behavior, C(t) ∼ t^θ, when the initial magnetization ⟨ m_0⟩ is zero and the spins at time 0 are equally likely to be +1 or -1 p(σ _j(0)=+1)= p(σ _j(0)=-1)=1/2, for j=1,...,N. The exponent θ is the same as the magnetization exponent obtained in time-dependent simulations within the context of short-time dynamics <cit.>. However, in those simulations, the initial conditions require a fixed initial magnetization m_0<<1, which necessitates preparation and extrapolation as m_0 approaches 0. This approach is computationally more demanding. At this juncture, it becomes intriguing to investigate the behavior of spin systems when we examine the total correlation between time t_0 and a subsequent time t, denoted as C(t,t_0)=⟨ m(t)m(t_0)⟩. This correlation will be contingent upon both the initial time t_0 and the final time t. Additionally, we can explore how aging influences the determination of criticality within the system. By defining Δ t=t-t_0, we can assert that such behavior is dependent on the waiting time, denoted as t_0, and the observation time, represented by Δ t. For this analysis, we employ a recent technique that involves constructing Wishart-like matrices using the time evolutions of magnetization. The spectral properties of these matrices are highly valuable in capturing the critical properties at the initial stages of the evolution, as demonstrated in our previous works. Therefore, we conducted computational experiments to investigate the behavior of this method when we vary t_0 while keeping Δ t fixed. In the following section, we provide comprehensive details regarding our scaling approach for C(t,t_0), the fundamental properties of the Wishart-like spectra, as well as pedagogical studies to substantiate the forthcoming results in this work. Subsequently, we present our findings, followed by concluding remarks in the final section. § METHODS AND PREPATORY STUDIES The total correlation, as defined by Equation <ref>, assumes averages over random initial configurations of a system with spins σ _j(0), where j=1,...,N, independently chosen according to: p(σ _j(0)=+1)= p(σ _j(0)=-1)=1/2 (high temperature). In this case, if N_+(t) represents the number of spins up and N_-(t) represents the number of spins down, we can express it as follows: m(0)=m_0=1/N[ N_+(0)-N_-(0)] ⟨ m_0⟩ =0. But, ⟨[ N_+-N_-] ^2⟩ =⟨ N_+^2⟩ +⟨ N_-^2⟩ -2⟨ N_+(N-N_+)⟩. If ⟨ N_+^2⟩ =⟨ N_-^2⟩ =N/4+N^2/4 and ⟨ N_+(N-N_+)⟩ =N⟨ N_+⟩ -⟨ N_+^2⟩ =N^2/4-N/4. 
Therefore, we have ⟨( Δ m_0) ^2⟩ =1/N, which implies a standard normal distribution for the initial magnetization when N>>1 given by: p(m_0)=√(N/2π)e^-N/2m_0^2 However, when we consider the time evolution of different time-series starting from these prepared initial conditions, using, for example, the Metropolis dynamics as a prescription for these evolutions, the initial distribution of magnetization degrades. This degradation can be described for arbitrary t by distribution: P(m(t))=1/√(2π At^ξ)exp[ -m(t)^2/2At^ξ] given that: [ ⟨ m(t)^2⟩ -⟨ m(t)⟩ ^2 ≈ ⟨ m(t)^2⟩; ; = A t^ξ ] This is expected since according to short-time theory, ⟨ m(t)⟩ =0, and for m_0≈ 0, one would expect that ⟨ m^2⟩∼ t^ξ, where ξ =(d- 2β/ν)/z. Here A is a constant that can be fitted. Figure <ref> pedagogically illustrates this aging phenomenon. First, in Fig. <ref> (a), we observe different evolutions of magnetization in the two-dimensional Ising model for various values of t_0, while keeping the observation time Δ t constant at 300. We can observe histograms of magnetization for different values of t_0 in Fig. <ref> (b), following a Gaussian distribution (Eq. <ref>) with variance defined by Eq. <ref>. The Gaussian behavior is disrupted at equilibrium (t_0∼ 4000). The inset plot in the same figure demonstrates that ⟨ m^2⟩ -⟨ m⟩ ^2 and ⟨ m^2⟩ exhibit the same power-law behavior, as ⟨ m⟩≈ 0. Fitting Eq. <ref> yields the well-known result from the literature: ξ =0.801(1) for ⟨ m^2⟩, even without starting from initial configurations with m_0=0, as is traditionally done in computer simulations within the context of short-time dynamics. Additionally, we obtained A=0.00026(3). From a simulation standpoint, the idea is to interrupt the simulation while preserving the configuration at time t_0. This configuration is then used as the initial state to calculate the correlation C(t,t_0). The first crucial aspect is to determine if there is a finite time scaling for C(t,t_0) as predicted by A(t,t_0). In other words, for very large t_0, C(t,t_0) does not depend on t_0. However, according to scaling theory, for sufficiently large but not excessively large t_0, C(t,t_0) still exhibits a dependence on t_0. In this paper, we aim to address this point and propose a conjecture regarding the aging time scaling law: C(t,t_0)=t_0^ξg(t/t_0) where ξ =d-η/z, and g(x)∼ x^θ. Here, η = 2β/ν, and based on short-time theory, the exponent ξ is precisely expected in the second moment of magnetization ⟨ m^2⟩∼ t^ξ when starting from random initial conditions with m_0 exactly equal to 0. Using the Ising model as a simplification, we aim to verify such scaling. We will demonstrate that considering the magnetization distribution from Eq. <ref> to select spins is sufficient to reproduce C(t,t_0). However, we must also scale the time by t_0 to account for the effects of non-zero spatial correlations ⟨σ _iσ _j⟩. Another important aspect addressed in this paper is the determination of the critical properties of the system when it is out of equilibrium. Specifically, we investigate the role of t_0 in determining the critical properties of the system. To examine this, we explore the effects of t_0 on the short-time properties of the system using a recent method based on random matrices. We developed this method to determine criticality by analyzing spectral quantities obtained from Wishart-like matrices constructed from the time evolutions of magnetization. 
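Before describing the spectral method in detail, the simulation protocol just described — evolve from an uncorrelated random start, interrupt at the waiting time t_0, and then continue the evolution while correlating the total magnetization — can be summarized by a minimal NumPy sketch. The lattice size, temperature, number of runs, and the unoptimized single-spin Metropolis update below are our own illustrative choices and are far smaller than the production simulations reported later.

```python
import numpy as np

L = 64                                           # lattice size (illustrative, not the L used in the paper)
Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))            # exact critical temperature of the 2D Ising model
rng = np.random.default_rng(0)


def metropolis_sweep(spins, beta):
    """One MC step: L*L single-spin Metropolis updates at inverse temperature beta."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n), rng.integers(n)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]


def total_correlation(t0=100, dt=300, n_runs=200):
    """Estimate C(t0 + k, t0) = <m(t0 + k) m(t0)>, k = 0..dt, at T = Tc."""
    beta, acc = 1.0 / Tc, np.zeros(dt + 1)
    for _ in range(n_runs):
        spins = rng.choice(np.array([-1, 1]), size=(L, L))   # uncorrelated start, <m0> = 0
        for _ in range(t0):                                   # age the system up to the waiting time t0
            metropolis_sweep(spins, beta)
        m_t0 = spins.mean()
        for k in range(dt + 1):
            acc[k] += m_t0 * spins.mean()                     # record m(t0 + k) before the next sweep
            metropolis_sweep(spins, beta)
    return acc / n_runs
```

Dividing the resulting C(t_0+Δt, t_0) by ⟨ m(t_0)^2⟩ and plotting it against t/t_0 then provides a direct numerical check of the scaling conjecture stated above.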
In the current manuscript we will demonstrate that the spectra are significantly influenced when large values of t_0 are used. In the next subsection, we provide a brief description of this method. §.§ Criticality in the nonequilibrium regime using Wishart-like matrices of magnetization The signature of criticality out of equilibrium seems to be even more prominently manifested than what can be observed when uncorrelated systems (T→∞) are brought to finite temperatures, particularly around T≈ T_C. In a recent study <cit.>, we examined the response of the spectra of random matrices constructed from time evolutions of the magnetization in the early stages of the evolution of a spin system. Our findings demonstrated the influence of criticality out of equilibrium on the spectral properties of statistical mechanics systems. We specifically utilized the short-range two-dimensional Ising model as a test model, as well as long-range mean-field systems <cit.>. To conduct such a test, we need to construct the magnetization matrix element m_ij, which represents the magnetization of the j-th time series at the i-th Monte Carlo step of a system with N spins. Here, i ranges from 1 to N_MC, and j ranges from 1 to N_sample. Therefore, the magnetization matrix M has dimensions N_MC× N_sample. To analyze spectral properties, an interesting alternative is to consider not M, but the square matrix of size N_sample× N_sample: G=1/N_MCM^TM, where G_ij=1/N_MC∑_k=1^N_MCm_kim_kj, which is known as the Wishart matrix <cit.>. At this stage, instead of working with m_ij, it is more convenient to utilize the matrix M^∗, defining its elements with the standardized variables: m_ij^∗=(m_ij-⟨ m_j⟩ )/√(⟨ m_j^2⟩ -⟨ m_j⟩ ^2), where ⟨ m_j^k⟩ =1/N_MC∑_i=1^N_MCm_ij^k. Therefore, G_ij^∗=(⟨ m_im_j⟩ -⟨ m_i⟩⟨ m_j⟩)/(σ _iσ _j), where ⟨ m_im_j⟩ =1/ N_MC∑_k=1^N_MCm_kim_kj and σ _i=√(⟨ m_i^2⟩ -⟨ m_i⟩ ^2). Analytically, if the m_ij^∗ are uncorrelated random variables, the density of eigenvalues σ (λ ) of the matrix G^∗ follows the well-known Marchenko-Pastur distribution, expressed as: σ (λ )= N_MC/2π N_sample·√((λ -λ _-)(λ _+-λ ))/λ if λ _-≤λ≤λ _+, and σ (λ )=0 otherwise, where λ _±=1+N_sample/N_MC± 2√(N_sample/N_MC). However, for T≠ T_c, σ (λ ) does not follow Eq. <ref>. The behavior of σ (λ ) obtained from MC time series simulated at different temperatures suggests a strong conjecture: the average eigenvalue ⟨λ⟩ =∫_0^∞λσ (λ )dλ reaches a minimum at the critical temperature, while the variance var(λ )=⟨λ ^2⟩ -⟨λ⟩ ^2 exhibits an inflection point at the same critical temperature, where ⟨λ ^2⟩ =∫_0^∞λ ^2σ (λ )dλ. Alternatively, a more precise identification can be made using the negative of the derivative of the variance: c=-∂ var(λ )/∂ T. This behavior is also observed in the Potts model <cit.>. Therefore, the idea here is to observe whether such fluctuations behave differently when the rows of M run from t_0 to t=t_0+Δ t, while keeping Δ t=N_MC fixed. § RESULTS We conducted Monte Carlo (MC) simulations of the two-dimensional model, varying t_0. In all numerical experiments of this study, we used L=128. Starting from random initial configurations with ⟨ m_0⟩ =0, we calculated C(t,t_0) by averaging over N_run=40000 different runs. We explored different values of t_0. The initial question to address is determining the optimal value of τ for which C(t,t_0), plotted against t-t_0+τ on a log-log scale, follows a power law. Is τ approximately equal to t_0?
§ RESULTS

We conducted two-dimensional Monte Carlo (MC) simulations, varying t_0. In all numerical experiments of this study we used L = 128. Starting from random initial configurations with ⟨ m_0⟩ = 0, we calculated C(t,t_0) considering averages over N_run = 40000 different runs, and we explored different values of t_0. The initial question to address is the determination of the optimal value of τ for which C(t,t_0), plotted as a function of t - t_0 + τ on a log-log scale, follows a power law. Is τ approximately equal to t_0? To check whether τ ≈ t_0, for each t_0 we vary τ and examine the behavior of C(t,t_0) as a function of t - t_0 + τ on a log-log scale for different values of τ. Figure <ref> (a) depicts the case t_0 = 100. The power-law behavior occurs (qualitatively) when τ ≈ t_0 = 100. This is supported by the maximum of the coefficient of determination of the fit shown in Fig. <ref> (b), which occurs when τ ≈ t_0 = 100. The values of θ are represented in different colors according to the gradient in the legend. The optimal situation corresponds to θ ≈ 0.19, as expected. Figure <ref> (c) illustrates the linear behavior of the optimal value of τ as a function of t_0. The linear fit yields τ = b t_0 with b = 1.03 ± 0.02. Error bars are obtained from 5 different seeds. Finally, Figure <ref> (d) displays the corresponding values of θ for the optimal values of τ found for the different t_0 values. The green line corresponds to the value observed in short-time simulations of Ising-like models (in the same universality class) reported in various references (see, for example, <cit.>).

We now test the scaling relation given by Equation <ref>. To do so, we consider the correlation divided by the initial second moment,

C^∗(t,t_0) = ⟨ m(t) m(t_0)⟩/⟨ m(t_0)^2⟩,

as a function of t, t - t_0, and finally t/t_0, presented in three different plots, all on a log-log scale, indexed by (a), (b), and (c), respectively, in Fig. <ref>. Fig. <ref> (c) supports the scaling described by Eq. <ref>. This scaling is performed using the quantity ⟨ m(t_0)^2⟩ (or, equivalently, ⟨ m(t_0)^2⟩ - ⟨ m(t_0)⟩^2, since ⟨ m(t_0)⟩ = 0). Alternatively, we can perform the scaling of Eq. <ref> by dividing C(t,t_0) by t_0^b while adjusting the exponent b to optimize the collapse. We also conducted a test to verify this, and the results are presented in Fig. <ref>. We found that b = 0.806 is the optimal value that matches Eq. <ref>, which is very close to ξ = 0.8010(4), the expected exponent for the time evolution of ⟨ m^2(t)⟩.

Since the magnetization distribution at an arbitrary time t_0 follows Eq. <ref>, the question is whether an initial condition with the magnetization distributed accordingly would yield the same correlation C(t,t_0) as calculating it with the initial condition that the system actually reached at that time. In other words, does the obtained C(t,t_0) remain the same? To explore this, we prepared systems with the initial condition described by Eq. <ref>, using many different samples with different m_0 values, but with the condition ⟨ m_0^2⟩ = A t_0^ξ. We chose t_0 values of 50, 100, 200, and 300, which correspond to √(⟨ m_0^2⟩) = A^1/2 t_0^ξ/2 = 0.077, 0.102, 0.135, and 0.158, respectively. Using these standard deviations, we generate m_0 according to Eq. <ref>, and the spins σ_i = ±1, i = 1, ..., N, are chosen at random with probability

p(σ_i) = (1 + m_0 σ_i)/2,

so that p_- + p_+ = 1. We then evolve the system and compare the resulting C(t,t_0) with the one obtained when the initial condition is the configuration actually reached by the system at time t_0. Figure <ref> illustrates this comparison. It is essential to mention that, for the evolutions with mimicked initial conditions, we had to rescale the time by multiplying t by t_0. This is interesting because it suggests that the spatial correlation of the spins plays an important role: its effects determine the time scaling of the system, and not only the fact that the magnetization distribution is Gaussian according to Eq. <ref>. Nevertheless, the curve for C(t,t_0) can be reproduced provided the time is suitably rescaled.
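The preparation just described amounts to two random draws: a global magnetization m_0 from the Gaussian of Eq. <ref> with variance A t_0^ξ, and then individual spins with probability p(σ_i) = (1 + m_0 σ_i)/2. A minimal sketch follows (assuming NumPy; the constants A and ξ are the fitted values quoted above, and the function is illustrative rather than the actual simulation code).

```python
import numpy as np

A, XI = 0.00026, 0.801          # fitted amplitude and exponent (see text)

def mimicked_initial_state(N, t0, rng):
    """Draw one initial spin configuration mimicking the magnetization statistics at time t0.

    N  : number of spins (e.g. L*L for an L x L lattice)
    t0 : waiting time whose magnetization distribution we want to mimic
    """
    std_m0 = np.sqrt(A) * t0 ** (XI / 2)        # sqrt(<m0^2>) = A^(1/2) t0^(xi/2)
    m0 = rng.normal(0.0, std_m0)                # global magnetization of this sample
    # spin +1 with probability (1+m0)/2, spin -1 with probability (1-m0)/2
    spins = np.where(rng.random(N) < (1.0 + m0) / 2.0, 1, -1)
    return spins

rng = np.random.default_rng(42)
config = mimicked_initial_state(128 * 128, t0=100, rng=rng)
print(config.mean())   # close to the drawn m0, typically of order 0.1 in magnitude for t0 = 100
```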
§.§ Aging and random matrices

Finally, we test the effects of aging on the spectral method, which is sensitive to the critical properties of the system. We build matrices with N_sample = 100, considering Δ t = N_MC = 300 and different values of t_0. The result is interesting: for t_0 = 50, for example, we recover the minimum of the eigenvalue mean at T = T_C observed in previous works; however, when the aging is more significant, a visible deviation of this minimum appears, as suggested by Fig. <ref> (a). A deviation of the minimum from T = T_c is found for t_0 > 50, as can be observed. The sensitivity of the spectra to time series with aging is therefore noteworthy. The same can be observed for the other spectral parameters, such as the variance of the eigenvalues at different temperatures, Fig. <ref> (b), and Fig. <ref> (c), which shows the pronounced peak in the negative of the derivative of this same variance when no aging is considered. With aging, however, the peak gives origin to a double peak and a subsequent discontinuity, and there is no consensus about the location of the critical parameter.

§ CONCLUSIONS

We conducted a study of aging phenomena by examining the scaling behavior of the total correlation of the magnetization. Our findings reveal an important deviation in the scaling of the second moment of the magnetization. Moreover, we demonstrate that, when the initial magnetization is distributed according to the Gaussian distribution expected at the time at which we hypothetically restart after interrupting the time-dependent simulations, the time must be rescaled appropriately to capture the correlation obtained with this initial time. Furthermore, we present an intriguing analysis based on random matrices, which sheds light on the expected spectra of matrices constructed from the time evolutions of the magnetization during aging. This method exhibits high sensitivity and demonstrates how aging can impact the determination of the critical temperature. Overall, our study provides valuable insights into the effects of aging on magnetization dynamics and highlights the importance of accounting for initial conditions and scaling considerations in such systems.

JansenShort-time H. K. Janssen, B. Schaub, B. Schmittmann, Z. Phys. B: Condens. Matter 73, 539 (1989)
Huse D. A. Huse, Phys. Rev. B 40, 304 (1989)
Pleimling M. Henkel, M. Pleimling, Non-equilibrium Phase Transitions, Vol. 2: Ageing and Dynamical Scaling far from Equilibrium, Springer, Dordrecht (2010)
TomeOliveira1998 T. Tome, M. J. de Oliveira, Phys. Rev. E 58, 4242 (1998)
Zheng B. Zheng, Int. J. Mod. Phys. B 12, 1419 (1998)
Hase2010 M. Hase, T. Tome, M. J. de Oliveira, Phys. Rev. E 82, 011133 (2010)
RMT2023 R. da Silva, Int. J. Mod. Phys. C 34, 2350061 (2023)
RMT2-2023 R. da Silva, H. C. Fernandes, E. Venites Filho, S. D. Prado, J. R. Drugowich de Felicio, Braz. J. Phys. 53, 80 (2023)
Wishart J. Wishart, Biometrika 20A, 32 (1928)
Wishart2 Vinayak, T. H. Seligman, AIP Conf. Proc. 1575, 196 (2014)
Potts R. da Silva, E. Venites, S. D. Prado, J. R. Drugowich de Felicio, https://doi.org/10.48550/arXiv.2302.07990 (2023)
Silvatheta R. da Silva, N. A. Alves, J. R. Drugowich de Felicio, Phys. Rev. E 66, 026130 (2002)

§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT

All authors conceived and designed the analysis, performed formal analysis, wrote the paper, elaborated the algorithms, analysed the results, and reviewed the manuscript.

§ DECLARATION OF COMPETING INTEREST

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

§ ACKNOWLEDGEMENTS

R. da Silva would like to thank CNPq for financial support under grant number 304575/2022-4.
http://arxiv.org/abs/2307.02756v1
20230706032931
On the detection of the electromagnetic counterparts from lensed gravitational wave events by binary neutron star mergers
[ "Hao Ma", "Youjun Lu", "Xiao Guo", "Siqi Zhang", "Qingbo Chu" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO" ]
Future ground-based gravitational wave (GW) detectors, i.e., the Einstein Telescope (ET) and Cosmic Explorer (CE), are expected to detect a significant number of lensed binary neutron star (BNS) mergers, which may provide a unique tool to probe cosmology. In this paper, we investigate the detectability of the optical/infrared electromagnetic (EM) counterparts (kilonovae/afterglows) of these lensed BNS mergers by future GW detectors and EM telescopes, using simple kilonova, afterglow, and lens models. ET and CE are expected to detect ∼5.32^+26.1_-5.10 and 67.3^+332_-64.7 lensed BNS mergers per year. We find that the EM counterparts associated with all these mergers will be detectable by an all-sky survey in the H-band with limiting magnitude m_lim≳27, while the detectable fraction is ≲0.4% in the g-/z-band if m_lim≲24. In general, it is more efficient to search for the lensed EM counterparts by adopting infrared bands rather than optical/UV bands with the same m_lim. Future telescopes like the Vera C. Rubin Observatory, the China Space Station Telescope, and Euclid can hardly detect the EM counterparts of even one lensed BNS merger. The Roman Space Telescope (RST) and the James Webb Space Telescope (JWST) have the capability to detect about a few or more such events per year. Moreover, the time delays and separations between the lensed image pairs typically lie in the ranges from minutes to months and from 0.1 to 1 arcsec, suggesting that both the GW and EM images of most lensed BNS mergers can be well resolved, not only by CE/ET in the time domain but also by RST/JWST spatially.

gravitational lensing: strong - gravitational waves - (stars:) gamma ray bursts: general - (transients:) neutron star mergers

§ INTRODUCTION

Gravitational-wave and multi-messenger astronomy has blossomed since the detection of the binary black hole (BBH) merger GW150914 <cit.> and the binary neutron star (BNS) merger GW170817 <cit.>. The Laser Interferometer Gravitational-Wave Observatories (LIGO) and Virgo have already detected at least two BNSs, three neutron star-black hole binaries, and more than eighty BBHs <cit.>. With further upgrades, Advanced LIGO plus <cit.> and LIGO Voyager <cit.> are expected to detect many more in the near future. The third-generation GW detectors, i.e., the Einstein Telescope <cit.> and the Cosmic Explorer <cit.>, are expected to detect more than several tens of thousands of BBH and BNS mergers per year, not to mention the proposed moon-based Gravitational-wave Lunar Observatory for Cosmology (GLOC) <cit.>. Among these GW events, a fraction of ∼ 10^-3 - 10^-4 are expected to be gravitationally lensed <cit.>. Searching for and identifying lensed GW events has become one of the main goals of GW detection, as they are not only interesting phenomena in their own right but can also be used as a unique tool to study the nature of GWs <cit.> and to probe cosmology <cit.>. It has been proposed that lensed GW events with independently measured redshifts can be used to constrain the Hubble constant accurately, encoded in the “time delay distance”, since the time delay can be precisely measured from the GW observations <cit.>.
For example, <cit.> have shown that the Hubble constant can be constrained to a level of ≲ 1% precision by using only 10 lensed BNS mergers with known redshifts, and <cit.> also demonstrated such an application with detailed modelling of the lens systems. It is possible to get indirect redshift estimates via the detection of their lensed host galaxies and tightly constrain the Hubble constant <cit.>. However, other cosmological parameters, such as the fraction of dark matter and dark energy, cannot be well constrained due to the degeneracy between the true luminosity distances and the magnification factors. If the exact locations of these lensed GW events in their hosts can be determined, the magnification factor of each lensed images can be obtained by the reconstruction of the lens and thus the true luminosity distance can be obtained. In such a case, the time delay distance and the luminosity distance can be applied simultaneously, which enables tighter constraints on other cosmological parameters <cit.>. One may measure the exact locations and also the redshifts of the lensed GW events by directly detect their lensed EM counterparts. This is possible for those compact binary mergers with a neutron star component, which lead to EM counterparts like kilonovae <cit.> and afterglow signals from short-duration gamma-ray burst (SGRB) <cit.>. These EM signals, if detected, could provide much more precise localization of the lensed events and thus break the degeneracy in measuring cosmological parameters by using time delays. However, most previous studies are focused on estimating the detection rate of the lensed GW events but seldom consider the detectability of their EM counterparts <cit.>. In this paper, we investigate the detectability of the EM counterparts of the lensed BNS mergers detected by GW detectors for different limiting magnitude conditions, mainly focusing on the kilonovae and SGRB afterglows. Furthermore, we estimate the detection rate of those events of which both the GW and the EM signals can be detected, by future telescopes, such as the China Space Station Telescope <cit.>, the Vera C. Rubin Observatory <cit.>, the Nancy Grace Roman Space Telescope <cit.>, the Euclid <cit.>, and the James Webb Space Telescope (JWST)[<https://www.jwst.nasa.gov/>]. The paper is organized as follows. In Section <ref>, we introduce the methodology for quantitative estimates of gravitational lensed GW events and their associated kilonova and afterglow phenomena. Our results are presented in Section <ref>. Conclusions and discussions are given in Section <ref>. Throughout this paper, we adopt the concordance ΛCDM cosmology model with (Ω_ m, Ω_Λ)=(0.3,0.7) and the Hubble constant H_0= 70.0 km s^-1 Mpc^-1. § METHODOLOGY In this section, we first introduce the lens statistics by adopting the singular isothermal ellipsoid (SIE) lens model with external shear (Section <ref>) <cit.>. Then, we describe our method to estimate the detection rate for the lensed BNS merger GW events (Section <ref>), their associated kilonovae and afterglows signals (Section <ref>), and both the GW events and the EM signals (Section <ref>). Note here that we only consider the strong gravitational lensing events in the geometrical optics limit as the diffraction effect can be safely ignored in the LIGO band (10 ∼ 10^3Hz) for galaxy lenses <cit.>. 
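All of the distance factors entering the calculations below (the angular diameter distances D_s and D_ls and the luminosity distance D_L) follow from the adopted flat ΛCDM cosmology. As a convenience for the reader, the short sketch below shows how these quantities, together with the Einstein radius θ_E = 4π(σ_v/c)^2 D_ls/D_s introduced in the next subsection, can be evaluated with standard astropy cosmology routines; it is an illustrative helper under the stated cosmological parameters, not part of the analysis pipeline itself.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u
from astropy.constants import c

# Adopted cosmology: (Omega_m, Omega_Lambda) = (0.3, 0.7), H0 = 70 km/s/Mpc
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def einstein_radius(sigma_v_kms, z_l, z_s):
    """SIS/SIE Einstein radius (arcsec) for a lens with velocity dispersion sigma_v."""
    D_s = cosmo.angular_diameter_distance(z_s)
    D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
    sigma_v = sigma_v_kms * u.km / u.s
    theta_E = 4.0 * np.pi * (sigma_v / c).decompose() ** 2 * (D_ls / D_s)
    return (theta_E * u.rad).to(u.arcsec)

# Example: a sigma_v = 200 km/s lens at z_l = 0.5 and a source at z_s = 2.0
print(einstein_radius(200.0, 0.5, 2.0))   # roughly sub-arcsecond scale for these parameters
print(cosmo.luminosity_distance(2.0))     # D_L entering the GW SNR below
```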
§.§ Lensing Statistics We denote the GW event rate detected by a GW detector as Φ(ϱ,z_ s) with the signal-to-noise ratio (SNR) in the range from ϱ to ϱ+dϱ and redshift from z_ s to z_ s+dz_ s. For simplicity, we assume all the BNS mergers have two types of EM counterparts kilonovae and afterglows. Thus event rate considering GW+EM joint detection is denoted as Ψ(ϱ,m_ _EM,z_ s) similar to Φ(ϱ,z_ s), where m__EM is the apparent magnitude for individual EM counterpart. The strong lensing is mainly caused by the intervening early-type galaxies <cit.>, of which the number density distribution can be denoted as dN/dσ_v within the velocity dispersion range from σ_v to σ_v+dσ_v. We adopt the power-law evolution model to describe the redshift evolution of the distribution of dN/dσ_v described by the modified Schechter function <cit.> d N/ dσ_ v=ϕ_z(σ_ v/σ_z)^αexp[-(σ_ v/σ_z)^β] β/Γ(α / β)1/σ_ v and ϕ_z = ϕ_*( 1+z_ l)^κ_n; σ_z = σ_*( 1+z_ l)^κ_ v, where (ϕ_*, σ_*, α, β)=(8.0 × 10^-3h^3 Mpc^-3, 161 km s^-1, 2.32, 2.67) are the fitting results given by <cit.>, and two redshift evolution parameters κ_n=-1.18 and κ_v=0.18 are from <cit.>. In the geometrical optics regime, the probability that a GW event and its EM counterparts are lensed can be characterised by the optical depth. Similar to <cit.>, the optical depth for a GW event detected by a GW detector with SNR of ϱ at redshift z_ s can be estimated as τ__ GW(ϱ, z_ s) = 1/4 π∫_0^z_ s d V ∫ d σ_ vd N/d σ_ v∫ d q p(q) × ∬ d γ p(γ, θ_γ) ∫ d μ A p(μ)/√(μ)Φ(ϱ / √(μ), z_ s)/Φ(ϱ, z_ s). Similarly at any given band, the optical depth for a GW event and its EM counterparts detected by both a GW detector with SNR of ϱ and an EM telescope with an apparent magnitude of m_ _EM at redshift z_ s is given by τ_ _GW+EM (ϱ,m_ _EM,z_ s) = 1/4 π∫_0^z_ s d V ∫ d σ_vd N/d σ_v∫ dq p(q) × ∬ d γ p(γ, θ_γ) ∫ d μ A p(μ)/√(μ)Ψ(ϱ/√(μ),m_ _EM+2.5 logμ,z_ s))/Ψ(ϱ,m_ _EM,z_ s). In the above two equations, d V is the comoving volume element with redshift in the range z → z + dz, p(q) and p(γ,θ_γ) represent the probability distributions of the axis ratio q and the two dimensional external shear (γ,θ_γ), which describe the lens morphology and external environment near the line of sight <cit.>, μ represents the magnification of the lensed images, with which the GW SNR is magnified by a factor √(μ), and the EM counterparts magnitude is brightened by 2.5 logμ <cit.>, and p(μ) represents the probability distribution of μ, A represents the cross-section of the lens determined by the assembles of source locations having multiple lensed images. The cross-section is related to the lens galaxy velocity dispersion, the redshifts of lens and source, axis ratio and external shear. It is more convenient to consider the dimensionless version of cross-section Ã(q,γ)=A/θ_E^2, with which the parameters of lens velocity dispersion and lens and source redshift are separated out into the angular Einstein radius given by θ_E = 4 π (σ_ v/c)^2 (D_ ls/D_ s). D_ ls and D_ s denote the angular diameter distance between the lens and the source and between the observer and the source. §.§ Models for gravitational waves In the case of GW, we consider the most general model which is the Newtonian approximation described by quadrupolar formula. Under such an assumption, Φ(ϱ, z) is <cit.> Φ(ϱ,z )=∫ d ℳ_0d V/d zℛ_ mrg(ℳ_0 ; z)/(1+z) P_ϱ(ϱ | z, ℳ_0). 
Here ℳ_0 is the intrinsic chirp mass, ℛ_ mrg(ℳ_0;z) is the merger rate density of GW source with intrinsic chirp mass ℳ_0→ℳ_0+ dℳ_0 at redshift z, and P_ϱ(ϱ|z,ℳ_0) represents the probability distribution of a GW source with chirp mass ℳ_0 at redshift z detected with a SNR of ϱ as <cit.> P_ϱ(ϱ | z ,ℳ_0) =.P_Θ(Θ(ϱ)) ∂Θ/∂ϱ|_ℳ_0, z, and Θ(ϱ) = ϱ/8D_ L(z)/R_0(1.2 M_⊙/ℳ_0(1+z))^5 / 61/√(ζ (f_max)). In the above equations, Θ stands for the angular orientation function and it depends on the relative orientation of the detector and the source, R_0 is characteristic distance for a given GW detector which depends on the detector’s noise power spectral density S_n(f), and it describes a detector’s detection capability, D_ L(z) is the luminosity distance at redshift z, and ζ (f_max) represents the overlap between GW signal and effective bandwidth of detectors, where we adopt ζ (f_max)=1 for simplicity <cit.>. In this work, we consider five future GW detectors, i.e., LIGO A+, LIGO Voyager, ET, CE, and GLOC[Sensitivity curve data is from <https://dcc.ligo.org/LIGO-T1800042/public> for LIGO A+, <https://dcc-lho.ligo.org/LIGO-T1500293/public> for LIGO Voyager, <http://www.et-gw.eu/> for ET-D design <cit.>, <https://cosmicexplorer.org/> for CE Stage-2 phase <cit.> (unless otherwise specified, CE in this work all refers to CE Stage-2), and < https://doi.org/10.5281/zenodo.3948466> for GLOC.], respectively. With the designed sensitivity curves of these detectors, we have R_0= 194 Mpc, 477 Mpc, 1586 Mpc, 4034 Mpc, and 5688 Mpc, correspondingly. According to <cit.>, Θ is a function of ϱ and its probability distribution P(Θ) can be well approximated by a piecewise function, i.e., P_Θ(Θ)=5 Θ(4-Θ)^3 /256 when 0<Θ<4, and P_Θ(Θ)=0 otherwise. The intrinsic merger rate density R_ mrg(ℳ_0;z) can be estimated by combining the population synthesis models for binary stellar evolution (BSE) and the models of cosmological galaxy formation and evolution <cit.>. We adopt the results on the BNS merger rate obtained by implementing the BSE model “α 10.kbβ 0.9” into the galaxy formation and evolution model Millennium-II in <cit.>, which can well match the distribution of the observed BNSs in the Milky Way. We re-scale this predicted merger rate density evolution to make it the same as the local merger rate density determined by the LIGO/Virgo observations, i.e., 320 Gpc^-3yr^-1 <cit.>. The latest constraint from GWTC-3 <cit.> shows the local BNS merger rate should be in the range from 13 to 1900 Gpc^-3yr^-1, and we take these numbers as the indicators of the uncertainties in the estimate of the local BNS merger rate density, and correspondingly it leads to an uncertainty in our estimates for the GW and EM detection rates of the BNS mergers. Note here that the general trend for the merger rate density evolution may be also different in different models <cit.>, which may slightly affect our estimates below for the detection of lensed GW and EM signals. For simplicity, however, we ignore this uncertainty and take it as a secondary effect. §.§ Kilonovae and afterglows associated with BNS mergers It is already known that kilonovae are promising EM counterparts of BNS mergers and can be detected by ground-based or spaceborne telescopes <cit.>. Comparing with the beamed GRB and its afterglow (see later description in this subsection), kilonova powered by r-process is relatively isotropic and thus may be the most promising detectable EM counterpart of the BNS merger <cit.>. 
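Before modelling these counterparts in detail, we note that, in practice, the detection probability P_ϱ entering the rate integral of the previous subsection reduces to drawing the orientation function Θ from P_Θ(Θ) = 5Θ(4-Θ)^3/256 and inverting Eq. <ref> with ζ(f_max) = 1. A minimal sketch follows (assuming NumPy; the quoted luminosity distance at z = 1 and the CE value of R_0 are rough illustrative inputs rather than exact pipeline values).

```python
import numpy as np

def sample_theta(rng, size):
    """Rejection-sample the orientation function Theta from P(Theta) = 5*Theta*(4-Theta)**3/256 on [0, 4]."""
    out = np.empty(0)
    while out.size < size:
        x = rng.uniform(0.0, 4.0, size)
        y = rng.uniform(0.0, 0.53, size)        # 0.53 bounds the pdf maximum (~0.527 at Theta = 1)
        out = np.concatenate([out, x[y < 5.0 * x * (4.0 - x) ** 3 / 256.0]])
    return out[:size]

def snr(theta, chirp_mass_msun, z, d_l_mpc, r0_mpc):
    """Invert Theta(rho): rho = 8*Theta*(R0/D_L)*[(1+z)*Mc/1.2]^(5/6), with zeta(f_max) = 1."""
    return 8.0 * theta * (r0_mpc / d_l_mpc) * ((1.0 + z) * chirp_mass_msun / 1.2) ** (5.0 / 6.0)

rng = np.random.default_rng(1)
theta = sample_theta(rng, 100000)
# e.g. a chirp mass of 1.2 Msun at z = 1 (D_L ~ 6600 Mpc for the adopted cosmology), seen by CE (R0 = 4034 Mpc)
rho = snr(theta, 1.2, 1.0, 6600.0, 4034.0)
print((rho > 8).mean())     # fraction of orientations detected above rho_0 = 8 at this distance
```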
The light curves (LCs) of the BNS merger GW170817 can be well fit by a two-component or three-component kilonova model <cit.>. More complicated kilonova models also proposed to interpret the detailed multiband observations of GW170817 <cit.>. In this paper, we adopt the isotropic two-component kilonova model given in <cit.> to generate the LCs at different bands for simplicity. This model assumes only the `red' and `blue' components in kilonova emissions, in which `red' and `blue' are determined by the abundance of lanthanide. The `red' component corresponds to lanthanide-rich ejecta with a relatively higher opacity, while the `blue' component corresponds to lanthanide-poor ejecta with a lower opacity. The emissions from the `red' and `blue' components peak at relatively redder and bluer bands, respectively. Each of the component is mainly described by four parameters, i.e., ejecta mass (M_ ej), ejecta velocity (v_ ej), opacity (κ_ ej), and the temperature floor (T_ f). Figure <ref> shows the H-, z-, and g-band[Unless otherwise specified, the H-band is chosen from the Euclid, z- and g-bands are chosen from CSST in this work. All the filter data are provided by the Spanish Virtual Observatory (SVO) Filter Profile Service <cit.>.] LCs of a kilonova like GW170817 located at several different redshifts, obtained by adopting the best-fit two-component model given in <cit.>. The shape of the observed LCs of a kilonova, same as GW170817 but locating at redshift z=1 or 2, are different from those of GW170817 because of the time dilation and the K-correction. We define the peak time T_ p as the time for a kilonova at any specific band reaching its peak magnitude after the merger, and the peak duration of that LC Δ T_ p-0.5 as the duration of the kilonovae LCs with magnitude from the peak magnitude m_ p to m_ p+0.5. Apparently, T_ p and Δ T_ p-0.5 change with redshift as a combined effect of the time dilation and the K-correction. Figure <ref> shows T_ p and Δ T_ p-0.5 for the LCs of kilonovae at different redshift. For illustration, we only show those for the H-, z-, and g-band LCs, respectively. As seen from this figure, T_ p≳ 1 day and Δ T_ p-0.5≳ 1 day for all the LCs at H-, z-, and g-bands, and the LCs in the redder bands have relatively larger T_ p and larger Δ T_ p-0.5 comparing with those in the bluer bands. These suggest that the time should be sufficient for telescopes to prepare for searching the appearance of kilonovae of BNS mergers at its peak luminosities. Adopting a much shorter response time ( ≪ 1 day) may not significantly improve the detectability of the kilonovae, the main concern of this paper[One should note here that a shorter response time is extremely important for studying the underlying physical processes of kilonova and the nature of neutron stars, since it can be applied on distinguishing different models for interpreting the early blue emission of kilonovae <cit.>.]. In addition, due to the cosmological redshift effect, the searching of distant kilonovae (z ≳ 1) which is the majority of the lensed BNSs (see Fig. <ref>), can be more efficient by using the redder filters than the optical filters. Note here that for those kilonovae in the local universe, the optical filters have unparalleled advantages in observing them, since the infrared and near-infrared filters have a much lower throughput and multi-optical-band observations can be used to separate kilonovae photometrically from other transients <cit.>. 
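The quantities T_p and Δ T_p-0.5 defined above can be extracted from any tabulated light curve once the cosmological time dilation and distance modulus are applied. The sketch below is schematic (assuming NumPy/astropy): the toy light curve is invented purely for illustration, and the K-correction is omitted since it requires the full spectral time series.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def observed_peak_properties(t_rest_days, abs_mag, z):
    """Peak time T_p and duration Delta T_{p-0.5} of an observed kilonova light curve.

    t_rest_days : rest-frame epochs of a tabulated light curve (days)
    abs_mag     : corresponding absolute magnitudes in the chosen band
    """
    t_obs = t_rest_days * (1.0 + z)                   # cosmological time dilation
    m_obs = abs_mag + cosmo.distmod(z).value          # add distance modulus 5 log10(D_L / 10 pc)
    i_peak = np.argmin(m_obs)                         # brightest epoch
    T_p = t_obs[i_peak]
    within = t_obs[m_obs <= m_obs[i_peak] + 0.5]      # epochs within 0.5 mag of the peak
    return T_p, within.max() - within.min()

# toy light curve: rises for ~1.5 rest-frame days, then fades at ~1 mag/day
t = np.linspace(0.1, 10.0, 200)
M = np.where(t < 1.5, -16.0 + 2.0 * (1.5 - t), -16.0 + 1.0 * (t - 1.5))
print(observed_peak_properties(t, M, z=1.0))
```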
In principle, one may design the search strategy of each individual kilonovae according to its expected LCs. Without loss of generality, we assume that the kilonova can always be searched for during the period with magnitude no fainter than the peak magnitudes by 0.5 mag. For the afterglow, we assume a simple Gaussian jet model by using the package , developed by <cit.>, to estimate the LCs. According to <cit.>, the jet opening angle θ_j (in unit of degree) of the afterglow follows a log-normal distribution ∝ (1 /σθ_j) exp[-(lnθ_j-μ)^2/ (2σ^2)], with μ=1.742 and σ=0.916. Note that the afterglow of GW170817 can also be well reconstructed by the Gaussian jet model and the best-fit value for θ_j is 3.27^∘, consistent with the log-normal distribution given by <cit.>. For observers viewing the afterglow at off-axis, the received afterglow emission drops rapidly when the viewing angle θ_ obs increase to larger than a truncation angle θ_w, and the kilonova radiation normally dominates when θ_ obs>θ_w. In our calculations, we fix θ_w=2θ_j according to <cit.>. Other parameters are fixed to the best-fit values obtained by fitting the late-time broad-band EM observations of GW170817 to the Gaussian jet model <cit.>. These parameters are the on-axis equivalent isotropic energy log E_0/ erg=52.73, the number density of the interstellar medium n_0=10^-3.8 cm^-3, the power-law index of accelerated shock p=2.155, the magnetic field energy fraction ϵ_e=10^-1.51 and the accelerated electron energy fraction ϵ_B=10^-3.20. Figure <ref> illustrates some example cases of the afterglow LCs at the H-, z-, and g-band respectively, in which time dilation and K-correction are taking into consideration. As seen from this figure, the afterglow magnitudes decrease with elapsing time t at all the three bands. For the bright cases with small viewing angle, the luminosities decrease rapidly monotonically at small t, which suggests that a short response time (i.e., the time between the searching of the afterglow and the GW alert of the BNS merger) is critical for searching afterglows. It is easier to detect the afterglow if the response time is shorter, comparing with the kilonova case in which a short response time (≪ 1 day) of observation is corresponding to fainter magnitude since the peaks of kilonova LCs normally emerge at T_ p≳ 1 day (see Figure <ref>). The recent transient searches triggered by GW alerts have reported a mean response time of 9.90 hr <cit.> and of 1.5 hr <cit.> during the first half of the LVC O3 run. Thus, we assume two response time cases of 10 hr and 1.0 hr in the following calculations, denoting the typical time period between the detection of the GW from the BNS merger and the EM counterparts searching time, and calculate the afterglow magnitude at this time for different bands. Note again that the detectability of the EM counterparts is the main concern of this paper. Adopting a much short or long response time may not significantly improve the detectability of kilonova (see Fig. <ref>) as the peak luminosities emerge at 1 day ≲ T_ p≲ 4 day (see Fig. <ref>), though the observations in the first few hours and/or the UV band are indeed important for constraining the kilonova model and the nature of neutron stars. Therefore, we assume that the searching observations for kilonovae can always be performed in a period around the peaks of the light curves for a response time of 10 hr. 
We also set the magnitude of both kilonovae and afterglows at 1.0 hr for the case with response time of 1.0 hr and compare those results obtained from this case with the above one. The luminosity and its evolution at any given band of an afterglow is sensitive to the viewing angle θ_ obs of the binaries (see Figure <ref>), which is different from the nearly isotropic kilonova emission. The afterglow magnitudes decrease sharply with increasing viewing angle θ_ obs, especially at θ_ obs > θ_ w, if the elapse time is less than 10 days since the merger. Note that the BNS mergers detected by the GW detectors have the viewing angle distribution of P_ det(θ_ obs) = 0.076(1 + 6 cos^2θ_ obs + cos^4θ_ obs)^3/2sinθ_ obs, due to the GW radiation pattern <cit.>. Therefore, we set the view angles of afterglows following this probability distribution for the calculations of the mock BNS mergers in Section <ref>. §.§ Joint detection of lensed GW+EM events For detection of the EM counterparts of GW events, i.e., kilonovae and afterglows, for demonstration, we consider some future ground-based/spaceborne telescopes, including Rubin, CSST, Euclid, RST, and JWST. All these optical/infrared telescopes will have their first observations in this decade and may be still the premier observatories of the next decade. For each telescope we pick one specific bandpass filter, which is the z filter of Rubin, the z filter of CSST, the H filter of Euclid, the filter H158 of RST, and the filter F150W of JWST. We note here that all these future telescopes are loaded more than one bandpass filters that cover a wide-range of wavelength. For instance, JWST NIRCam offers 29 bandpass filters in the wavelength range 0.6-5.0μ m. Our results below only represent the cases of the adopted specific bandpass filters. For future realistic EM counterpart searches for BNS mergers, it depends on which bandpass filters that are finally get involved. In addition, we also consider the K-correction for all the EM counterparts at any given redshift. Since the lensed events mainly peak around z_ s∼ 1-2, the K-correction leads to the shifts of the intrinsic optical band emission of kilonova (probably the peak of the spectral energy distribution) to redder bands. The red band luminosity of afterglow is also higher than the blue band one (see Figure <ref>). Therefore, adopting the infrared filters may be more efficient in searching for the lensed EM counterparts (see Section <ref>). This is the primary reason that we adopt the five bandpass filters above in the near-infrared band. From the above GW and EM models, the total event rate of BNS mergers with the GW SNR ϱ and the apparent magnitude of the kilonovae m_ KN or the afterglow m_ AG can be generated by Ψ(ϱ,m_ KN,z_ s) = ∫Φ(ϱ, z_ s) p(m_ KN | x_ KN)p(x_ KN) d x_ KN, for kilonovae, or Ψ(ϱ,m_ AG,z_ s) = ∫Φ(ϱ, z_ s) p(m_ AG | x_ AG)p(x_ AG) d x_ AG, for afterglow. Here p(m_ KN | x_ KN) and p(m_ AG | x_ AG) are the probability distributions of kilonova and afterglow apparent magnitudes at an observation time t_ obs, in which x_ KN={M_ ej^ blue,v_ ej^ blue,κ_ ej^ blue,T_ f^ blue,M_ ej^ red,v_ ej^ red,κ_ ej^ red,T_ f^ red,z_ s,t_ obs} and x_ AG={θ_ obs, θ_j,E_0,n_0,p,ϵ_e,ϵ_B,z_ s,t_ obs} are the model parameters of kilonova and afterglow. The superscripts `red' and `blue' denote the red- and blue-components of the kilonova ejecta, respectively. 
In principle, one could obtain these two probability distributions by adopting detailed kilonova and afterglow models, by considering the dynamics, radiation processes, geometric configuration, and environment of each BNS merger with known physical properties. For simplicity, we assume in this paper that all the kilonovae have the same x_ KN as that for GW170817, constrained by GW and multi-band EM observations as described above in Section <ref>, and the kilonova radiation is close to isotropic. Therefore, m_ KN is only a function of redshift due to the K-correction, and p(m_ KN | x_ KN) p(x_ KN) is taken as a delta function for kilonovae at any given redshift. We also assume that all afterglows have the same x_ AG obtained by late-time broad-band EM observations of GW170817 as described above, except that the viewing angles θ_ obs and jet opening angles θ_j are different. Therefore, m_ AG is only a function of θ_ obs, θ_j, and redshift z_ s. Similarly, p(m_ AG | x_ AG) p(x_ AG) is also taken as a delta function at given θ_ obs,θ_j, and z_ s. In our following calculations, Ψ(ϱ,m_ KN,z_ s) and Ψ(ϱ,m_ AG,z_ s) are realized by sampling m_ KN and m_ AG using the kilonova and afterglow models described in Section <ref> for sources at different redshift with different viewing angles and response time conditions. We assume that the afterglow signal is not correlated with the kilonova signal, and the kilonova is independent of the viewing angle θ_ obs. The lensed event rates for GW detection and GW+EM joint detection can be obtained by combining equations (<ref>) and (<ref>), i.e., d Ṅ_L,GW/d z_ s = ∫_ϱ_0^∞ d ϱ Φ(ϱ, z_ s) τ__GW = g(z_ s) ∫_ϱ_0^∞ d ϱ∫ dq p(q) ∬ d γ p(γ, θ_γ) ×∬d u/√(μ) Φ(ϱ / √(μ) , z_ s), d Ṅ_ L,GW+EM/d z_ s = ∫_ϱ_0^∞ d ϱ∫_-∞^m_lim d m_ _EMΨ(ϱ,m_ _EM,z_ s) τ_ GW+EM = g(z_ s) ∫_ϱ_0^∞ d ϱ∫_-∞^m_lim d m_ _EM∫ dq p(q) ∬ d γ p(γ, θ_γ) ∬d u/√(μ)Ψ(ϱ / √(μ), m_ _EM+2.5logμ ,z_ s). In the above two equations, g(z_ s) = 1/4 π∫_0^z_ s d V ∫ d σ_vd N/d σ_vθ_E^2, u is the angular position of the source in the source plane and is limited to within the cross-section region, the axis ratio q follows a normal distribution truncated at 0.2 and 1.0 with a mean of 0.7 and a standard deviation of 0.16 <cit.>, the external shear radial component γ is well fitted with the logarithm normal distribution with mean of 0.05 and standard deviation of 0.2 dex, and tangential component θ_γ is set to be random in the range from 0 to π <cit.>. If set τ = 1, the first lines of the above two equations give the intrinsic detectable event rates without considering the lensing effect. The EM counterpart magnitude m_ _EM in equation (<ref>) can be the magnitude of either kilonova (`KN') or afterglow (`AG'). We also consider the rate for those cases that either KN or AG can be detected, i.e., d Ṅ_L,AG/KN/d z_ s = d Ṅ_L,GW+KN/d z_ s+d Ṅ_L,GW+AG/d z_ s-d Ṅ_L,GW+KN+AG/d z_ s, where the subscript `AG/KN' means either AG or KN is detected for those lensed BNS GW events,`GW+KN' and `GW+AG' represent that the KN and the AG signals are detected for the lensed GW events, respectively, and `GW+KN+AG' represents that all the GW, KN, and AG signals can be detected for the same event. It is necessary to set the criteria for the detection of an lensed event with GW and EM signals for the estimation of those event rates. Different lensed images have different light paths and different arrival time. 
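Before turning to the detection criteria, it may help to summarize the random draws that enter each Monte Carlo realization of the rates above: the lens axis ratio and external shear quoted in this subsection, together with the jet opening angle and the detection-weighted viewing angle of the previous subsections. A minimal sketch (assuming NumPy only; parameter values as quoted in the text) is:

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_nuisance_params(n):
    """Draw lens and source nuisance parameters for n Monte Carlo realizations."""
    # Axis ratio q: normal(0.7, 0.16) truncated to [0.2, 1.0] (rejection sampling)
    q = np.empty(0)
    while q.size < n:
        x = rng.normal(0.7, 0.16, n)
        q = np.concatenate([q, x[(x >= 0.2) & (x <= 1.0)]])
    # External shear amplitude: log-normal, mean 0.05 with 0.2 dex scatter; angle uniform in [0, pi)
    gamma = 10.0 ** rng.normal(np.log10(0.05), 0.2, n)
    theta_gamma = rng.uniform(0.0, np.pi, n)
    # Jet opening angle (degrees): ln(theta_j) ~ N(1.742, 0.916); truncation angle theta_w = 2*theta_j
    theta_j = np.exp(rng.normal(1.742, 0.916, n))
    # Viewing angle: rejection sampling of P_det(theta) = 0.076*(1+6cos^2+cos^4)^(3/2)*sin(theta) on [0, pi]
    theta_obs = np.empty(0)
    while theta_obs.size < n:
        x = rng.uniform(0.0, np.pi, n)
        y = rng.uniform(0.0, 0.6, n)              # 0.6 bounds the maximum of P_det
        p = 0.076 * (1 + 6 * np.cos(x) ** 2 + np.cos(x) ** 4) ** 1.5 * np.sin(x)
        theta_obs = np.concatenate([theta_obs, x[y < p]])
    return q[:n], gamma, theta_gamma, theta_j, theta_obs[:n]

q, gamma, theta_gamma, theta_j, theta_obs = draw_nuisance_params(10000)
print(q.mean(), np.median(theta_j), np.degrees(np.median(theta_obs)))
```

For each realization, the lens equation is then solved to obtain the image magnifications, positions, and arrival times used in the criteria below.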
Once the first two arrived lensed GW signals of a BNS merger are detected, the event may be identified as a potential lensed system by analyzing their GW signals. Many recent works have assessed the feasibility of this pre-selection of lensed pairs by applying their sky localization, time delay, magnification ratio, and phase shift, etc <cit.>. In the ET/CE era, the network of GW detectors can provide a medium localization accuracy around 10 deg^2 <cit.>. With this accuracy, random searches by general purpose or even some deep field sky survey telescopes may be not so efficient, due to the limitation of their Field of view (FoV). For example, the FoV of CSST, Euclid, RST, and JWST are 1.1 deg^2, 0.53 deg^2, 0.28 deg^2, and 9.7 arcmin^2, respectively, which are more than tens to thousands times smaller than the typical localization area of GW events. It is almost impossible to use the general purpose telescopes, such as JWST, to scan the whole localization area to find the associated transients. However, the host galaxies of some lensed GW sources may be identified by matching the properties of the lensed GW events with those of the lensed hosts within the sky area <cit.>, with which the exact hosts for the EM counterparts can be obtained. Therefore, we ignore the FoV limitation and assume that the lensed host location of the lensed BNS merger events can be known at prior, thus both the deep field survey and general purpose telescopes can be considered. The pre-selection processes for the lensed GW events and its lensed host galaxies may lead to some uncertainties in the identification of the EM counterparts. For example, the processing time of the potential lensed system identification and its lensed host galaxy matching is a crucial factor and is supposed to be as short as possible. Other factors are the fraction of the lensed host galaxies that can be identifiable by the adopted survey telescope (f_ Host) and the success rate of the lensed host galaxy matching among all the candidates in the potential localization (P_ Host). For the general discussion in Section <ref>, we assume that f_ Host=100% and P_ Host=100% at first. However, for the prediction of future specific telescopes, <cit.> and <cit.> estimated that the fraction of those lensed galaxies detected by sky surveys (e,g,, Euclid, CSST, etc.) is about 20-50%. Therefore, we adopt a moderate value of f_ Host=30% as this fraction in this paper. <cit.> found that the lensed GW signals and its host galaxy can be successfully matched within 10 deg^2 sky area for the 3rd generation GW detectors, while <cit.> found the successful matching fraction could be as small as ∼20%. Thus we adopt the success rate of matching P_ Host=100% as the optimistic case for later analysis if not otherwise stated. We also consider P_ Host=20% as the pessimistic case. The arrival time sequence of the lensed images is important for the settings of the `detection criterion'. Assuming the SIE lens model, there will be two categories of lensed phenomenon, one with double images, and another with quadruple images. For the double-image cases, the second arrived images appear to be the fainter ones. Whereas for the quadruple-image cases, the first arrived images appear to be the third brightest ones, and the following two images are the first two brightest images <cit.>. We fix the fainter images to be the second arrived ones for the double-image cases, while fix the brightest images as the second arrived ones for the quadruple-image cases. 
Although the accurate arrival time sequence can be uncertain and it depends on the configuration of the lens system, our assumption is appropriate in general. Below we consider two set of criteria for the identification of a lensed GW+EM event, i.e., * Regular criterion: 1) The first two arrived GW signals of the BNS merger event can be detected, which means that the fainter image for the double-image cases or the third brightest ones for the quadruple-image case has the SNR above the detection threshold ϱ_0. This criterion is also consistent with the one frequently adopted in the statistical studies of GW lensed events <cit.>; 2) The EM counterpart (either AG or KN) of the second arrived GW signal exceeds the detection magnitude threshold of the specific EM telescopes considered in this paper. With this criterion, at least one EM image of the lensed BNS merger event can be detected. * Conservative criterion: 1) The faintest GW signals for both the double-image and quadruple-image cases have the SNR above the detection threshold ϱ_0. 2) The faintest EM images for either the double-image or quadruple-image cases exceeds the detection magnitude threshold of the specific EM telescopes considered in this paper. Apparently, the conservative criterion is different from the regular criterion by only changing detection threshold for the quadruple-image cases. This means that the higher ratio of the cases with quadruple images to those with double images, the smaller detectable lensed event rate by adopting the conservative criterion than those estimated by adopting the regular criterion (see Table <ref>). With the criterion for detectable lensed events defined above, the detectable rate of lensed events can be estimated according to equations (<ref>) and (<ref>), which can be done by using the Monte Carlo method. Through solving the lens equation for each realized lens system, we can obtain detailed properties of their images, including the magnification factors, locations, arrival time, etc. <cit.>. § RESULTS In this section, we present our results of lensed GW and its EM counterparts. We first predict the lensed GW event rate for five future GW detectors. For different limiting magnitude conditions, we calculate the detection efficiency (theoretical all sky lensed event rate), and also present the lensed GW+EM event rates of five specific upcoming telescopes (Section <ref>). Then redshift distributions and magnification distributions are demonstrated (Section <ref>). Finally from the perspective of time delays and angular separations of lensed images, we further explore the detection capabilities for future detectors and telescopes (Section <ref>). §.§ Lensed event rate We estimate the lensed event rates for both GW and EM signals from BNS mergers by assuming different conditions and different identification criteria. Table <ref> list the expected lensed event rates for the GW signals of BNS mergers that can be detected by different GW detectors, i.e., LIGO A+, LIGO Voyager, ET, CE, and GLOC, respectively. Apparently, ET, CE, and GLOC can detect ∼ 10^4 - 10^5 BNS mergers per year. Within this enormous amount of BNS mergers, it is promising to identify some lensed events by applying their waveform information. Adopting the regular criterion, for example, the total detectable lensed event rates of GW signals are expected to be 23.6/5.32/2.20 per year for ET, and 139/67.3/42.5 per year for CE if the SNR threshold is set as ϱ_0=5/8/10. 
Adopting the conservative criterion, the total detectable lensed event rates decrease to 20.7/4.10/1.57 per year for ET and 136/63.7/39.0 per year for CE by setting ϱ_0=5/8/10, because less events with quadruple images rates can be detected. The detectable event rate of BNS mergers is 218/53.7/27.6 per year for LIGO A+ and 3220/787/404 per year for LIGO Voyager corresponding to ϱ_0=5/8/10. However, the lensing probability for GW sources detected by LIGO A+ is 1-2 order of magnitude lower than those by the other detectors, which leads to lensed event rate in the range of 10^-3 - 10^-5 yr^-1. This is mainly because the probability of sources being lensed by foreground galaxies is intrinsically lower at lower redshift. LIGO Voyager has a larger detection rate ∼ 10^-1 - 10^-3 yr^-1 than LIGO A+ does, which means that LIGO Voyager may be able to detect a lensed BNS within a period of ten years observation. GLOC, on the contrary, has the highest detection rate of the lensed BNS mergers, i.e., 201/117/82.9 and 199/114/79.5 per year, if adopting the regular and the conservative criterion by setting ϱ_0=5/8/10. Figure <ref> shows the expected detectable rate of the lensed GW and EM events as a function of the limiting magnitude of the telescope using the H-, z-, and g-band filters, respectively. We also correspondingly show the fraction of those with identifiable lensed hosts among the lensed GW events in the figure. Here we adopt the regular criterion to identify the lensed GW and EM events and only show the results for ET and CE for illustration. Apparently the rate of jointly detectable lensed GW+EM events increases with increasing limiting apparent magnitude of the adopted filter until it saturates. If the searching telescopes can reach sufficiently faint magnitude, all the kilonovae from the lensed GW events can be detected (see the cyan line in the left panel for H-band). However, only a fraction (∼ 20%) of the afterglows associated with those lensed GW events can be detected (see the color lines in the middle panel) even if the limiting magnitude is very faint, which is mainly caused by that the afterglow emission is significantly anisotropic. Only those afterglows with a viewing angle less than the truncation angle θ_ w can be detected in the optical/infrared band, while kilonovae can be detected with any viewing angle. The rate for jointly detectable lensed GW+EM events by adopting a redder filter is larger than that by adopting a bluer filter with the same m_ lim. This is attributed to: 1) the intrinsic kilonova emission normally peaks at the optical band; 2) the afterglow intrinsically emits more in the redder band than in the bluer band at the rest frame time ≳ 10^4 s <cit.>; and 3) the cosmological redshift effect causes the the optical emission shifting to the infrared band (as the redshift of lensed GW+EM events normally peaking around z_ s∼ 1-2 (see Section <ref>). Once a lensed BNS merger event is identified by a GW observatory, there are two ways to search for its EM counterparts: one is to search the whole localization area by a survey telescope with a given limiting magnitude; another is to directly observe the lensed host galaxy of the merger using a telescope with deep enough limiting magnitude if its host can be identified at prior by other surveys, which is mentioned above. Hereafter we only consider the second way, which is more efficient in searching for the EM counterparts. There are several factors that can affect the detection rate of the EM counterparts. 
First, only a fraction of the whole sky can be surveyed by a survey telescope. Second, the available sky region for a telescope at a given time may cover only a fraction of, or even not cover, the location of a transient phenomenon. Third, only a fraction of the lensed host galaxies can be identified by the survey telescope(s). Fourth, within the localization area given by GW detection lie many potential lensed host galaxies, which makes matching the lensed GW event and its host galaxy uncertain. Therefore, the rate for the EM detectable lensed events for a specific telescope should be down-scaled by these fractions, compared with the three colored lines shown in each panel of Figure <ref>, i.e., Ṅ_ _L,GW+EM,T = Ṅ_ _L,GW+EM·ΔΩ/4 π· f_ OL,T· f_ Host· P_ Host. Here ΔΩ denotes the sky coverage of the adopted survey telescope, f_ OL,T denotes the overlap ratio of the available sky region for the adopted searching telescope at any given time to the whole survey area for the lensed host galaxies, f_ Host represents the fraction of the lensed host galaxies that can be identified by the adopted survey telescope, and P_ Host represents the probability that the real lensed host galaxies can be well matched among the candidates within the localization of lensed GWs given by GW detection (success rate of matching). For demonstration, we consider some current/future (survey) telescopes like Rubin, Euclid, CSST, RST, and JWST, respectively. Table <ref> lists the limiting magnitude, FoV, angular resolution, sky coverage and field of regard of Rubin, Euclid, CSST, RST, and JWST, respectively. The sky surveys taken by CSST and Euclid have similar sky coverage with ΔΩ∼ 17,500 deg^2 and 15,000 deg^2, respectively (avoiding the light contamination from the Solar System and the Milky Way), and RST will perform a deeper survey but with a smaller sky coverage of ΔΩ∼ 2000 deg^2, overlapping with the survey area of CSST and Euclid. A large sample of lensed galaxies will be found by these surveys, among which some may be the hosts of BNS mergers. We assume the catalogs of lensed galaxies obtained by different surveys can all be used by the searching observations of the EM counterparts, thus we fix ΔΩ=17,500 deg^2 for following rough estimates. The overlap ratio f_ OL,T depends on the exact time of observations. For telescopes located at L2 (JWST, RST, and Euclid), the boresight pointing angle should remain in a specific range to avoid the sun light, within which the area within the circle along the ecliptic meridian is called the field of regard (FOR) and is 39%/59%/11% for JWST/RST/Euclid[See more descriptions about FOR from <https://jwst-docs.stsci.edu/jwst-observatory-characteristics/jwst-observatory-coordinate-system-and-field-of-regard> for JWST, <https://roman.gsfc.nasa.gov/science/field_slew_and_roll.html> for RST, and <cit.> for Euclid.]. In general, the excluded sky survey area composed of ecliptic plane and the galactic plane is broadly uniform in the direction of ecliptic equator. Thus FOR are adopted as the overlap ratio f_ OL,T for JWST, RST, and Euclid. As for Rubin, the overlap ratio f_ OL,T is about 50% <cit.>. CSST has an orbit period about 90 minutes and all sky can be observed in a short time, which means f_ OL,T∼ 100%. As mentioned in Section <ref>, for any given BNS merger GW event found in the sky area surveyed by these telescopes, the fraction of the lensed host galaxies that can be identified by the adopted survey telescope is fixed at f_ Host∼ 30%. 
As for the last term, success rate of matching the lensed GW among the candidates in the potential sky area is also fixed at P_ Host=100%. We estimate the detection rate of the EM counterparts for lensed BNS mergers by the CSST/Rubin/Euclid/RST/JWST-like telescope according to equation (<ref>). We find that CSST-like/Rubin-like/Euclid-like telescopes are unlikely to detect even a single lensed GW+EM event. CSST-like telescope appears to have the highest detection rate among these three, with 0.0141^+0.0698_-0.0136 yr^-1 because of its relatively higher limiting magnitude m_ lim and overlap ratio f_ OL,T. One may achieve higher limiting magnitude by increasing the exposure time for these three telescopes. If making the limiting magnitude 2 mag deeper than the current planned ones, it appears that Euclid and LSST can still hardly detect one such event even, though CSST may be able to detect one with several years observations (see Fig. <ref>). RST-like and JWST-like telescopes, on the other hand, can detect 3.07^+15.17_-2.95 yr^-1 and 2.81^+13.89_-2.70 yr^-1 EM counterparts of the lensed BNS mergers identified by CE, and this number can be 0.39^+1.94_-0.38 yr^-1 and 0.26^+1.29_-0.25 yr^-1 for those identified by ET. It is promising that RST-/JWST-like telescopes can detect the EM counterparts for a few to several hundreds lensed BNS merger GW events within an observation period of ten years in the era of the 3rd generation GW detectors. Note here that if the success rate of matching the lensed GW among the candidates in the BNS localized area is P_ Host∼ 0.20 <cit.>, then the expected number of events that can be electromagnetically detected may be down scaled by a factor of ∼ 5. Even in this case, our results will still suggest that the EM counterparts of upto ∼ 6-36 lensed BNS mergers can be detected within a detection period of ten years. We adopt the simple kilonova model without considering the anisotropy of its emission, comparing with the anisotropic afterglow model. In reality, kilonova emission should be anisotropic and the observed magnitude depends on the viewing angle. Many factors, such as the geometry of kilonova, compositional inhomogeneity, and wavelength-dependent opacities, can contribute to this anisotropy, of which the efficacy may further depend on the spatial distribution and properties of the lanthanide-rich and lanthanide-poor ejecta material <cit.>. Some recent studies on the anisotropic kilonova found a factor of ∼ 2 - 3 variation of the kilonova brightness at different viewing angles <cit.>, which would introduce a variation of the peak magnitude over different viewing angles no larger than 1 mag. Our rough estimates show that the detectable lensed event rates may decline by a factor of ≲ 2 for those telescopes with high limiting magnitude, e.g., JWST and RST, if assuming the peak magnitudes of all kilonovae are viewed at the angle with the faintest brightness, i.e., 1 mag lower than the simple model we adopted. This suggests that our conclusions will not be significantly changed if the anisotropic kilonova model is considered. We also consider the conservative criterion, i.e., the faintest images of the lensed events can be detected by the joint GW+EM observations. Figure <ref> shows the results on the expected rate for the detectable EM counterparts of the lensed BNS mergers detected by CE as a function of the limiting magnitude m_lim under the conservative criterion, similar to that shown in Figure <ref> by adopting the regular criterion. 
Comparing with the bottom panels of Figure <ref>, lensed event rate at lower limiting magnitude (m_ lim≲ 25) decline faster than that of higher limiting magnitude, which means that the quadruple-image case fraction increases as the limiting magnitude decrease. This is because, under the regular criterion, the magnification bias of quadruple-image cases is larger than that of double-image cases, since we apply the brightest image of quadruple-image cases (see also Figure <ref>). In addition, we also consider the cases that the response time of afterglow searching is shortened to 1 hr compared with 10 hr under the regular criterion. When the observing angle is small, the afterglow is substantially brighter if the response time can be short, and thus the detection rate of afterglows can be enhanced. The magnitude of kilonovae is also chosen as the magnitude at the phase of 1 hr, which is much ahead of the peak magnitude (see Figure <ref>). Similarly as Figure <ref>, Figure <ref> shows the expected rate of the detectable EM counterparts for the lensed BNS mergers identified by CE as a function of the limiting magnitude m_lim, except a quick response time of 1 hr is adopted. This change of response time enhances the detectability of afterglows, which enables the rate for the detectable afterglows to approach the upper limits at smaller m_ lim (middle panel). Thus the chance of detecting a lensed event at lower limiting magnitude condition increases accordingly. Not surprisingly, detectable lensed event rate of kilonovae at the phase of 1 hr decline dramatically (left panel), which indicates that the advantage of afterglow detection over kilonovae detection at early phase. Note here that the g-band detection of kilonovae at the phase of 1 hr has higher rate over z-band detection, which is caused by the high energy emissions at the early phase of kilonova and can be also seen from Figure <ref>. Among those BNS mergers detected by GLOC, the number of lensed events increases slightly comparing with those by CE (see Table <ref>). Therefore, the expected rate for the detectable EM counterparts of the lensed BNS mergers identified by GLOC also increases accordingly. Since the number of lensed GW events of BNS mergers detected by LIGO A+/Voyager is small (≲ 0.002/0.2 per year), almost no lensed GW+EM signal from BNS merger is expected to be detected by it (see Table <ref>). §.§ Redshift and magnification Figure <ref> shows the redshift probability distributions of those lensed events detected by ET and CE that can be electromagnetically detected in the z- or H-band by an arbitrary survey with limiting magnitude of 24, 26, 28, and 30, respectively. As expected, the peak of the redshift distribution for GW+EM jointly detected events increases with increasing m_ lim. This peak is z ∼ 0.6-0.9/1.2-1.6 for those lensed GW events detected by ET or CE and by an EM telescope with m_ lim=24 at the z-/H-band, while it shifts to z∼ 2.0 for those lensed GW events detected by ET or CE and by an EM telescope with m_ lim=30 at the z-/H-band. Furthermore, the redshift distribution is more extended for the cases with higher m_ lim, and those sources with redshift z>3 can be still detectable. These are simply caused by that the fainter m_ lim, the more distant events can be detected. 
Comparing the results of searching for the EM signals obtained by using the z-band with those by using the H-band, as seen from Figure <ref>, the H-band filter appears more efficient than the z-band filter in searching for the high redshift events, which illustrates that the redder filter band may be more suitable for searching lensed EM signals partly due to cosmological redshift effect. To be more specific, the spectra of the kilonova associated with GW170817 at 1.5 days peaks around 6,000Å in the optical band <cit.>. For such a kilonova at z=1, the peak of the spectra at the observer's rest frame shifts to 12,000Å due to the cosmological redshift. For those lensed sources with even higher redshift, the cosmological redshift effect will be more pronounced. In addition, CE tends to detect many more high redshift events than ET does simply it is more sensitive than ET, especially at the low frequency end. Figure <ref> shows the probability distribution of the magnification factor (μ) of the second arrived images for those GW and EM jointly detected lensed events. When m_ lim≥ 26 and lensed GW+EM event rates are nearly reaching the upper limit, the distribution of logμ for double-image cases peaks around 0.2-0.5 for both ET and CE (ET-doub and CE-doub panels), and the distribution of logμ for quadruple-image cases peaks around 0.5-0.8 for both ET and CE (ET-quad and CE-quad panels). However, when the limiting magnitude is comparably low (m_ lim = 24), both doubly-imaged and quadruple-image cases prefer higher magnitude factor and logμ peak around 0.5-0.7 and 0.9-1.2. This means that the resulting logμ distribution for those jointly detected events does depend on the limiting magnitude m_ lim set for searches of the EM counterparts, especially when m_ lim is not high and the event rate is not approaching upper limit. We also note here that the logμ distribution extends to logμ <0 (μ <1) for double-image cases, which means that fainter images of double images may be de-magnified for some cases. §.§ Angular separations and time delays Figures <ref> and <ref> show the two-dimensional distributions of angular separations and time delays for different image pairs of those mock lensed GW events with double and quadruple images, respectively. We obtain the mock samples by the following procedure. First, we sample 10^5 detectable double-image cases and 10^5 detectable quadruple-image cases with SNR ϱ > 8 following the same parameter distributions described in Section <ref> and <ref>. Then, we solve the lens equations for each case to obtain the precise location for each lensed signals and calculate the time delays and angular separations between different lensed signals <cit.>. We also list in Table <ref> the fraction of those lensed events, detectable by both CE and an EM telescope (Rubin/CSST/Euclid/RST/JWST) that have the time delays larger than 10^2 s, 10^3 s, 1 hr, and 1.0 day, and the fraction of those lensed events that with angular separation (between the two images of the double-image cases or between i-th and j-th arrived images of the quadruple-image cases) larger than the angular resolution of Rubin, CSST, Euclid, RST, or JWST. All these fractions are obtained by similar procedures as those introduced above but with additional EM conditions (limiting magnitude of each telescope, time delay cutoff, and angular separation cutoff), for which the EM counterparts parameter distributions adopted are the same as those in Section <ref>. 
We mainly consider the EM detection of the second arrived image and ignore the first image in this paper. The time delay between the second image and the first image (Δ T_12) normally ranges from minutes to months, as shown in Figures <ref> and <ref> and Table <ref>. In general, kilonovae fade away with time at a rate of 0.3-2 mag day^-1 after their peak luminosity <cit.>. For those afterglows with θ_obs < θ_w, as we can see in Figure <ref>, they also fade away, with a minor decline (≲ 1 mag day^-1) in their LCs. Thus, for those cases with Δ T_12 significantly larger than a few days, the EM signal of the first image may have already faded away (see Figures <ref> and <ref>) and be hard to detect after the arrival of the second image. However, a small fraction (∼ 1%-50%) of the double-image cases have Δ T_12≲ 1 day (see Figure <ref> and Table <ref>), for which the EM signal of the first image can still be around its peak luminosity and thus can also be detected simultaneously with the EM signal of the second image. For the quadruple-image cases, the fraction having Δ T_12≲ 1 day becomes substantially larger (e.g., ≳ 40%, see Figure <ref> and Table <ref>), and thus it may be easier to detect the EM signals of the first images simultaneously with those of the second ones. One may similarly consider the simultaneous detection of the third and fourth EM images of those lensed GW events with quadruple images according to Figure <ref>. We also note here that the time delay between different image pairs is also relevant to the GW detection of these lensed events. If the time delay between an image pair is shorter than the duration of the merger signal observed by the GW detectors, the GW signals from both images may blend or interfere with each other, which leads to additional complexity in the data analysis. LIGO/Virgo can collect the GW signals of BNS mergers over a duration of ∼ 100 s, while future GW detectors, i.e., ET and CE, may detect the BNS merger GW signals over a period of ∼ 1000 s, depending on their sensitivity at the low-frequency end <cit.>. Almost all the cases with double images have Δ T_12>1000 s, and thus the two GW signals can be well resolved in the time domain. For those lensed GW events with quadruple images, the situation is similar to the double-image cases, except that the time delay of the image pair `quad(23)' could be smaller than 1000 s for a relatively large fraction (∼ 5%-30%) of lensed events (see Table <ref> and Figure <ref>). Nevertheless, it should be safe to ignore the cases of GW signal overlap for the lensed BNS mergers. The angular separation between a pair of lensed EM images must be larger than the angular resolution of a telescope in order to resolve the two images, provided they are bright enough to be detected. As seen from Figures <ref> and <ref>, the image pairs of most lensed GW events have angular separations larger than 0.1 arcsec, and some of them (36.2% for those with double images, 5.9-37.6% for those with quadruple images) have angular separations larger than 1 arcsec. Our results are broadly consistent with the angular separations of lensed quasars <cit.> and lensed galaxies <cit.>. 
This suggests that the EM images of most jointly EM and GW detected lensed events can be resolved by spaceborne and ground-based telescopes with angular resolutions as high as ≲ 0.1 arcsec, such as CSST, RST, JWST, and the next-generation ground-based giant telescopes (like TMT, GMT, and EELT) with adaptive optics, if those EM signals are detectable; however, telescopes with seeing-limited angular resolution (≳ 0.7 arcsec; e.g., Rubin) may find it difficult to resolve the lensed images separately. For those images that cannot be spatially resolved, the EM signals from different images may be blended with each other and thus lead to more complicated LCs. One possible method to overcome this is to combine the time delays and LCs: the LCs of the EM counterparts could show re-brightening signatures, through which those unresolvable lensed events can be identified <cit.>. § CONCLUSIONS AND DISCUSSIONS In this work, we investigate the detectability of both the GW signals from lensed BNS mergers and their EM counterparts. We estimate the detectable rates of both the GW and EM signals from these mergers by future powerful GW detectors and telescopes, using the SIE lens model and simple models for the EM counterparts (afterglows and kilonovae) via the Monte Carlo method. Our main conclusions are summarized as follows. * ET is expected to detect 5.32^+26.1_-5.1 strongly lensed GW events from BNS mergers per year with at least two images having SNR larger than 8, while CE stage-2 may detect 67.3^+332_-64.7 such events per year. Among those lensed events, 4.10^+20.2_-3.9 for ET and 63.7^+315_-61.1 for CE stage-2 are expected to be detected with all the lensed images being detectable. LIGO Voyager may detect one such event within a decade, whereas it is difficult for LIGO A+ to detect even a single lensed BNS merger within a decade. * The fraction of the detectable EM counterparts associated with those lensed GW events from BNS mergers that can be identified by a search survey/telescope depends on the adopted filter and its limiting magnitude m_lim. Adopting the “redder” band filters (e.g., H-band) can be more efficient in searching for the EM counterparts than adopting the “bluer” bands (e.g., z- and g-band). At m_lim≲ 26, the detectable rate in the H-band is at least one order of magnitude higher than that in the z- or g-band under the same limiting magnitude condition. One should be cautious that this fraction estimate may be affected by several uncertainties, including those in the EM counterpart modelling and the kilonova luminosity function. In addition, the response time of the telescopes can also affect this fraction. * For realistic optical/infrared observations of those EM counterparts, we find that afterglow observations could be more promising than kilonova observations in the first hour of telescope follow-up searches. In contrast, kilonova observations dominate in the later ∼ 1-3 days of observation. * RST-like and JWST-like telescopes can detect the EM counterparts of up to 3.07^+15.17_-2.95 and 2.81^+13.89_-2.70 lensed BNS merger GW events per year in the era of CE stage-2. With observations over a period of several years or more, the cumulative number of such events will be sufficient for cosmological applications. However, Rubin-like, CSST-like, and Euclid-like telescopes can hardly detect even a single event. 
* The redshift distributions of those lensed GW+EM events peak at z ∼ 0.6-0.9/1.2-1.6 if they are detected in the z-/H-band with m_lim = 24 mag, and the peaks shift to higher redshift (z ∼ 2) if m_lim=30 mag. CE tends to detect more high-redshift events than ET. In addition, the distribution of the magnification factor logμ of the second arrived lensed images peaks around 0.2-0.5/0.5-0.8 for double-image/quadruple-image cases with limiting magnitude m_lim≳ 26. Adopting a lower limiting magnitude (e.g., m_lim = 24) for the searches, the resulting distributions peak around 0.5-0.7/0.9-1.2 for double-image/quadruple-image cases. * Among those lensed GW+EM cases, ∼ 10%-50% of the double-image cases and ≳ 50% of the quadruple-image cases have Δ T_12≲ 1 day. This means that the first arrived EM counterparts may still be around their peak luminosity for those whose LCs decay slowly (e.g., the H-band LC of a kilonova), and can be detected simultaneously with the second arrived images. A similar consideration applies to the simultaneous detection of the third and fourth arrived images. Once those multiple lensed image pairs are detected, most of the lensed events (≳ 85%) can be resolved by RST and JWST, which have angular resolutions of ∼ 0.1 arcsec, except for the `quad(23)' image pair. The results obtained in this paper are instructive for future realistic searches of lensed GW+EM events, which differ from searches that do not consider the lensing effect. One may have to make extra efforts to successfully search for these lensed EM counterparts. First, it is required to quickly identify the lensed GW signals, i.e., the time between receiving two potential lensed GW alerts and identifying them as coming from the same lensed source needs to be as short as possible. Second, the lensed host galaxy signals also need to be quickly identified via all-sky galaxy surveys. One may compare the coarse localization of the source by the GW detectors (typically ∼ 1-10 deg^2 for the third-generation GW detector network) with a future complete deep all-sky survey of lensed galaxies, and match the BNS merger with its lensed host galaxy through the observed time-delay and amplification information. Likewise, the time consumed by this step should also be as short as possible. Third, the response time of the telescopes should not be too long, so as not to miss those transients before they fade away. We have investigated a number of factors that may affect the estimates of the rate of detectable GW+EM events, including the uncertainties in the BNS merger rate, the different criteria set for the identification of those lensed events, and the different detectors and telescopes. However, there are other factors that may also affect the rate estimates. For example, we adopt simple models for the kilonova and afterglow, for which a number of model parameters are fixed at values preferred by current observations of GW170817 and sGRBs (see Section <ref>). In reality, each of these parameters may differ for different kilonovae and sGRBs. One may need to consider the distributions of these parameters to improve the estimates. In addition, the simplified kilonova model adopted in this paper does not take the viewing angle and anisotropy into consideration. To account for the anisotropy of the kilonova, one needs to specify the geometry, velocity, opacity, and other properties of both the dynamical ejecta and the disk wind, which may be better understood with future NR simulations and constrained by observations. 
With many more detections of kilonovae and afterglows in the future, we believe that both the kilonova and afterglow models can be further improved. We note here that only single bandpass filters are considered in obtaining the estimates, for simplicity. One may need to further consider multiple bandpass filters for the search of the GW+EM counterparts by each specific (survey) telescope, which may also lead to some changes in the rate estimates for the detectable GW+EM events. In addition, if the localization of a source can be determined precisely, it is reasonable to conduct EM searches in the given sky area with a longer exposure time and thus achieve a higher limiting magnitude than those listed in Table <ref>. This may enable the detection of fainter images and lead to a higher detectable lensed event rate. As stated before, a short response time of the telescopes does not contribute to the detectability of kilonovae. However, a longer response time to a GW trigger, which may be weeks, will decrease the detectability of both the kilonovae and the afterglows, depending on the decay rates of their LCs. Note also that the mission time overlap of different future GW detectors and EM telescopes is not considered in the estimates presented in this paper. Rubin is about to come into operation. JWST has started observing and has proved its powerful capability in the last few months. Euclid/CSST/RST are set to be launched in the next few years. LIGO will finish its upgrade to LIGO A+ by the mid-2020s and then upgrade again to LIGO Voyager. ET and CE stage-1 may begin operation in the 2030s, which may be at the twilight of the designed lifetimes of the above EM telescopes. Possible extensions of the mission times of these EM telescopes might overlap with the era of ET and CE stage-1, or even beyond. Considering this timeline, it would be extremely lucky if a lensed GW+EM event could be detected in this decade, but it is possible to detect one or more lensed GW+EM events per year with ET/CE stage-1 and Euclid/RST/JWST-like telescopes in the next decade (the 2030s), and more than ten lensed GW+EM events per year in the era of CE stage-2 in the 2040s. Furthermore, the next-generation ground-based giant telescopes, i.e., the Thirty Meter Telescope (TMT), the Giant Magellan Telescope (GMT), and the European Extremely Large Telescope (EELT), are not considered in the present paper. These telescopes can achieve both deep imaging (∼27 mag) and high resolution (≲ 0.1 arcsec) <cit.>, which may also be helpful in observing the EM counterparts of the lensed BNS merger GW events. § ACKNOWLEDGEMENTS We thank the referee for helpful comments and suggestions. We thank Furen Deng for useful discussions. This work is partly supported by the National Natural Science Foundation of China (Grant Nos. 12273050, 11873056, 11690024, 11991052), the National Key Program for Science and Technology Research and Development (Grant No. 2020YFC2201400), and the Strategic Priority Program of the Chinese Academy of Sciences (Grant No. XDB 23040100). This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported by the Spanish MINECO through grant AYA2017-84089. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author.
http://arxiv.org/abs/2307.02270v1
20230705131037
SVDM: Single-View Diffusion Model for Pseudo-Stereo 3D Object Detection
[ "Yuguang Shi" ]
cs.CV
[ "cs.CV", "cs.AI" ]
SVDM: Single-View Diffusion Model for Pseudo-Stereo 3D Object Detection Yuguang Shi, The authors are with the School of Automation, Southeast University, Nanjing 210096, China, and also with the Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Nanjing 210096, China (e-mail: syg@seu.edu.cn; xblu2013@126.com). August 1, 2023 =============================================================================================================================================================================================================================================================================================================== One of the key problems in 3D object detection is to reduce the accuracy gap between methods based on LiDAR sensors and those based on monocular cameras. A recently proposed framework for monocular 3D detection based on Pseudo-Stereo has received considerable attention in the community. However, the following problems have been identified in existing practice: (1) the monocular depth estimation network and the Pseudo-Stereo detector must be trained separately, (2) the framework is difficult to make compatible with different stereo detectors, and (3) the overall computational cost is large, which affects the inference speed. In this work, we propose an end-to-end, efficient pseudo-stereo 3D detection framework by introducing a Single-View Diffusion Model (SVDM) that uses a few iterations to gradually deliver informative right-image pixels to the left image. SVDM allows the entire pseudo-stereo 3D detection pipeline to be trained end-to-end and can benefit from the training of stereo detectors. Afterwards, we further explore the application of SVDM in depth-free stereo 3D detection, and the final framework is compatible with most stereo detectors. Among multiple benchmarks on the KITTI dataset, we achieve new state-of-the-art performance. 3D object detection, view synthesis, autonomous driving. § INTRODUCTION Recent solutions that generate pseudo-sensor representations from a monocular camera utilize a pretrained monocular depth estimation network. For example, Pseudo-Stereo presents an approach to infer a virtual view of a scene from a single input image, followed by applying LIGA-Stereo <cit.>, an existing stereo-based detector. Pseudo-Stereo achieves 17.74 AP_3D in the moderate case on the KITTI benchmark<cit.>. While Pseudo-Stereo is conceptually intuitive, the method of generating virtual views from depth maps suffers from several limitations: 1) Although the virtual views do not require real second views from the dataset for training, ground-truth depth is still required to train the monocular depth estimation network; collecting large and diverse training datasets with accurate ground-truth depth for supervised learning <cit.> is a tedious and difficult challenge in itself, so this approach inevitably increases the burden on the model. 2) The pseudo-stereo approach synthesizes a pair of stereo images by forward warping. 
As shown in Figure 2, due to the nature of forward warping, the pseudo-right image will contain pixel artifacts that are lost due to occluded regions and in some places collisions will occur when multiple pixels will land in the same location, creating visually unpleasant holes, distortions and artifacts, thus not exploiting the potential of image-level generation for pseudo-stereoscopic 3D detection very well. 3)Stereo 3D detectors detect a variety of principles. While some of the current higher accuracy methods in the KITTI dataset ranking include a rigid accuracy depth estimation network, some geometry-based methods still have the advantages of their simplicity of principle, fast inference and scalability in low-cost scenarios. However, Feature-level Generation in pseudo-stereology is difficult to be applied directly to these methods, and has the disadvantage of limited fitness. A natural question to ask, however, is whether it is possible to design a new perspective generator without depth estimation networks at the image-level? In the recent literature, diffusion not only provides significantly simpler architectures, but also offers fewer hyperparameters and simpler training steps than the notoriously difficult to train GAN. While diffusion models can generate high quality images, no study has yet demonstrated that diffusion models remain effective for the task of pseudo-view generation for stereo 3D detection. Considering the above challenges, this study develop a new Single-View Diffusion Model (SVDM) for the high quality, spatially consistent virtual view synthesis in real scenarios. Specifically, our method assumes that the left image in stereo views is known, replaces the Gaussian noise of the diffusion model with left image pixels during training or testing, and gradually diffuses the pixels of the right image to the full image. Benefiting from the subtle disparity of pixels in stereo images, a few iterations can produce promising results. Note that the ground truth actual views in the dataset are only used in training. Compared to prior work, SVDM discards the monocular depth estimation network and provides a simple end-to-end approach, so the resulting framework is compatible with most existing stereo detectors and depth estimators. To the best of our knowledge, SVDM is the first diffusion model approach to generate virtual views from a single image input without depth estimation networks and geometric priors. Our contributions are summarized as follows: ∙We introduce SVDM, an image-to-image diffusion model for pseudo-stereoscopic view generation tasks without geometric priors and depth estimation networks. SVDM provides competitive results compared to current monocular 3D detectors on the KITTI-3D benchmark. ∙We introduce three new diffusion model approaches for transforming new view generation tasks into image-to-image translation tasks. ∙We introduce ConvNeXt-UNet, a new UNet architectural variant for new view synthesis, showing that architectural changes are crucial for high-fidelity results. § RELATED WORK In this part, we briefly review the literature on monocular 3D object detection ,view synthesis and diffusion models in recent years. Monocular 3D detection: According to the input representation, monocular 3D detectors are roughly divided into image-based methods and depth-based methods. Image-based methods focus on reducing the dimensionality of 3D problems to 2D or 2.5D problems to save the amount of calculation from depth estimation networks. 
A few works <cit.> introduce perspective projection model to calculate depth information, but projection process introduces the error amplification problem, hurting the performance of deep inferences. M3D-RPN <cit.> is the first anchor-based method, these 2D and 3D anchor boxes are placed on the image pixels, the depth parameter is encoded by projecting the 3D center location, and some works <cit.> have tried to improve this method. CenterNet <cit.> is an anchor-free 2D detector that has a profound impact on 3D detection by applying multiple heads to predict 3D properties, and a series of improved methods <cit.> based on point features have been proposed. Inspired by the success of monocular depth estimation networks, performances of state-of-the-art depth-based methods aggregate image and depth features to obtain depth-aware features due to the geometric information loss during imagery projection. Mono3D <cit.> exploits segmentation, context and location priors to generate 3D proposals. MonoGRNet <cit.> employs sparse supervision to directly predict object center depth, and optimizes 3D information through multi-task learning. D4LCN <cit.> proposes depth-guided dynamic expansion local convolutional network, which address the problem of the scale-sensitive and meaningless local structure in existing works. DDMP <cit.> alleviates the challenge of inaccurate depth priors by combining multi-scale depth information with image context. A line of Transformer-based methods <cit.> have a similar pipeline in that encode depth information into a 2D detector named detr. Another family of Pseudo-LiDAR architecture such as <cit.>, back-projects depth map pixels into point-cloud 3D coordinates, and then apply ideas of point-cloud based detector. These methods narrow the accuracy gap between monocular and lidar and can be continuously improved by subsequent depth estimation networks and point-cloud based detectors. RefinedMPL <cit.> uses PointRCNN <cit.> for point-wise feature learning in a supervised or an unsupervised scheme from pseudo point clouds prior. AM3D <cit.> uses a PointNet <cit.> backbone for point-wise feature extraction from pseudo point clouds, and employs a multi-modal fusion block to enhance the point-wise feature learning. MonoFENet <cit.> enhances the 3D features from the estimated disparity for monocular 3D detection. Decoupled-3D <cit.> recovers the missing depth of the object using the coarse depth from 3D object height prior with the BEV features that are converted from the estimated depth map. Pseudo-Stereo <cit.> further proposes the intermediate stereo representation for converting monocular imagery data to Pseudo-LiDAR signal. Despite the improvement of Pseudo-Stereo, its novel virtual view synthesis methods have certain limitations in the scope of application of stereo detectors. Novel View Synthesis: Novel view synthesis is a highly ill-posed problem that focuses on generating new views of scenes. The classic work uses the depth map to forward warp the image pixels into the novel views. In order to overcome the challenging problem that large quantities of ground-truth depth data are difficult to obtain, some self-supervised methods <cit.> only use stereo raw images to train a model. To deal with holes, cracks, and blurs, there are also attempts to study the improvement of the quality of the synthetic images. Tulsiani et al. <cit.> propose a layed depth image (LDI) 3D representation to capture the texture and depth of the foreground and background. 
In Stereo Magnification <cit.>, Chen et al. propose a learning framework based on multiplane images (MPIs), and a series of MPI-based methods <cit.> have been developed. Inspired by NeRF <cit.> and MPI <cit.>, MINE <cit.> achieves competitive novel view images and depth maps from a single input image. To reduce the influence of parallax on the network, SivsFormer <cit.> designs a warping and occlusion handling module to improve the quality of the synthesized images. Nonetheless, these methods heavily rely on specially designed pipelines or explicit geometric models. Recently, denoising diffusion models have demonstrated great potential in various computer vision fields, including super-resolution <cit.>, image generation <cit.>, object detection and segmentation <cit.>, etc. In this work, we consider the particular image generation task of pseudo-stereo 3D detection, take full advantage of binocular stereo imaging, and exploit diffusion models to propose a novel geometry-free view generation framework, which we call SVDM. Our framework can be applied to both offline and online generation based on different diffusion model methods, and achieves good generation results without depth images and explicit geometric priors. § THE PROPOSED METHOD §.§ Preliminaries §.§.§ Stereo 3D Detector Stereo 3D object detection is a unique branch of 3D detection that aims to predict the location, size, orientation and category of an object in 3D space using only a stereo camera sensor. According to the type of training data, stereo-image-based methods can be generally classified into three types. The first type only requires stereo images with corresponding annotated 3D bounding boxes; this approach aims to take full advantage of the geometric relationships and pixel constraints of stereo images without using depth estimation networks, and is represented by TLNet <cit.>, Stereo R-CNN <cit.> and Stereo CenterNet <cit.>. The second type requires an additional depth map as training data, and representative methods are the pseudo-LiDAR family <cit.>, IDA-3D <cit.>, YOLOStereo3D, etc. The third type is the volume-based method, which encodes 3D objects and locates them in a 3D feature volume, represented by the DSGN series of methods and LIGA-Stereo. For a fair comparison and to demonstrate the scalability of our approach, we use three methods, Stereo R-CNN, LIGA-Stereo and YOLOStereo3D, as our base stereo 3D detection systems, and the generated pseudo-stereo images are fed to all three methods. §.§.§ Denoising Diffusion Probabilistic Models A T-step Denoising Diffusion Probabilistic Model (DDPM) <cit.> consists of two processes: the forward process (also referred to as the diffusion process) and the reverse inference process. The forward process q(x_t|x_t-1) adds noise to the image. For example, given an image x_0, the forward process adds Gaussian noise to it over T steps to obtain x_1, x_2, ⋯, x_T. The step sizes are controlled by a variance schedule {β_t∈(0,1)}_t=1^T. Each step t in the forward process depends only on step t-1, so it can be regarded as a Markov process. 
q(𝐱_t|𝐱_t-1)=𝒩(𝐱_t ; √(1-β_t)𝐱_t-1, β_t𝐈) q(x_1, …, x_T|x_0)=∏_t=1^T q(x_t|x_t-1) If the forward process is the process of adding noise, then the reverse process is the denoising process of diffusion. If the reversed distribution q(x_t-1|x_t) were available, we could restore the original data distribution from a standard Gaussian distribution. Unfortunately, we cannot easily estimate q(x_t-1|x_t) because doing so requires the entire dataset, and therefore we need to learn a model p_θ to approximate these conditional probabilities in order to run the reverse diffusion process. p_θ(x_0, …, x_T-1|x_T)=∏_t=1^T p_θ(x_t-1|x_t) p_θ(x_t-1|x_t) = 𝒩(x_t-1 ; μ_θ(x_t, t), Σ_θ(x_t, t)) In one sentence, the diffusion model destroys the training data by gradually adding Gaussian noise, and then restores the data by learning the reverse denoising process. After training, the diffusion model can be used by passing randomly sampled noise into the model and generating data through the learned denoising process. The training objective of DDPM is to optimize the Evidence Lower Bound (ELBO). Finally, the objective can be simplified to optimizing: 𝔼_x_0, ϵ‖ϵ-ϵ_θ(x_t, t)‖_2^2 where ϵ is the Gaussian noise in x_t, which is equivalent to ∇_x_tln q(x_t|x_0), and ϵ_θ is the model trained to estimate ϵ. Most conditional diffusion models keep the forward process unchanged and directly inject the condition into the training objective: 𝔼_x_0, ϵ‖ϵ-ϵ_θ(x_t, y, t)‖_2^2 Since p(x_t|y) does not explicitly appear in the training objective, it is difficult to guarantee that the diffusion finally reaches the desired conditional distribution. Apart from the conditioning mechanism, the Latent Diffusion Model (LDM) performs the diffusion and inference processes in the latent space of a VQGAN, which is proven to be more efficient and generalizable than operating on the original image pixels. §.§ Single-View Diffusion Model The proposed framework views the novel view generation task as an image-to-image translation (I2I) task based on a diffusion model, which takes a single source image captured by a camera as input and aims to generate a predicted view. While standard diffusion models contaminate and restore images with Gaussian noise, in this work we consider three novel diffusion methods for establishing a mapping between the input and output domains. The pipeline of the proposed method is shown in Fig. 3. Our three diffusion model methods are presented in Section 3.2, including the Gaussian noise operator in Section 3.2.a, the view image operator in Section 3.2.b, and the one-step generation in Section 3.2.c. §.§.§ Gaussian Noise Operator For diffusion probabilistic models used for an image generation task, the forward diffusion process adds noise to a clean source image until the image follows a standard normal distribution, and the reverse inference process maps the noise back to the image; however, this approach is not suitable for the vast majority of downstream tasks. To learn the translation between two different view domains directly in the bidirectional diffusion process of the diffusion model, following BBDM <cit.>, we use the Brownian Bridge diffusion process instead of the existing DDPM formulation. A Brownian bridge is a continuous-time stochastic model in which the probability distribution during the diffusion process is conditioned on the starting and ending states. 
Specifically, the state distribution at each time step of a Brownian bridge process starting from point x_0 ∼ q_data(x_0) at t=0 and ending at point x_T at t=T can be formulated as: q_BB(x_t|x_0, y) = 𝒩(x_t ; (1-m_t) x_0 + m_t y, δ_t𝐈) where m_t = t/T and δ_t is the variance. To avoid the problem that a large variance may cause the framework to fail to train properly, the variance schedule of the Brownian Bridge diffusion process can be designed as: δ_t = 1-((1-m_t)^2+m_t^2) = 2(m_t-m_t^2), which is then scaled as δ_t = 2s(m_t-m_t^2), where s is a scaling factor set to 1 by default; the value of s is adjusted to control the diversity of the samples. The complete forward process can be described as follows: when t=0, we have m_0=0, so the mean equals x_0 (with probability 1) and the variance is δ_0=0. When the diffusion process reaches the target t=T, we have m_T=1, so x_T=y and the variance is δ_T=0. The intermediate state x_t is calculated in discrete form as follows: x_t = (1-m_t) x_0 + m_t y + √(δ_t)ϵ_t x_t-1 = (1-m_t-1) x_0 + m_t-1 y + √(δ_t-1)ϵ_t-1 where ϵ_t, ϵ_t-1∼𝒩(0, 𝐈). Expressing x_0 from the second equation and substituting it into the first, we obtain the transition probability q_BB(x_t|x_t-1, y): q_BB(x_t|x_t-1, y)=𝒩(x_t ; (1-m_t)/(1-m_t-1) x_t-1 + (m_t - (1-m_t)/(1-m_t-1) m_t-1) y, δ_t|t-1𝐈) where δ_t|t-1 is computed from δ_t and δ_t-1 as: δ_t|t-1 = δ_t - δ_t-1(1-m_t)^2/(1-m_t-1)^2 In the reverse process of our method, the diffusion process starts from a source image sampled from a known view and proceeds step by step to the target view distribution, that is, predicting x_t-1 based on x_t: p_θ(x_t-1|x_t, y)=𝒩(x_t-1 ; μ_θ(x_t, t), δ̃_t𝐈) where μ_θ(x_t, t) is the predicted mean, which is learned by a neural network with parameters θ based on the maximum likelihood criterion, and δ̃_t is the variance of the noise at each step, which does not have to be learned and has the analytic form δ̃_t = δ_t|t-1·δ_t-1/δ_t. The whole training process and sampling process are summarized in Algorithm 1 and Algorithm 2. §.§.§ View Image Operator However, the Brownian Bridge diffusion process introduces additional hyperparameters that increase the complexity of the pipeline and the experiments. To overcome this, we propose a method based on a View Image Operator: specifically, we treat the target image as a special kind of noise and iteratively convert the target image to the source image. Given the initial state x_0 and the destination state y, the intermediate state x_t can be written in discrete form as follows: x_t = √(α_t) x_0 + √(1-α_t) y Note that this is essentially the same as the noising procedure, but instead of adding noise we add a progressively higher-weighted novel view image. In order to sample from the learned distribution, we use Algorithm 3 to reverse the view-image transformation. Following <cit.>, this method simply uses a schedule in terms of α_t to interpolate: α_t = f(t)/f(0), f(t) = cos((t/T + s)/(1 + s)·π/2)^2 where s = 0.008. The difference between the linear and cosine schedules is shown in Figure 4, where it can be seen that in the later stages of the linear schedule the state is almost purely the target view, while the cosine schedule blends in the target view more slowly. 
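A minimal, hedged sketch (not the released SVDM code) of the two forward operators described above: the Brownian-bridge step x_t = (1-m_t) x_0 + m_t y + √(δ_t) ε with m_t = t/T and δ_t = 2s(m_t - m_t²), and the view-image operator x_t = √(α_t) x_0 + √(1-α_t) y with the cosine schedule (s = 0.008). Function names and tensor shapes are illustrative assumptions.

```python
import math
import torch

def brownian_bridge_state(x0, y, t, T, s=1.0):
    """Forward Brownian-bridge state; returns x0 exactly at t=0 and y exactly at t=T."""
    m_t = t / T
    delta_t = 2.0 * s * (m_t - m_t ** 2)      # variance schedule, zero at both endpoints
    eps = torch.randn_like(x0)
    return (1.0 - m_t) * x0 + m_t * y + math.sqrt(delta_t) * eps

def cosine_alpha(t, T, s=0.008):
    """Cosine schedule alpha_t = f(t)/f(0), f(t) = cos((t/T + s)/(1 + s) * pi/2)^2."""
    f = lambda u: math.cos((u / T + s) / (1.0 + s) * math.pi / 2.0) ** 2
    return f(t) / f(0)

def view_image_state(x0, y, t, T):
    """Forward view-image state: the target view y is blended in instead of noise."""
    a = cosine_alpha(t, T)
    return math.sqrt(a) * x0 + math.sqrt(1.0 - a) * y
```

In a training loop, a network would be regressed against the posterior mean (or against the injected view component) of states produced by either operator, and the reverse chain would invert these maps over the full or sub-sampled time grid; that part is omitted here.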
§.§.§ Accelerated Sampling And One-Step Generation Despite their high-quality generation performance, DPMs still suffer from slow sampling, as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. In recent years, several studies have been devoted to reducing the number of steps of DPMs, such as <cit.>, etc. For pseudo-stereo 3D detection, a slow novel view generation speed can greatly hinder detection and deployment, so in this section we propose two schemes for accelerating the inference process of SVDM. One is a method that adds a high-order solver for the guided sampling of DPMs, and the other is an improved one-step generation method. §.§.§ Accelerated Sampling Similar to the basic idea of DDIM <cit.>, the inference process of BBDM can be accelerated by utilizing a non-Markovian process while keeping the same marginal distributions as the Markovian inference process. Given a sub-sequence {τ_1, τ_2, ⋯, τ_S} of [1:T] of length S, the inference process can be defined on the subset of latent variables {x_τ_1, x_τ_2, ⋯, x_τ_S}: q_BB(x_τ_s-1|x_τ_s, x_0, y)=𝒩((1-m_τ_s-1) x_0 + m_τ_s-1 y + √(δ_τ_s-1-σ_τ_s^2)·(x_τ_s-(1-m_τ_s) x_0-m_τ_s y)/√(δ_τ_s), σ_τ_s^2𝐈) §.§.§ One-Step Generation In this section, our objective is to create generative models that facilitate efficient, single-step generation without sacrificing the important advantages of iterative refinement. Following consistency models <cit.>, these advantages include the ability to trade off compute for sample quality when necessary, as well as the capability to perform zero-shot data editing tasks. As illustrated in Fig. 5, we build on top of the probability flow (PF) ordinary differential equation (ODE) in continuous-time diffusion models <cit.>, whose trajectories smoothly transition the data distribution into a tractable noise distribution. We propose to learn a model that maps any point at any time step to the trajectory's starting point. A notable property of our model is self-consistency: points on the same trajectory map to the same initial point. Consistency models allow us to generate data samples (initial points of ODE trajectories, e.g., x_0 in Fig. 5) by converting random noise vectors (endpoints of ODE trajectories, e.g., x_T in Fig. 5) with only one network evaluation. Importantly, by chaining the outputs of consistency models at multiple time steps, we can improve sample quality and perform zero-shot data editing at the cost of more compute, similar to what iterative refinement enables for diffusion models. This also eliminates the need for a pre-trained diffusion model altogether, allowing us to train a consistency model in isolation, which situates consistency models as an independent family of generative models. For more detailed derivations, please see the original paper. §.§ Model Architecture Following the Latent Diffusion Model (LDM) <cit.>, SVDM performs generative learning in the latent space instead of the raw pixel space to reduce computational costs. In the following, we briefly recall LDM and then introduce our ConvNeXt-UNet on the latent input. LDM employs a pretrained VAE encoder 𝐄 to encode an image v ∈ℝ^3× H× W into a latent embedding z = 𝐄(v) ∈ℝ^c× h× w. It gradually adds noise to z in the forward process and then denoises to predict z in the reverse process. Finally, LDM uses a pre-trained VAE decoder 𝐃 to decode z into a high-resolution image v = 𝐃(z). 
Both the VAE encoder and decoder are kept fixed during training and inference. Since h and w are smaller than H and W, performing the diffusion process in the low-resolution latent space is more efficient than in the pixel space. In this work, we adopt the efficient diffusion process of LDM. Given an image I_A sampled from domain A, we first extract the latent feature L_A, and then the proposed SVDM process maps L_A to the corresponding latent representation L_A→ B in domain B. Finally, the translated image I_A→ B can be generated by the decoder of the pre-trained VQGAN <cit.>. As shown in Fig. 5, the SVDM model simply concatenates the two images along the channel dimension and uses the standard U-Net <cit.> architecture with ConvNeXt residual blocks <cit.> for upsampling and downsampling the activations, reaching large receptive fields with stacked convolutions to take advantage of the context information in images. This “Concat-UNet” has found significant success in prior work on image-to-image diffusion models. In addition, we introduce multiple attention blocks at various resolutions, in light of the discovery that global interaction significantly improves reconstruction quality on much larger and more diverse datasets at higher resolutions. §.§ Loss Functions The loss function consists of an RGB L1 loss ℒ_L1, an RGB SSIM loss ℒ_ssim, and the perceptual loss ℒ_latent from <cit.>. The total loss ℒ is given by: ℒ=λ_L1ℒ_L1+λ_ssimℒ_ssim+λ_latentℒ_latent where λ_L1, λ_ssim and λ_latent are hyperparameters weighting the respective loss terms. §.§.§ RGB L1 and SSIM Loss. The L1 and SSIM <cit.> losses: ℒ_L1 = 1/(3HW)∑ |Î_tgt - I_tgt| ℒ_ssim = 1 - SSIM(Î_tgt, I_tgt) encourage the synthesized target image Î_tgt to match the ground truth I_tgt. Both Î_tgt and I_tgt are 3-channel RGB images of size H× W. §.§.§ Perceptual Loss. The perceptual compression model is based on previous work <cit.> and consists of an autoencoder trained by a combination of a perceptual loss <cit.> and a patch-based <cit.> adversarial objective <cit.>. This ensures that the reconstructions are confined to the image manifold by enforcing local realism and avoids the blurriness introduced by relying solely on pixel-space losses such as L2 or L1 objectives. ℒ_latent = 1/2∑_j=1^J[(μ_j^2+σ_j^2)-1-logσ_j^2] § EXPERIMENTS §.§ Datasets For novel view synthesis and 3D detection, we perform both quantitative and qualitative comparisons with state-of-the-art methods on the KITTI dataset <cit.>. §.§.§ View Synthesis Following the suggestions of Tulsiani et al. in <cit.>, we randomly choose 22 sequences from the whole dataset for training, and the remaining 8 sequences are equally divided into a validation set and a test set. The training set contains about 6000 stereo pairs and the test set contains 1079 image pairs; the images contain a large number of occlusions, such as cars, pedestrians, traffic lights, etc. We use the left camera image as the source image and the other as the target view image. Following <cit.>, we crop 5% from all sides of all images before computing the scores in testing. §.§.§ 3D Detection The KITTI 3D object detection benchmark comprises 7481 training images and 7518 test images, along with the corresponding point clouds, captured around a midsize city as well as in rural areas and on highways. KITTI provides 3D bounding box annotations for 3 classes: Car, Cyclist and Pedestrian. 
Commonly, the training set is divided into training split with 3712 samples and validation split with 3769 samples following that in <cit.>, which we denote as KITTI train and KITTI val, respectively. All models in ablation studies are trained on the KITTI train and evaluated on KITTI val. For the submission of our methods, the models is trained on the 7481 training samples. Each object sample is assigned to a difficulty level, Easy, Moderate or Hard according to the object is bounding box height, occlusion level and truncation. §.§ Evaluation Metrics §.§.§ Novel View Synthesis To measure the quality of the generated images, we compute the Structural Similarity Index (SSIM), PSNR, and the recently proposed LPIPS perceptual similarity. We use an ImageNet-trained VGG16 model when computing the LPIPS score. §.§.§ Stereo 3D Detection We use two evaluation metrics in KITTI-3D, i.e., the IoU of 3D bounding boxes or BEV 2D bounding boxes with average precision (AP) metric, which are denoted as AP_3D and AP_BEV, respectively. Following the monocular 3D detection methods <cit.>, we conduct the ablation study on Car. KITTI-3D uses the A P |_ R40 with 40 recall points instead of A P |_ R11 with 11 recall points from October 8, 2019. We report all the results in A P |_ R40. §.§ Implementation Details §.§.§ Novel View Synthesis In the training phase, the number of time steps was set to 1000, and we used an NVIDIA Tesla V100 GPU with 32G of memory, and the batch size was set to 16 with the same pre-trained VQGAN model as the Latent Diffusion model, and 45 epochs were performed in 3 days. For optimization, we use AdamW <cit.> optimizer with β (0.9, 0.999), weight decay 0.1 and dropout rate 0.1, and an exponential moving average (EMA) optimizer with a coefficient of 0.9999. In the inference phase, we used 1000 sampling steps for the methods without acceleration and for the methods with acceleration, the sampling steps were method dependent, as described in the ablation experiments. §.§.§ Stereo 3D Detection We use LIGA-Stereo, stereoyolo and stereocenternet as baselines for stereoscopic 3D detection according to the method. we use 2 NVIDIA RTX3090 GPU to train this networks. the LIGA-Stereo batch size is set to 2, the stereoyolo batch size is set to 2 and the stereocenternet batch size is set to 2. We use one model to detect different classes of objects (Car, Cyclist and Pedestrian) simultaneously, and other hyperparameter settings are the same as LIGA-Stereo, YOLOStereo3D and Stereo-CenterNet. §.§ Single-image-based View Synthesis Results §.§.§ Quantitative Results To prove the effectiveness of our approach, we conduct a large number of comparative experiments. The compared algorithms include DAM-CNN <cit.>, Tulsiani et. al. <cit.>, MPI <cit.> and MINE <cit.>. The quantitative experimental results are shown in Table 1. The test resolution of the images of all our approaches is set to 256× 768 to make a fair comparison. Our approach is significantly better than DAM-CNN,Tulsiani et. al., MPI. The PSNR of our approach can surpass SOTA after adopting EMSA and feature-level parallaxaware loss, and the SSIM and LPIPS scores are slightly inferior to SOTA <cit.>. §.§.§ Qualitative Results We also qualitatively demonstrate our superior view synthesis performance in Fig. 7. Obviously, Our approach has achieved competitive performance to the state-of-the-art method and synthesizes more realistic images with fewer distortions and artifacts compared with other methods. 
Compared to <cit.>, we generate more realistic images with fewer artefacts and shape distortions. The visualization verifies our ability to model the geometry and texture of complex scenes. §.§ 3D Object Detection Results In this section, we evaluate the three proposed pseudo-stereo variants, BBDM, View-operator and one-step generation, on the KITTI test and val sets, and compare them with other monocular 3D detectors. §.§.§ Quantitative Results The results reported in Table 2 and Table 3 show that the proposed method outperforms all other methods by a large margin in 3D object detection and 3D localization. Even when we only use BBDM as the basic diffusion model, the performance on both tasks at an IoU threshold of 0.7 is significantly better than the most advanced methods, such as MonoRCNN and DD3D. Generally speaking, better image generation improves the performance of 3D detection and localization. We can see that the advantages of the View diffusion model are more significant than those of BBDM. Under the same hyperparameters, such as the learning rate, pixel mean, backbone network, and anchor box sizes, the View method performs better, indicating that the View structure has better generalization capability for 3D object detection. At an IoU threshold of 0.7, compared with our baseline method PS-IM, our method is slightly lower on easy samples, but the performance of the 3D detection and localization tasks on hard and moderate samples is greatly improved, by roughly 1 and 1-2 points, respectively; these improvements prove the effectiveness of the method. We attribute the small gaps on easy samples to limited constraints. Recall that our method directly uses the diffusion model to generate the right image. Although we add image translation as a constraint, compared with depth maps and geometric priors, the generation process is not completely controllable. Without matched textures, the background and occluded objects inevitably interfere with novel view generation. The ConvNeXt-UNet proposed in this article can alleviate this problem, as shown in the ablation study, but it is not perfect. In addition, we report the evaluation results on the KITTI validation set. As shown in Table III, the method is clearly better than our previous method D4LCN and the latest methods, such as DDMP-3D, CaDDN, MonoFlex, and GUPNet. Compared with the baseline methods PS-IM and PS-FLD, there is only a small gap on easy and moderate samples, and an improvement of about two points on hard samples. §.§.§ Qualitative Results We present the qualitative results for a number of scenarios in the KITTI dataset in Figure 8. We show the corresponding stereo boxes, 3D boxes, and bird's-eye views on the left and right images. It can be observed that, in general street scenes, the proposed method can accurately detect vehicles in the scene, and the detected 3D boxes align well with the LiDAR point cloud. It also detects a few small objects that are occluded or far away. §.§ Ablation Study In this part, we present the ablation study to verify the effectiveness of some important components of the proposed method. To investigate the effects of the different components of our approach, we set up several different versions, as listed below: ∙Pedestrian and Cyclist 3D detection results. ∙Whether to speed up sampling. ∙Setting of the hyperparameter s in BBDM. ∙Latent+U-NET. ∙Latent+ConvNeXt-UNet. ∙Image size. ∙Different stereo detectors. 
∙Different optimizers. ∙Performance of SSIM Loss. Pedestrian and Cyclist 3D detection results. In the KITTI object detection benchmark, the training samples for Pedestrian and Cyclist are limited; hence, detecting them is more difficult than detecting the Car category. Because most image-based methods do not report evaluation results for Pedestrian and Cyclist, we only report the results available in the original papers. We present the pedestrian and cyclist detection results on the KITTI validation set in Table 4; SVDM achieves the best detection results except for easy pedestrian samples. The remaining ablation experiments have not yet been completed due to time constraints. § CONCLUSION AND FUTURE SCOPE We propose SVDM, a new pseudo-stereo 3D object detection method, and we solve the single-view novel view synthesis problem as an image-to-image translation problem by combining it with recent diffusion models. The proposed SVDM achieves the best performance without geometric priors, depth estimation, or LiDAR supervision, demonstrating that image-based methods have great potential in 3D detection. However, the proposed framework does not yet allow end-to-end training. Therefore, we will try to further refine and simplify the framework toward end-to-end training while preserving the detection performance. Another major limitation of the method is that the novel view generation falls short of the SOTA methods; in the future, we will add new components to this method to further improve the accuracy of the novel view generation task. § ACKNOWLEDGMENTS This research work is supported by the Big Data Computing Center of Southeast University.
http://arxiv.org/abs/2307.03312v1
20230706215155
Reconstruction of generic anisotropic stiffness tensors from partial data around one polarization
[ "Maarten V. de Hoop", "Joonas Ilmavirta", "Matti Lassas", "Anthony Várilly-Alvarado" ]
math.DG
[ "math.DG", "math.AG", "math.AP", "Primary 86-10, 86A22, 14D06, Secondary 53Z05, 14P25, 14-04" ]
We study inverse problems in anisotropic elasticity using tools from algebraic geometry. The singularities of solutions to the elastic wave equation in dimension n with an anisotropic stiffness tensor have propagation kinematics captured by so-called slowness surfaces, which are hypersurfaces in the cotangent bundle of ^n that turn out to be algebraic varieties. Leveraging the algebraic geometry of families of slowness surfaces we show that, for tensors in a dense open subset in the space of all stiffness tensors, a small amount of data around one polarization in an individual slowness surface uniquely determines the entire slowness surface and its stiffness tensor. Such partial data arises naturally from geophysical measurements or geometrized versions of seismic inverse problems. Additionally, we explain how the reconstruction of the stiffness tensor can be carried out effectively, using Gröbner bases. Our uniqueness results fail for very symmetric (e.g., fully isotropic) materials, evidencing the counterintuitive claim that inverse problems in elasticity can become more tractable with increasing asymmetry. Photoinduced Anomalous Supercurrent Hall Effect I. G. Savenko August 1, 2023 =============================================== § INTRODUCTION Inverse problems in anisotropic elasticity are notoriously challenging: a lack of natural symmetry leaves one with few tools to approach them. In this paper we embrace and harness asymmetry with the help of algebraic geometry, and develop a method to address inverse problems around the reconstruction of anisotropic stiffness tensors from a relatively small amount of empirical data. Along the way we prove a surprisingly strong uniqueness result in anisotropic elastic inverse problems, aided by the specific properties of albite, an abundant feldspar mineral in Earth's crust. We view our results as the beginning of a fruitful interaction between the fields of inverse problems and modern algebraic geometry. Microlocal analysis, describing the geometry of wave propagation, and algebraic geometry, describing the geometry of zero sets of polynomials, become linked through the slowness polynomial, which is the determinant of the principal symbol of the elastic wave operator. The vanishing set of this polynomial is the slowness surface, which describes the velocities of differently polarized waves in different directions. Notably, we show that for a generic anisotropic material * the polarizations of waves travelling through the material, corresponding to different sheets of the slowness surface, are coupled: a small Euclidean open subset of the slowness surface for a single polarization determines the whole slowness surface for all polarizations; * one can reconstruct the stiffness tensor field of the material from a slowness polynomial. §.§ The model We work in ^n for any n; physical applications arise typically when n=2 or 3. §.§.§ Waves in anisotropic linear elasticity Linear elasticity posits that when a material is strained from a state of equilibrium, the force returning the system to equilibrium depends linearly on the displacement experienced. The displacement is described by the strain tensor (a symmetric n× n matrix) and the restoring force by a stress tensor (also a symmetric n× n matrix). Hooke's law, valid for small displacement to good accuracy, states that stress depends linearly on strain, and the coefficients of proportionality are gathered in a stiffness tensor. 
To be within the framework of linear elasticity, the displacement should be small, but one can also see the linear theory as a linearization of a more complicated underlying model. Thus, the stiffness tensor of a material at a point x ∈^n is a linear map c = c(x)^n× n→^n× n mapping strain ∈^n× n (describing infinitesimal deformations) to stress σ∈^n× n (describing the infinitesimal restoring force), given in components as σ_ij = ∑_k,l=1^n c_ijkl_kl. Since both (σ_ij) and (_kl) are symmetric n× n matrices, we must have c_ijkl = c_jikl = c_ijlk. This is the so-called minor symmetry of the stiffness tensor. In addition, the stiffness tensor is itself a symmetric linear map between symmetric matrices, which can be encoded in components as c_ijkl = c_klij. This condition is known as the major symmetry of the stiffness tensor. Scaling by the density ρ = ρ(x) of a material does not affect any of these properties, which leads to the reduced stiffness tensor a = a(x), whose components are a_ijkl := ρ^-1c_ijkl. Finally, a stiffness tensor is positive definite; combining the symmetries (<ref>) and (<ref>) and positivity leads to the following definition formalizing the properties just described. We say that a = (a_ijkl)∈^n× n× n× n is a stiffness tensor if a_ijkl = a_jikl = a_klij. If, in addition, for every non-zero symmetric matrix A∈^n× n we have ∑_i,j,k,l=1^n a_ijklA_ijA_kl > 0, then we say that a is positive. The set of stiffness tensors in ^n× n× n× n is denoted n, and the subset of positive ones is denoted n. Both sets carry a natural Euclidean topology. Denote the displacement from equilibrium of a material with stiffness tensor c at point x∈^n and time t∈ by u(x,t)∈^n. The time evolution of u(x,t) is governed by the elastic wave equation ∑_j,k,l=1^n ∂/∂x_j( c_ijkl(x) ∂/∂x_k u_l(x,t) ) - ρ(x)∂^2/∂ t^2u_i(x,t) = 0, which can also be written as □ u=0, where □=□_c,ρ is the matrix-valued elastic wave operator. §.§.§ Singularities and the principal symbol Let ξ∈ T^*_x^n be the momentum variable dual to x ∈^n and let ω∈ be the dual variable of time t∈. Following the terminology of microlocal analysis, a function u(x,t) is said to be singular at a point (x_0,t_0) if u(x,t) is not a C^∞-smooth function in any neighborhood of the point (x_0,t_0). A more precise description of singularities is given by the wave front set WF(u) of the function u(x,t), which consists of the points (x,ξ,t,ω) for which u(x,t) is non-smooth at (x,t)∈^n× in the direction (ξ,τ)∈^n×. See <cit.> for more details. The singularities of u(x,t) propagate by the null bicharacteristic flow of the matrix-valued principal symbol of □: σ(□)_il(x,ξ,t,ω) = -∑_j,k=1^nc_ijkl(x)ξ_jξ_k + ρ(x)δ_ilω^2. A propagating singularity is annihilated by the principal symbol, so a point (x,t,ξ,ω)∈ (^n×)× (^n×) can be in the wave front set of a solution u(x,t) of the elastic wave equation □ u(x,t)=0 only when ( σ(□)(x,ξ,t,ω) ) = 0. Due to the homogeneity of the equation of motion, the frequency of oscillation has no effect on the propagation of singularities. It is therefore convenient to replace the momentum ξ∈ T^*_x^n with the slowness vector p= (p_1,…,p_n) := ω^-1ξ∈ T^*_x^n. To make (<ref>) explicit, we recall that the Christoffel matrix Γ(x,p) is the n× n matrix whose il-th entry is Γ(x,p)_il = ∑_j,k=1^na_ijkl(x)p_jp_k. By (<ref>) and (<ref>), the Christoffel matrix is symmetric. If the stiffness tensor is positive, then the Christoffel matrix is positive definite. 
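A hedged numerical sketch (not from the paper) of the objects just defined: building the Christoffel matrix Γ(p)_il = ∑_j,k a_ijkl p_j p_k from a (reduced) stiffness tensor stored as an n×n×n×n array, checking the minor and major symmetries, and verifying that Γ(p) is symmetric positive definite for a sample slowness covector p. The isotropic tensor used below is only the simplest (and, for the paper's purposes, most degenerate) example; the Lamé-type constants are illustrative values.

```python
import numpy as np

def is_stiffness_tensor(a, tol=1e-12):
    """Check the minor symmetry a_ijkl = a_jikl and the major symmetry a_ijkl = a_klij."""
    return (np.allclose(a, a.transpose(1, 0, 2, 3), atol=tol)
            and np.allclose(a, a.transpose(2, 3, 0, 1), atol=tol))

def christoffel(a, p):
    """Gamma_il = sum_{j,k} a_ijkl p_j p_k."""
    return np.einsum("ijkl,j,k->il", a, p, p)

# Isotropic example: a_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
n, lam, mu = 3, 2.0, 1.0
I = np.eye(n)
a = (lam * np.einsum("ij,kl->ijkl", I, I)
     + mu * (np.einsum("ik,jl->ijkl", I, I) + np.einsum("il,jk->ijkl", I, I)))
p = np.array([0.3, -0.2, 0.9])
G = christoffel(a, p)
print(is_stiffness_tensor(a), np.all(np.linalg.eigvalsh(G) > 0))  # both True
```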
With this notation, the principal symbol becomes simply σ(□)=ω^2ρ(x)(Γ(x,p)-I_n), where I_n is the n× n identity matrix, and condition (<ref>) can be rewritten as det( Γ(x,p)-I_n ) = 0. See Figure <ref> for an example of the set of p's that satisfy this condition. Equation (<ref>) can also be argued physically by freezing the stiffness tensor c(x) and density ρ(x) to constant values c(x_0) and ρ(x_0) and writing a plane wave Ansatz for the displacement field. This is less rigorous but relies on the same underlying ideas and leads to the same condition. The principal symbol can also be understood as a description of how the operator acts on plane waves. §.§.§ Polarization, slowness, and velocity The set of points S_x = {p∈ T^*_x^n : det( Γ(x,p)-I_n ) = 0 } is called the slowness (hyper)surface at the point x. A point p∈ T^*_x^n belongs to the slowness surface exactly when 1 is an eigenvalue of Γ(x,p); since the Christoffel matrix is 2-homogeneous in p, the slowness surface encodes the eigenvalue information of the Christoffel matrix. See Figure <ref> for an example of a slowness surface when n=2, where S_x is a curve. A priori, the slowness surface contains no information about the eigenvectors of the Christoffel matrix. These vectors are the polarizations of singularities and correspond to the direction of oscillation, whereas the slowness vector corresponds roughly to the direction of propagation. Our method is suited for situations where we observe only the singularities in space but not their polarizations. Singularities without polarization can aptly be called unpolarized phonons; the phonon is the particle (or wave packet) corresponding to the displacement field in wave–particle duality. The eigenvalues of the Christoffel matrix give rise to Hamiltonians — one for each polarization — that determine the time evolution of unpolarized phonons. The slowness surface is the union of the unit level sets of these Hamiltonians; it might not split cleanly into n branches and n well-defined and smooth Hamiltonians, due to degenerate eigenvalues, so we treat the slowness surface as a single object. §.§.§ An algebraic view Considering the slowness polynomial det(Γ(x,p)-I_n) as a polynomial of degree 2n in the variables p_1,…,p_n, the slowness surface is an object of a very algebraic nature. Our core algebraic result, Theorem <ref>, is that for generic a, the slowness surface is an irreducible algebraic variety. As our focus is on the analysis on a fiber of the cotangent bundle rather than the whole bundle, from now on we consider the point x∈^n fixed and drop it from the notation. Thus, for example, the Christoffel matrix will henceforth be denoted Γ(p). Similarly, we call the reduced stiffness tensor a=ρ^-1c the stiffness tensor for simplicity. §.§ Departure point The goal of a typical inverse problem in anisotropic linear elasticity is to reconstruct the stiffness tensor field from some kind of boundary measurement — or to prove that the field is uniquely determined by ideal boundary data. A good first step is to analyze the propagation of singularities microlocally from travel time data or hyperbolic Cauchy data[The hyperbolic Cauchy data is the set of all Dirichlet and Neumann boundary values of all solutions to the elastic wave equation. It is the graph of the Dirichlet-to-Neumann map.]. This turns the analytic inverse problem into a geometric one, where the task is to recover the geometry governing the propagation of singularities.
In the anisotropic elastic setting this geometry is quite complicated. The innermost branch of the slowness surface, called the qP or quasi-pressure branch (see Figure <ref>), determines a Finsler geometry when the highest eigenvalue of the Christoffel matrix is non-degenerate. Finding the Finsler function on some subset of the tangent bundle amounts to finding a subset of the slowness surface at some or all points. For other polarizations (qS), the unit cosphere — which is a branch of the slowness surface — may fail to be convex. It is also common for the slowness surface to have singular points where two branches meet (see <cit.> for details), which is an obstruction to having a smooth and globally defined Finsler geometry for slower polarizations. Despite these issues, at most points x ∈^n and in most directions p∈ T_x^*^n the Christoffel matrix Γ(x,p) has n different eigenvalues. In the neighborhood of such a point (x,p) on the cotangent bundle the elastic wave equation (<ref>) can be diagonalized microlocally and any solution splits nicely into n different polarizations. It is also this non-degenerate setting where our description of propagation of singularities is valid without additional caveats. Ideally, the solution of such a geometric inverse problem produces a qP Finsler geometry in full. The unit cosphere of this geometry at each point is the qP branch of the slowness surface, e.g., <cit.>. In some cases the recovery is not full but only a part of the slowness surface can be reconstructed; see <cit.>. In several applications one measures only the arrival times of the fastest waves which give the travel times related to the qP polarized waves. Further investigation will surely lead to mathematical results that provide full or partial information of the slowness surface of a single polarization. For our purposes, it is irrelevant where the partial knowledge of the slowness surface comes from, only that this information is indeed accessible. Our contribution is to take the next step: We prove that generically a small subset of one branch[It is unimportant whether the small patch of the slowness surface we start with is qP or some other polarization. We mainly refer to qP only because it is the easiest polarization to measure both physically and mathematically, corresponding to the fastest waves and a well-behaved Finsler geometry. ] of the slowness surface determines uniquely the entire slowness surface (with all branches) and the stiffness tensor field. No polarization information is required as an input to our methods. Taking the determinant of the principal symbol amounts to ignoring polarization information. However, we can fill in the polarizations of the singularities after reconstructing the stiffness tensor field. §.§ Algebraic goals The n different branches of the slowness surface, each corresponding at least locally to a polarization, are coupled together by the simple fact that they are all in the vanishing set of the same polynomial — the slowness polynomial. If we can show that this polynomial is irreducible (i.e., cannot be written as a product of two polynomials in a non-trivial way), then a small open subset of the slowness surface can be completed to the whole slowness surface by taking its Zariski closure, i.e., taking the vanishing set of a polynomial of the right shape that interpolates the known small set. 
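Concretely (an elementary reformulation, included for the reader's convenience): the slowness polynomial has a fixed monomial support and hence finitely many unknown coefficients, nine of them in dimension 2 (see the case study in <ref>). Every point of the known patch of the slowness surface imposes one linear condition on these coefficients, so generically a finite number of sampled points already determines the interpolating polynomial up to a common scalar factor, and, once irreducibility is known, the Zariski closure of the patch is the full zero set of this polynomial.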
Physically, this means that a small (Euclidean) open subset of the slowness surface, e.g., an open subset of the sheet of the slowness surface associated to qP-waves, determines both the entire sheet, as well as the sheets of the slowness surface associated to qS-polarized waves. Even though the sheet associated to qP-polarized waves and the sheets associated to qS-polarized waves are disjoint in the Euclidean topology, we will show that for stiffness tensors in a generic set these sheets all lie in the same connected component in the complex Zariski topology of the slowness surface. We note that determining all the sheets of a slowness surfaces from a small open subset of the qP-sheet is impossible for some stiffness tensors. For example, fully isotropic stiffness tensors parametrized by the two Lamé parameters, which describe many real materials, give rise to the slowness polynomials of the form P(p) = (c_P^2p^2-1) (c_S^2p^2-1)^n-1, where c_P and c_S are the pressure and shear wave speeds of the material respectively. Such a polynomial is manifestly reducible. The slowness surface for such materials consists of two concentric spheres, the inner one of which is degenerate if n>2. The two radii are independent. If we know a small subset of the pressure sphere, taking the Zariski closure completes it into the whole sphere but contains no information about the other sphere. This, however, is an exceptional property of a highly symmetric stiffness tensor. We therefore set out to prove that the slowness polynomial is irreducible for most stiffness tensors. Once the whole slowness surface or slowness polynomial is recovered, we can use it to reconstruct the stiffness tensor. The reason why this works is less obvious, but it will be explained in <ref> once we lay out some preliminaries. From the point of view of inverse problems in analysis, geometry, and linear elasticity, we need two algebraic results that are straightforward to state but less straightforward to prove. These key results are given in <ref> below, with the definitions for Theorem <ref> detailed in <ref>. The algebraic results mentioned below hinge on the more technical results given in <ref>. §.§ Main results We have two main results on inverse problems: [Uniqueness of stiffness tensor from partial data] Let the dimension of the space be n∈{2,3}. There is an open and dense subset U⊂n of the set of positive stiffness tensors so that the following holds: If a∈ U, then any non-empty Euclidean relatively open subset of the slowness surface corresponding to a determines the stiffness tensor a uniquely. The next theorem states roughly that, generically, a two-layer model of a planet with piecewise constant stiffness tensor field is uniquely determined by geometric travel time data for rays traversing the interior of the planet. The two layers are a highly simplified model of the Earth with a mantle and a core, both with homogeneous but anisotropic materials. We consider two data types for a two-layer model where in the outer layer Ω∖ω⊂^n the stiffness tensor is equal to A and in the inner layer ω⊂^n the stiffness tensor is equal to a. The first data type, denoted =(ω,Ω,a,A), contains only travel time information between boundary points, while the second data type, denoted =(ω,Ω,a,A), contains also directional information at the boundary. The precise definitions of these data sets are given below in Section <ref>. 
By X_1≈ X_2 for two subsets X_i of the same Euclidean space, we mean that there are dense open subsets U_i⊂ X_i for i=1,2 so that U_1=U_2. See <ref> for details of the various kinds of rays and data, and the notion of admissibility. [Two-layer model] Let n∈{2,3}. For both i=1,2, let ω_i, Ω_i⊂^n be nested domains such that Ω_1=Ω_2Ω. There is an open and dense subset U of stiffness tensors in the space of admissible pairs such that the following holds. For both i=1,2, suppose that a_i, A_i∈ U are admissible nested stiffness tensors. If (ω_1,Ω,a_1,A_1) ≈(ω_2,Ω,a_2,A_2), then A_1=A_2 and ω_1=ω_2. If (ω_1,Ω,a_1,A_1) ≈(ω_2,Ω,a_2,A_2), then A_1=A_2, ω_1=ω_2, and additionally a_1=a_2. * The equivalence ≈ ensures that exceptional rays play no role — possible exceptions include gracing rays, zero transmission or reflection coefficients, and cancellation after multipathing. All these issues are typically rare; e.g. the transmission and reflection coefficients are in many cases analytic functions with isolated zeros <cit.>. If one assumes that the full data sets are equal, then one might prove results like ours by only comparing which rays are missing from the data or behave exceptionally. To ensure that the conclusion is reached using well-behaved rays and that the omission or inclusion of a small set of rays (whether well or ill behaved) is irrelevant, we will only assume that the data sets are only almost equal. * The second part of the statement should be seen in light of the first one. Namely, the second assumption implies the first one, so the stiffness tensor A is uniquely determined by the data. Given this stiffness tensor in the outer layer, the knowledge of the slowness vector is almost the same as knowing the velocity vector or the tangential component or length of either one. Full directional data can be used to find which slowness vectors are admissible, and that in turn generically determines the stiffness tensor. The exact details are unpleasant, so the statement of Theorem <ref> has been optimized for readability rather than strength. The result as stated above can be adapted to other measurement scenarios. * All polarizations are included in the data of Theorem <ref>. The proof is based only on the fastest one (qP), but ignoring the other ones is not trivial. Even if incoming and outgoing waves at the surface are qP, there can be segments in other polarizations due to mode conversions inside. * Theorem <ref> was stated geometrically. Geometric data of this kind may be obtained from boundary data for the elastic wave equation (<ref>). It is not uncommon in geophysics to work directly with geometric ray data; see e.g. <cit.>. These results on inverse problems hinge on two algebraic results: [Generic irreducibility] The slowness polynomial associated to a generic stiffness tensor in dimension n ∈{2,3} is irreducible over . The word generic in Theorem <ref> is used in the sense of algebraic geometry: Let m be the number of distinct components of a reduced stiffness tensor. A generic set of stiffness tensors is a subset of ^m whose complement is a finite union of algebraic subsets of ^m of dimension ≤ m-1, each of which is defined by a finite set of polynomials. In our case, this complement parametrizes the collection of stiffness tensors giving rise to reducible slowness surfaces; it is not empty: we already know that the slowness polynomial (<ref>) associated to a fully isotropic stiffness tensor is reducible. 
In the spirit of modern algebraic geometry, we prove Theorem <ref> by considering all slowness polynomials at once, in a family f →^k_, where the coordinates at a point y = y(x) ∈^k = ^k() record the coefficients of a single slowness polynomial at x ∈^n, and the corresponding fiber f^-1(y) ⊂ is the slowness surface S_x. The principle of generic geometric integrality, due to Grothendieck, ensures that if the map f satisfies a few technical hypotheses, then there is a Zariski open subset of Y ⊂_^k such that the individual slowness surfaces in f^-1(Y) are irreducible, even over  (equivalently, their corresponding slowness polynomials are -irreducible). A Zariski open subset of _^k is dense for the Euclidean topology, as long as it is not empty. Thus, we must check by hand the existence of a single -irreducible slowness polynomial to conclude. In the case n=3, we use an explicit stiffness tensor, modelling a specific physical mineral, albite, to verify the non-emptiness of the set Y. This task is accomplished by reduction to modulo a suitably chosen prime (see Lemma <ref>—we have included a proof for lack of a reference to this tailored lemma, although we hope it will be useful in other inverse-problem contexts). Our second algebraic result shows that the correspondence between stiffness tensors and slowness polynomials is generically one-to-one: [Generic Unique Reconstruction] The slowness polynomial associated to a generic stiffness tensor in dimension n ∈{2,3} determines the stiffness tensor. We give two proofs of Theorem <ref> in the case n=2. The second proof ensues from studying the following question: Given a polynomial with real coefficients, what conditions must its coefficients satisfy for it to be a slowness polynomial? In other words: can we characterize slowness polynomials among all polynomials? We answer this question when n=2. When n=3, we give a proof of Theorem <ref> that does not rely on a characterization of slowness polynomials among all polynomials, because we lack sufficient computational power to crunch through the symbolic calculations required to complete this characterization. Theorems <ref> and <ref> are proved in <ref>; however, for the benefit of readers without much exposure to algebraic geometry, we explain the principles involved in <ref> in the case n = 2, to avoid clutter. §.§ Orthorhombic stiffness tensors All told, at one end of the symmetry spectrum, a fully isotropic stiffness tensor gives rise to a reducible slowness polynomial, and at the other end, a general fully anisotropic stiffness tensor gives rise to an irreducible slowness polynomial. What happens with stiffness tensors endowed with some symmetry that lies somewhere in between full isotropy and full anisotropy? In other words: Are there classes of stiffness tensors endowed with a small amount of symmetry whose generic member still gives rise to an irreducible slowness polynomial? In <ref>, we show that a slowness surface in dimension 3 associated to a generic orthorhombic stiffness tensor (Definition <ref>) is irreducible: see Theorem <ref>. In contrast with a general fully anisotropic slowness polynomial, a slowness polynomial associated to a material with orthorhombic symmetry can arise from more than one stiffness tensor, as had already been observed by Helbig and Carcione in <cit.>. They gave sufficient conditions for the existence of what they called “anomalous companions” of an orthorhombic stiffness tensor. 
We tighten their results to show that their conditions are also generically necessary: For a generic orthorhombic slowness polynomial P̃(p) (<ref>) there are exactly four (not necessarily positive) orthorhombic stiffness tensors that give rise to P̃(p). Following Helbig and Carcione <cit.>, we explain how to verify if an anomalous companion of an orthorhombic stiffness tensor satisfies the positivity condition required by the physical world. Surprisingly, the criterion involves Cayley's cubic surface, a central object in the classical algebro-geometric canon. §.§ Related results Motivated by seismological considerations, inverse boundary value problems in elasticity have been studied since 1907, when Wiechert and Zoeppritz posed them in their paper “Über Erdbebenwellen”(On Earthquake Waves) <cit.>; see also <cit.>. The first breakthrough results in elastostatics for isotropic media were by Nakamura and Uhlmann <cit.>, followed by results by Eskin and Ralston <cit.> for full boundary data and Imanuvilov, Uhlmann and Yamamoto <cit.> for partial boundary data. Stefanov, Uhlmann and Vasy <cit.> studied recovery of smooth P- and S-wave speeds in the elastic wave equation from knowledge of the Dirichlet-to-Neumann map in the isotropic case, see also <cit.> on the reconstruction of the density tensor. Beretta, Francini and Vessella <cit.> studied the stability of solutions to inverse problems. Uniqueness results for the tomography problem with interfaces, again, in the isotropic case, in the spirit of Theorem <ref>, were considered by Caday, de Hoop, Katsnelson and Uhlmann <cit.>, as well as by Stefanov, Ulhmann and Vasy <cit.>. The related inverse travel-time problem (for the corresponding Riemannian metric) has been studied in isotropic media using integral geometry in <cit.> and metric geometry in <cit.>. Anisotropic versions of the dynamic inverse boundary value problem have been studied in various different settings. Rachele and Mazzucato studied the geometric invariance of elastic inverse problems in <cit.>. In <cit.>, they showed that for certain classes of transversely isotropic media, the slowness surfaces of which are ellipsoidal, two of the five material parameters are partially determined by the dynamic Dirichlet-to-Neumann map. Before that, Sacks and Yakhno <cit.> studied the inverse problem for a layered anisotropic half space using the Neumann-to-Dirichlet map as input data, observing that only a subset of the components of the stiffness tensor can be determined expressed by a “structure” condition. De Hoop, Nakamura and Zhai <cit.> studied the recovery of piecewise analytic density and stiffness tensor of a three-dimensional domain from the local dynamic Dirichlet-to-Neumann map. They give global uniqueness results if the material is transversely isotropic with known axis of symmetry or orthorhombic with known symmetry planes on each subdomain. They also obtain uniqueness of a fully anisotropic stiffness tensor, assuming that it is piecewise constant and that the interfaces which separate the subdomains have curved portions. Their method of proof requires the use of the (finite in time) Laplace transform. Following this transform, some of the techniques are rooted in the proofs of analogous results for the inverse boundary value problem in the elastostatic case <cit.>. 
Cârstea, Nakamura and Oksanen <cit.> avoid the use of the Laplace transform and obtain uniqueness, in the piecewise constant case, closer to the part of the boundary where the measurements are taken for shorter observation times and further away from that part of the boundary for longer times. Under certain conditions, the dynamic Dirichlet-to-Neumann map determines the scattering relation, allowing a transition from analytic to geometric data. Geometric inverse problems in anisotropic elasticity have received increasing attention over the past few years. In the case of transversely anisotropic media the elastic parameters are determined by the boundary travel times of all the polarizations <cit.>. A compact Finsler manifold is determined by its boundary distance map <cit.>, a foliated and reversible Finsler manifold by its broken scattering relation <cit.>, and one can reconstruct the Finsler geometry along a geodesic from sphere data <cit.>. Linearizing about the isotropic case, that is, assuming “weak” anisotropy, leads to the mixed ray transform for travel times between boundary points. De Hoop, Saksala, Uhlmann and Zhai <cit.> proved “generic” uniqueness and stability for this transform on a three-dimensional compact simple Riemannian manifold with boundary, characterizing its kernel. Before that De Hoop, Saksala and Zhai <cit.> studied the mixed ray transform on simple 2-dimensional Riemannian manifolds. Linearizing about an isotropic case but only with conformal perturbations leads to a scalar geodesic ray transform problem on a reversible Finsler manifold, and the injectivity of that transform was established in <cit.> in spherical symmetry. Assuming lack of symmetry naturally leads to the occurrence of singular points in the slowness surface. This is inherent in exploiting algebraic geometry to obtain the results in this paper. However, the singular points lead to fundamental complications in the application of microlocal analysis to a parametric construction revealing the geometry of elastic wave propagation, see <cit.>. The points are typically associated with conical refraction <cit.>. §.§ Outline The paper is organized as follows. In <ref> we explain the general algebro-geometric framework underlying the proofs of Theorems <ref> and <ref> in dimension 2, where the number of parameters is small, making it easier to digest the ideas involved. We pivot in <ref>–<ref> to the study of inverse problems, setting up precise definitions for Theorems <ref> and <ref> in <ref> and giving proofs for these theorems in <ref>. In <ref> we prove Theorems <ref> and <ref>, as well as a version of Theorem <ref> for stiffness tensors with orthorhombic symmetry. §.§ Acknowledgements We thank Mohamed Barakat, Olivier Benoist, Daniel Erman, Bjorn Poonen, and Karen Smith for useful discussions around the algebro-geometric content of the paper. M.V. de H. was supported by the Simons Foundation under the MATH + X program, the National Science Foundation under grant DMS-2108175, and the corporate members of the Geo-Mathematical Imaging Group at Rice University. J. I. was supported by Academy of Finland grants 332890 and 351665. M. L. was supported by Academy of Finland grants 284715 and 303754. J. I. and M. L. were also supported by the Centre of Inverse Modelling and Imaging. A. V.-A. was partially supported by NSF grants DMS-1902274 and DMS-2302231, as well as NSF Grant DMS-1928930 while in residence at MSRI/SLMath in Berkeley (Spring 2023). 
He thanks François Charles for hosting him at the Département de Mathématiques et Applications of the École Normale Supérieure in Summer 2022, where part of this paper was written. § ALGEBRO-GEOMETRIC PRINCIPLES: A CASE STUDY IN DIMENSION 2 To illustrate how algebraic geometry bears on inverse problems in anisotropic elasticity, we consider a two-dimensional model, where a slowness surface is in fact a curve. An anisotropic stiffness tensor in this case is determined by six general real parameters: although such a tensor a = (a_ijkl)∈^16 has 16 components, once we take into account the major and minor symmetries of the tensor, only 6 distinct parameters are left. Following Voigt's notation (see <ref>), they are b_11 = a_1111, b_22 = a_2222, b_12 = a_1122 = a_2211, b_23 = a_2212 = a_2221 = a_1222 = a_2122, b_13 = a_1112 = a_1121 = a_1211 = a_2111, b_33 = a_1212 = a_2112 = a_1221 = a_2121. The corresponding Christoffel matrix is the symmetric 2× 2 matrix Γ(p) with entries Γ(p)_11 = b_11 p_1^2 + 2b_13 p_1p_2 + b_33 p_2^2, Γ(p)_12 = Γ(p)_21 = b_13p_1^2 + (b_33+b_12) p_1p_2 + b_23 p_2^2, Γ(p)_22 = b_33p_1^2 + 2b_23p_1p_2 + b_22p_2^2. The slowness curve is the vanishing set of det(Γ(p) - I_2) in ^2: S := {p∈^2 | det(Γ(p) - I_2) = 0}. The polynomial det(Γ(p) - I_2) has degree 4 in the variables p_1 and p_2, but not every monomial of degree ≤ 4 in p_1 and p_2 appears in it. In fact, det(Γ(p) - I_2) = c_1p_1^4 + c_2p_1^3p_2 + c_3p_1^2p_2^2 + c_4p_1^2 + c_5p_1p_2^3 + c_6p_1p_2 + c_7p_2^4 + c_8p_2^2 + c_9, for some constants c_i ∈, i = 1,…,9. These constants are not arbitrary; they have to satisfy relations like c_9 = 1, or the more vexing -4c_1^2 + 4c_1c_4c_8 - c_1c_6^2 + 8c_1c_7 - 4c_1c_8^2 - c_2^2 - 2c_2c_5 + 2c_2c_6c_8 - c_3c_6^2 - 4c_4^2c_7 + 2c_4c_5c_6 + 4c_4c_7c_8 - c_5^2 - c_6^2c_7 - 4c_7^2 = 0 in order to arise from a stiffness tensor (see <ref> and <ref>). §.§ Goals We want to know two things: * For general choices of the parameters b_ij, the curve S ⊂^2 is irreducible, even over the complex numbers. * For general choices of c_1,…,c_9 corresponding to a slowness polynomial, there is a unique set of b_ij's giving rise to the polynomial (<ref>), and this set of b_ij's can be explicitly computed if we approximate c_1,…,c_9 by rational numbers. We accomplish both of these goals by leveraging powerful results in both the theory of schemes[Schemes over a field form a category that is richer and more flexible than the corresponding category of varieties.], as developed by Alexander Grothendieck, and the application of computational techniques under the banner of Gröbner bases. §.§ Generic Irreducibility To realize our first goal, we must compactify a slowness curve and consider all slowness curves at once, in a family. This allows us to apply a suite of results from scheme theory, including “generic geometric integrality”. So think now of the parameters b_ij as indeterminates, and the entries of Γ(p) as belonging to the polynomial ring A[p_0,p_1,p_2] with coefficients in the polynomial ring A := [b_11,b_12,b_13,b_22,b_23,b_33]. Then the homogenized slowness polynomial is given by P̃(p) = det(Γ(p)-p_0^2I_2) = (b_11b_33-b_13^2)p_1^4 + 2(b_11b_23-b_12b_13)p_1^3p_2 + (b_11b_22-b_12^2-2b_12b_33+2b_13b_23)p_1^2p_2^2 - (b_11+b_33)p_1^2p_0^2 + 2(b_13b_22-b_12b_23)p_1p_2^3 - 2(b_13+b_23)p_1p_2p_0^2 + (b_22b_33-b_23^2)p_2^4 - (b_22+b_33)p_2^2p_0^2 + p_0^4. It is a homogeneous polynomial of degree 4 in the variables p_0, p_1 and p_2, with coefficients in the polynomial ring A.
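Readers who wish to double-check this expansion can do so symbolically; the following is a minimal sketch in magma, assuming only standard multivariate polynomial arithmetic (the variable names are illustrative).

A<b11,b12,b13,b22,b23,b33> := PolynomialRing(Rationals(), 6);
R<p0,p1,p2> := PolynomialRing(A, 3);
// Entries of the Christoffel matrix Gamma(p) in Voigt notation, as above.
G11 := b11*p1^2 + 2*b13*p1*p2 + b33*p2^2;
G12 := b13*p1^2 + (b33 + b12)*p1*p2 + b23*p2^2;
G22 := b33*p1^2 + 2*b23*p1*p2 + b22*p2^2;
// det(Gamma(p) - p0^2*I_2), written out for the 2 x 2 case.
Ptilde := (G11 - p0^2)*(G22 - p0^2) - G12^2;
Ptilde;

The printed polynomial should agree, term by term, with the expansion of P̃(p) displayed above.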
The zero-locus of P̃(p) thus traces a curve in the projective plane ^2_A with homogeneous coordinates (p_0 : p_1 : p_2) and coefficient ring A. Since ^2_A is naturally isomorphic, as an -scheme, to the fiber product ^6_×^2_, the vanishing locus of P̃(p) is also naturally a hypersurface in the product of an affine space over with coordinates b_11,…,b_33 and the projective plane over with homogeneous coordinates (p_0 : p_1 : p_2). Consider the subvariety {p = (p_0:p_1:p_2) ∈^2_A : P̃(p) = 0} = {( (b_11,…,b_33), (p_0:p_1:p_2) ) ∈_^6×_^2 : P̃(p) = 0}, which we call the slowness bundle, and let ι be its inclusion map into _^6×_^2. Composing ι with the projection π_1_^6×_^2 →_^6 gives a fibration f := π_1∘ι from the slowness bundle to _^6 that we call the slowness curve fibration. For a point b = (b_11,…,b_33) ∈^6() = ^6, the fiber f^-1(b) is the curve of degree 4 in the projective plane ^2_ obtained by specializing the parameters b_ij in P̃(p) according to the coordinates of b. A theorem of Grothendieck known as “generic geometric integrality” <cit.> allows us to conclude that the set of points b∈^6 such that the fiber f^-1(b) is irreducible, even over , is an open subset for the Zariski topology of _^6. This leaves two tasks for us: to show that the map f satisfies the hypotheses of <cit.> (i.e., that it is proper, flat, and of finite presentation), and that the open set in _^6 furnished by generic geometric irreducibility is not empty! For the latter, because the target _^6 is an irreducible variety, it suffices to produce a single choice of parameters b_ij such that the corresponding slowness curve is irreducible over . For this final step, we use a standard number-theoretic strategy: reduction modulo a well-chosen prime. To wit, we choose a slowness polynomial P̃(p) with all b_ij∈; to check it is irreducible over , it suffices to show it is irreducible over a fixed algebraic closure  of  (see <cit.>). Furthermore, a putative factorization would have to occur already over a finite Galois field extension K⊂ of , because all the coefficients involved in such a factorization would be algebraic numbers, and therefore have finite degree over . Reducing the polynomial modulo a nonzero prime ideal in the ring of integers _K of K, by applying the unique ring homomorphism →_K/_K =: _ to its coefficients, we would see a factorization of P̃(p) in the residue polynomial ring _[p], namely, the reduction of the factorization that occurs over K. The finite field _ is an extension of the finite field _p with p elements, where ∩ = (p). In Lemma <ref>, we show that if P̃(p) is irreducible over the finite field _p^d of cardinality p^d, where d = deg(P̃(p)), then it is also irreducible over the finite field _, and hence is irreducible over K, hence over , hence over . (In fact, we effectively show that K =.) What makes this strategy compelling is that _p^d is finite, so checking whether P̃(p) is irreducible in _p^d[p] is a finite, fast computation in any modern computer algebra system. Readers versed in algebraic geometry might wonder if it might not be easier to use “generic smoothness” <cit.> to prove that a generic slowness polynomial is irreducible. Unfortunately, in dimensions n ∉{2,4,8}, a slowness surface is always singular <cit.>, and since n = 3 is the only interesting case from a physical point of view, we must avoid using “generic smoothness.”
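To illustrate how quick this check is for the explicit quartic attached to the stiffness tensor b_11 = 20, b_12 = 39, b_13 = -65, b_22 = -16, b_23 = -87, b_33 = 30 used in the examples below (with p = 7 and d = 4), a minimal sketch, assuming magma's Factorization intrinsic for multivariate polynomials over finite fields, could read:

F := GF(7^4);
P2<p0,p1,p2> := PolynomialRing(F, 3);
f := -3625*p1^4 + 1590*p1^3*p2 + 7129*p1^2*p2^2 - 50*p1^2*p0^2 + 8866*p1*p2^3
     + 304*p1*p2*p0^2 - 8049*p2^4 - 14*p2^2*p0^2 + p0^4;
// A single irreducible factor of degree 4 certifies irreducibility over GF(7^4).
Factorization(f);

If the factorization consists of a single factor of degree 4, the lemma below applies and the polynomial is irreducible over the rationals.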
§.§ Irreducibility over the complex numbers vs. connectedness over the reals It is possible for the set S() ⊆^2 of real points of a slowness curve S to be disconnected in the Euclidean topology, even if the algebraic variety S, considered over the complex numbers, is connected in the Zariski topology. For example, taking b_11 = 10, b_12 = 2, b_13 = 3, b_22 = 12, b_23 = 5, b_33 = 20, we obtain the slowness curve (using coordinates x = p_1 and y = p_2): S : 1 - 30 x^2 + 191 x^4 - 16 x y + 88 x^3 y - 32 y^2 + 66 x^2 y^2 + 52 x y^3 + 215 y^4 = 0. This curve has two real connected components (see Figure <ref>). However, as a complex algebraic variety, S is irreducible, hence connected. Its natural compactification in the complex projective plane is a smooth genus 3 complex curve, which is a 3-holed connected 2-dimensional real manifold. §.§ Unique reconstruction Our second goal, unique reconstruction of generic stiffness tensors, has both a theoretical facet and a computational facet, which are in some sense independent. Comparing the coefficients of (<ref>) and (<ref>), after dehomogenizing by setting p_0 = 1, we ideally want to solve the system of simultaneous equations c_1 = (b_11b_33-b_13^2), c_5 = 2(-b_12b_23 + b_13b_22), c_2 = 2(b_11b_23-b_12b_13), c_6 = - 2(b_13+b_23), c_3 = (b_11b_22-b_12^2-2b_12b_33+2b_13b_23), c_7 = (b_22b_33 - b_23^2), c_4 = - (b_11+b_33), c_8 = - (b_22+b_33). That is, given constants c_1,…,c_8, we would like to determine all 6-tuples (b_11,…,b_33) that satisfy (<ref>). To this end, we homogenize the system in a slightly different way than before with a new variable r, so that all the right hand sides are homogeneous polynomials of degree 2: c_1 = (b_11b_33-b_13^2), c_5 = 2(-b_12b_23 + b_13b_22), c_2 = 2(b_11b_23-b_12b_13), c_6 = - 2r(b_13+b_23), c_3 = (b_11b_22-b_12^2-2b_12b_33+2b_13b_23), c_7 = (b_22b_33 - b_23^2), c_4 = - r(b_11+b_33), c_8 = - r(b_22+b_33), c_9 = r^2. This homogenization allows us to define a rational map of complex projective spaces g from ^6_[b_11,…,b_33,r] to ^8_[c_1,…,c_8,c_9], [b_11,…,b_33,r] ↦ [b_11b_33 - b_13^2,…,-r(b_22 + b_33),r^2]. We would like to show that a general nonempty fiber of this map consists of exactly one point; this would mean that among all tuples (c_1,…,c_9) that are possibly the coefficients of a slowness polynomial, most tuples arise from exactly one stiffness tensor. The map g is not defined at points where the right hand sides of (<ref>) simultaneously vanish. Call this locus Π⊂^6. The algebro-geometric operation of blowing up ^6 at Π gives a scheme X := Bl_Π(^6) together with a projection map X →^6 that resolves the indeterminacy locus of g, in the sense that the composition of X →^6 with the rational map g can be extended to a full morphism g̃ X →^8, so that the triangle formed by the blow-up map X →^6, the morphism g̃, and the rational map g commutes. The rational map g is what is usually represented by a dashed arrow. Let Y be the image of g̃. Using upper semi-continuity of the fiber dimension function for the surjective map g̃ X → Y, we show there is a Zariski open subset of Y whose fibers are zero-dimensional. Then, using upper semi-continuity of the degree function for finite morphisms, we show there is a Zariski open subset of Y whose fibers consist of precisely one point. This will complete the proof of generic unique reconstruction of stiffness tensors. As a bonus, we can use an effective version of Chevalley's Theorem <cit.>, implemented in the package ZariskiFrames <cit.>, to compute equations for the image of g̃. This, for example, explains how we arrived at the constraint (<ref>) for the tuple (c_1,…,c_9).
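To make the reconstruction step concrete, a minimal sketch, assuming magma's standard PolynomialRing, ideal and GroebnerBasis intrinsics (variable names are illustrative), sets up the dehomogenized system (<ref>) for the coefficient values c_1,…,c_8 of the explicit quartic treated below and computes a lexicographic Gröbner basis of the resulting ideal:

Q := Rationals();
R<b11,b12,b13,b22,b23,b33> := PolynomialRing(Q, 6, "lex");
// Coefficients c_1,...,c_8 of a dehomogenized admissible slowness polynomial;
// these particular values belong to the quartic discussed in the next subsection.
c1 := -3625; c2 := 1590; c3 := 7129; c4 := -50;
c5 := 8866;  c6 := 304;  c7 := -8049; c8 := -14;
I := ideal< R |
    b11*b33 - b13^2 - c1,
    2*(b11*b23 - b12*b13) - c2,
    b11*b22 - b12^2 - 2*b12*b33 + 2*b13*b23 - c3,
    -(b11 + b33) - c4,
    2*(-b12*b23 + b13*b22) - c5,
    -2*(b13 + b23) - c6,
    b22*b33 - b23^2 - c7,
    -(b22 + b33) - c8 >;
GroebnerBasis(I);

For these values the Gröbner basis should cut out the single tuple (b_11,…,b_33) = (20,39,-65,-16,-87,30), in agreement with the example discussed in the next subsection.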
Tuples of coefficients (c_1,…,c_9) in the image of g̃ are said to give rise to admissible slowness polynomials. §.§ In practice: Use Gröbner bases As a computational matter, given a specific tuple (c_1,…,c_9) stemming from an admissible slowness polynomial, reconstructing its stiffness tensor can be done essentially instantaneously using Gröbner bases. We work over the field of rational numbers so that we can use any one of several computational algebra systems with Gröbner bases packages, e.g., magma, Macaulay2, Singular, Maple, or Sage. Thanks to Buchberger's criterion (see <cit.>), it is possible to check the result of our calculations by hand, albeit laboriously. A (reduced) Gröbner basis for an ideal I ⊂[b_11,b_12,b_13,b_22,b_23,b_33] under the lexicographic ordering b_11 > ⋯ > b_33 is a basis for I whose leading terms generate the ideal consisting of all leading terms of all polynomials in I. In the event that there is exactly one tuple (b_11,b_12,b_13,b_22,b_23,b_33) ∈^6 that satisfies the relations defining I, this Gröbner basis will consist of precisely this tuple. For example, given the admissible slowness polynomial P̃(p) = -3625p_1^4 + 1590p_1^3p_2 + 7129p_1^2p_2^2 - 50p_1^2p_0^2 + 8866p_1p_2^3 + 304p_1p_2p_0^2 - 8049p_2^4 - 14p_2^2p_0^2 + p_0^4, a short piece of magma code <cit.>, along the lines of the sketch displayed in the previous subsection, reconstructs the components of the stiffness tensor. The computation takes less than a millisecond and returns the parameters (b_11,b_12,b_13,b_22,b_23,b_33) = (20,39,-65,-16,-87,30) of the only stiffness tensor that gives rise to the specific polynomial P̃(p) above, as the reader can check. As mentioned above, this kind of Gröbner basis computation is independent of the theoretical result asserting that a generic slowness polynomial arises from a unique stiffness tensor. In fact, these results complement each other nicely: Theorem <ref> implies that a Gröbner basis computation will succeed when applied to a generic slowness polynomial. § THE TWO-LAYER MODEL §.§ Nested domains and stiffness tensors We say that ω, Ω⊂^n are nested domains if they are smooth, strictly convex, and bounded domains such that ω̅⊂Ω. Let a and A∈n be two stiffness tensors associated to the regions ω and Ω∖ω, respectively. We call these tensors admissible nested stiffness tensors if the following conditions hold: (A1) For both tensors the largest eigenvalue of the Christoffel matrix Γ(p) is simple for all p≠0. We refer to the corresponding subset of the slowness surface as the qP-branch. (A2) The qP-branch of the slowness surface of a is inside that of A. (In other words, the slowness surfaces s, S⊂^n of a and A are the boundaries of nested domains in the sense defined above.) The two domains and the stiffness tensors are illustrated in Figure <ref>. If the stiffness tensors a and A are isotropic, then the nestedness condition above simply means that the qP wave speed of a is strictly higher than that of A. If ω and Ω are concentric balls, then the condition is equivalent to the Herglotz condition interpreted in a distributional sense; cf. <cit.>. The Herglotz condition is widely used in the theory of geometric inverse problems as a generalization of the condition that a Riemannian manifold with a boundary has no trapped geodesics. The piecewise constant stiffness tensor field corresponding to a pair of nested domains ω,Ω and admissible nested stiffness tensors a,A is the function Ω→n taking the value a in ω and A in Ω∖ω.
§.§ Admissible rays Intuitively speaking, among all physically realizable piecewise linear ray paths, admissible rays are geometrically convenient ray paths. We make no claims about the amplitudes of the corresponding waves, but we expect most admissible rays to have non-zero amplitudes. We will describe separately the behaviour where the stiffness tensor is smooth and the behaviour at the interfaces ∂ω and ∂Ω. Admissible ray paths will be piecewise linear paths satisfying certain conditions. Suppose first that the stiffness tensor a(x)∈n is smooth. For every (x,p)∈ T^*Ω the Christoffel matrix Γ_a(x,p) has n positive eigenvalues, possibly with repetitions. For any m∈{1,…,n}, let G_a^m⊂ T^*M denote the subset where the m-th eigenvalue λ_a^m(x,p) of Γ_a(x,p) is simple. In this set the eigenvalue defines a smooth Hamiltonian H^m_a(x,p)=1/2[λ_a^m(x,p)^2-1]. An admissible ray path is the projection of an integral curve of the Hamiltonian flow from T^*Ω to the base Ω. (The cotangent vector on the fiber we refer to as the momentum.) In our setting a is constant, so these integral curves are straight lines with constant speed parametrization. The speed depends on direction and polarization (or the eigenvalue index m or the branch of the slowness surface — these are all equivalent). At an interface two ray paths meet. We set two conditions for the incoming and outgoing paths: (P1) Neither path is tangent to the interface. (This is convenient but ultimately unimportant.) (P2) The component of the momentum tangent to the interface is the same for both incoming and outgoing rays. The two meeting rays can be on the same or opposite sides of the interface, corresponding to reflected and refracted rays, respectively. The polarization is free to change. The outer boundary ∂Ω is also an interface. There the rays may either terminate (“refract to/from outside Ω”) or be reflected back in. An admissible ray is a piecewise linear path, and we refer to the linear segments as legs. Our definition of an admissible ray path excludes degenerate polarizations (which correspond to singular points on the slowness surface) and rays travelling along an interface. In the proof of Theorem <ref> it is irrelevant whether these are included; their exclusion is not used nor would their inclusion be an issue. Rays tangent to an interface are irrelevant in the same way, as are the rare cases where the reflection or transmission coefficient is zero despite there being a kinematically possible ray. §.§ Data We consider two kinds of data: pure travel time data (to be denoted by ) and travel time data decorated with direction information (to be denoted by ). The full data set corresponding to the four parameters (ω,Ω,a,A) is the set (ω,Ω,a,A) = { (t,x,p,y,q); x,y∈∂Ω, there is an admissible ray path from x to y with initial momentum p, final momentum q, and total length t} . The pure travel time data set without directional information is (ω,Ω,a,A) = { (t,x,y); (t,x,p,y,q)∈(ω,Ω,a,A) for some p∈ T_x^*Ω̅ and q∈ T_y^*Ω̅} . These two sets may be seen as subsets: (ω,Ω,a,A)⊂×(∂ T^*Ω̅)^2 and (ω,Ω,a,A)⊂×(∂Ω)^2. § INVERSE PROBLEMS PROOFS This section is devoted to the proof of the inverse problems results, Theorems <ref> and <ref>. We will make use of Theorems <ref> and <ref>; besides them, we only need very basic algebraic geometry. §.§ Proof of Theorem <ref> The first result follows easily from our algebraic results, Theorems <ref> and <ref> that we prove in <ref>. 
Theorem <ref> implies that there is an open and dense (in the Zariski sense) set W_1⊂n so that the slowness polynomial P_a is irreducible for all a∈ W_1. Theorem <ref> implies that there is an open and dense set W_2⊂n so that if P_a_1=P_a_2 for a_1,a_2∈ W_1, then a_1=a_2. The set W W_1∩ W_2 is open and dense (in the Zariski sense) in n. If a∈ W, then the slowness polynomial P_a is irreducible. The Zariski-closure of the relatively open (in the Euclidean sense) subset of the slowness surface is a subvariety of the slowness surface, and it is of full dimension. Due to irreducibility this closure is the whole slowness surface. Thus for a∈ W a small subset of the slowness surface determines the whole slowness surface, which in turn determines the stiffness tensor. The positivity property of the stiffness tensor was irrelevant. The claim remains true in the physically relevant open subset n⊂n by taking U=W∩n. §.§ Proof of Theorem <ref> This proof will rely on Theorem <ref> without the use any algebraic geometry. We will split the proof in three parts, proven separately below: * If (ω_1,Ω,a_1,A_1) ≈(ω_2,Ω,a_2,A_2), then A_1=A_2. * If (ω_1,Ω,a_1,A_1) ≈(ω_2,Ω,a_2,A_2), then ω_1=ω_2. * If (ω_1,Ω,a_1,A_1) ≈(ω_2,Ω,a_2,A_2), then a_1=a_2. Roughly speaking, we will prove the first part by studying the travel times of nearby points, the second part by varying a line segment and detecting when it hits ∂ω_i, and the third part by peeling off the top layer to get a problem on ∂ω that is similar to the first step. These parts are illustrated in Figure <ref>. For any x∈∂Ω, let ν(x) denote the inward-pointing unit normal vector to the boundary ∂Ω. Given any direction d∈^n-1, we denote ∂_dΩ = { x∈∂Ω ; d·ν(x)>0 } . This is the subset of the boundary where d points inwards. The boundary of this set, ∂∂_dΩ⊂∂Ω, is the set where d is tangent to the boundary. Due to strict convexity of Ω there is a unique τ(x,d)>0 for every x∈∂_dΩ so that x+τ(x,d)d∈∂Ω. This is the distance through Ω starting at x in the direction d. For x∈∂∂_dΩ we set τ(x,d)=0, and we do not define the function at all where d points outwards. For both i=1,2, we denote v_i(x,d) = τ(x,d) /inf{t>0;(t,x,x+τ(x,d)d)∈(ω_i,Ω,a_i,A_i)} . This can be seen as a speed through Ω starting from x and ending in the direction d from x, but not knowing the initial and final directions of the minimizing ray or whether the ray has reflected from the interfaces ∂ω or ∂Ω, or whether it has met ∂ω tangentially. This function v_i is a suitable form of data for the first two steps of the proof. Let n≥2. Let ω,Ω⊂^n be nested domains. Take any d∈^n-1 and x∈∂_dΩ. Denote y x+τ(x,d)d. Let a,A∈n be two stiffness tensors whose Christoffel matrices Γ_a(p) and Γ_A(p) have a simple largest eigenvalue for all p≠0. a) If a=A or x+d∩ω̅=∅, then the fastest admissible ray path between x and y is the qP polarized ray travelling along the straight line between the points. b) If a and A are admissible nested stiffness tensors and x+d∩ω≠∅, then the shortest travel time between x and y is strictly larger than it would be if a were equal to A. a) The qP slowness surface is strictly convex as observed in <cit.>, so the integral curves of the Hamiltonian flow do indeed minimize length. With a constant stiffness tensor this minimization property is global. b) The nestedness property of the qP branches of the slowness surfaces imply that all ray paths for the tensor a are slower than those of A in the same direction. 
Therefore for every admissible ray path that meets ω the total travel time is strictly bigger than the length of that piecewise smooth curve measured in the qP Finsler geometry of A. Therefore the shortest travel time of an admissible ray path has to be longer than it would be if a were changed to be equal to A. We will denote the data sets by _i(ω,Ω,a_i,A_i) and _i similarly. The set U is taken to be that provided by Theorem <ref>. The functions v_i(x,d) of (<ref>) are defined in the subset { (x,d) ∈∂Ω×^n-1 ; x∈∂_dΩ} and are continuous in a neighborhood 𝒰 of the subset τ^-1(0), corresponding to short geodesics that do not meet ω̅_i. The assumption _1 ≈_2 implies that the functions v_1 and v_2 agree in an open and dense subset of 𝒰, and by continuity they agree on all of 𝒰. Thus near the boundary we may work as if _1 = _2. In fact, these functions v_i only depend on d in 𝒰. Fix any direction d∈^n-1. By strict convexity of the nested domains ω and Ω, there is a neighborhood Y⊂∂Ω of ∂∂_dΩ so that for all x∈Ŷ Y∩∂_dΩ the ray starting at x in the direction d does not meet ω̅_1∪ω̅_2. (We remind the reader that the direction d is tangent to the boundary precisely in the set ∂∂_dΩ. Therefore in a small neighborhood of this set the line segments in the direction d through Ω are short.) By Lemma <ref> a qP polarized ray travelling from x in the direction d minimizes the travel time between x and x+τ(x,d)d. This implies that both functions v_i(,d) are constant in Ŷ. By the assumption of the agreement of the data , these two functions agree. Let us denote to shared constant value by v(d). Therefore the two models give rise to the same surfaces S^* = { v(d)d ; d∈^n-1} . The surface S^* is the strictly convex unit sphere of the Finsler geometry corresponding to the qP polarized waves; cf. <cit.>. By taking the Legendre transform, the set S^* determines the dual sphere S⊂^n in the usual sense of dual norms. This cosphere S is exactly the qP branch of the slowness surface. By assumption each A_i is in U, the open and dense set provided by Theorem <ref>. Therefore this branch of the slowness surface determines the stiffness tensor, and so A_1=A_2. We denote AA_1=A_2. Again, fix any direction d∈^n-1. Let Y_i^d⊂∂_dΩ be the subset where v_i(x,d) takes the constant value v(d); cf. part <ref> of the proof. Let us denote 𝒰_i'={(x,d);x∈ Y_i^d}. As the data is defined as subsets of (the real axis and two copies of) the set ∂Ω×^n-1, it follows from approximate equality of the data (the assumption _1≈_2) that 𝒰_1' = 𝒰_2'U' . We will use this set to describe the inner domains ω_i. It follows from Lemma <ref> and the definition of v_i as a directed travel time that if the line x+d meets ω_i, then x∉ Y_i^d, and that if it does not meet ω̅_i, then x∈ Y_i^d. The line x+d is tangent to ∂ω_i if and only if x∈∂ Y_i^d. We thus know which lines meet ω_i, and can write the smaller domain as ω_i = Ω∖⋃_(x,d)∈𝒰'_i (x+d) = Ω∖⋃_(x,d)∈𝒰' (x+d) . Therefore ω_1=ω_2 as claimed. We can rephrase the proof above loosely as follows. We may think that Y_1^d=Y_2^d (although this was not assumed to hold perfectly and for all d) and say that the two strictly convex and smooth domains ω_1 and ω_2 have the same tangent lines so they are equal. As in the previous proofs, we can essentially replace the assumption _1≈_2 with the stronger one _1=_2 because we are using open subsets of the data sets rather than relying on rare features. We omit the details in this instance for clarity. 
By the previous parts of the theorem, we now know that A_1=A_2A and ω_1=ω_2ω. It remains to show that a_1=a_2. As in part <ref>, it suffices to prove that some non-empty open subsets of the qP branches of the two slowness surfaces agree. Each point of ∂Ω×^n-1 defines a ray starting at the given point on ∂Ω in the given direction in ^n-1. For any x∈Ω, there is a subset F_x⊂∂Ω×^n-1 so that the corresponding rays meet x. The set F_x may be thought of as the graph of the unit vector field on ∂Ω pointing towards x. Let F_x'⊂ F_x be the subset corresponding to rays that do not meet ω before x. Let f_A^n→[0,∞) be the smooth and strictly convex norm whose unit sphere is the qP branch of the slowness surface corresponding to the stiffness tensor A. Let f^*_A be the dual norm and let ϕ_A^n→^n be the norm-preserving and homogeneous (but possibly non-linear) Legendre transformation satisfying f(p)^2=ϕ_A(p)p. For a direction v∈^n-1, let us denote Q_A(v)=ϕ_A^-1(v/f_A^*(v)). In words, Q_A(v) is the momentum corresponding to the qP polarized wave travelling in the direction v in a material given by A. The Legendre transform is depicted in Figure <ref>, where points (or arrows) on the cotangent space correspond to arrows (or points, respectively) on the tangent space. Let us then define F_x” = { (z,Q_A(v)); (z,v)∈ F_x' } . This is the set of qP momenta (instead of directions) on the boundary so that the corresponding rays meet x without hitting ω̅ first. For each (z,p)∈ F_x” the travel time (according to the Hamiltonian flow) of the qP wave from z to x is f_A^*(x-z). Now let x,y∈∂ω be two distinct points and define T_i(x,y) = inf{ t -f_A^*(x-z) -f_A^*(y-w) ; (t,z,p,w,-q)∈_i and (z,p)∈ F_x” and (w,q)∈ F_y”} . Each ray considered here starts with a qP polarized leg from a point z∈∂Ω to x∈∂ω and ends in a similar leg from y to w. As the travel times of the first and last legs are removed from the total travel time, our T(x,y) is the shortest travel time between the points x and y with an admissible ray path. Because the qP branches of the slowness surfaces of a_i and A are nested by assumption, all momenta are available for the segments of the ray path in ω starting at x and y. All the travel times and the geometry between z and x and also between y and w are the same between the two models by the previous two steps, and the only remaining dependence on i is what happens between x and y. We claim that when x and y are sufficiently close to each other, T_i(x,y) = f_a_i^*(x-y). This means that the shortest admissible ray path between x and y is the direct qP ray within ω. This is seen as follows: If a ray path has a leg in the outer layer Ω∖ω̅ between x and y (which may well happen, as we do not a priori know the geometry of the rays we are looking at), then by strict convexity of ω this leg must come all the way to the outer boundary ∂Ω. If x and y are so close to each other that f_a_i^*(x-y) is less than the f_A^*-distance between ∂ω and ∂Ω, then any leg joining ∂ω to ∂Ω takes a longer time than the straightest option through ω, despite the waves being slower in ω than in Ω∖ω̅. Within ω̅ the shortest travel time is clearly achieved by going in a straight line with the fastest polarization; cf. the Lemma <ref>. If we fix x∈∂ω, we have found that equation (<ref>) holds for all y in a small punctured neighborhood of x on ∂ω for both i=1,2. Because _1=_2 implies T_1(x,y)=T_2(x,y), we have found that there is an open set Y_x⊂∂ω so that f_a_1^*(x-y) = f_a_2^*(x-y) for all y∈ Y_x. 
By strict convexity of ∂ω the set Y_x contains an open set of directions, so the unit spheres of f_a_1^* and f_a_2^* agree on an open set. The same thus holds for f_a_1 and f_a_2 as well, and the claim a_1=a_2 follows from Theorem <ref>. § THE ALGEBRAIC GEOMETRY OF FAMILIES OF SLOWNESS SURFACES This section contains the technical algebro-geometric arguments needed to prove Theorems <ref> and <ref>; it demands more expertise from the reader than <ref>. Our arguments use only material typically covered in a first course on scheme-theoretic algebraic geometry. Standard references for this material include <cit.>. We provide copious references to specific propositions and theorems to help orient readers less familiar with schemes. §.§ Independent components of a stiffness tensor: Voigt notation The major and minor symmetries of a (reduced) stiffness tensor allow for a simplification of notation that eliminates clutter, following Voigt. In dimension 2, one replaces pairs of indices ij by a single index k according to the rule 11 ↦ 1, 22 ↦ 2, 12 ↦ 3. To avoid confusion, when we contract indices following this convention, we also replace the letter a with the letter b: for example, the reduced stiffness tensor component a_1112 = a_(11)(12) is replaced by b_13. In dimension 3 one replaces pairs of indices ij by a single index k according to the rule 11 ↦ 1, 22 ↦ 2, 33 ↦ 3, 23 ↦ 4, 13 ↦ 5, 12 ↦ 6. Thus, for example, the reduced stiffness tensor component a_2312 = a_(23)(12) is replaced by b_46. Next, we count the number of independent parameters of the form a_ijkl, or equivalently, the number of independent parameters of the form b_rs, once we take the symmetries (<ref>) into account. The set of distinct a_ijkl is in bijection with a set of unordered pairs of unordered pairs of indices {1,…,n}: more precisely, a set whose elements have the form a_(ij)(kl), where the indices belong to {1,…,n} and one can freely commute the indices within a pair of parentheses or commute the pairs, but one cannot freely move indices from one pair to another. The number of unordered pairs of indices 1,…,n is ψ(n) := 1/2n(n+1). Therefore the number of independent components of a stiffness tensor is ψ(ψ(n)) = 1/8n(n+1)(n^2+n+2). When n=2, we obtain ψ(ψ(2)) = 6, which matches our work in <ref>, where we saw the six independent parameters b_11, b_12, b_13, b_22, b_23, and b_33. In dimension n = 3, there are ψ(ψ(3)) = 21 independent stiffness tensor components. §.§ Algebro-geometric set-up In this section, we omit the positivity condition that a stiffness tensor satisfies (Definition <ref>), in order to import ideas from the scheme-theoretic formulation of algebraic geometry, following Grothendieck. §.§.§ The slowness polynomial Let A be a finitely generated -algebra, and let R := A[p_0,…,p_n] be a polynomial ring in n+1 variables with coefficients in A. We view the Christoffel matrix (<ref>) as a symmetric n× n matrix Γ(p) with entries in R, whose il-th entry is Γ(p)_il = ∑_1≤ j,k ≤ n a_ijklp_jp_k, and where the parameters a_ijkl∈ A are subject to the symmetry relations (<ref>). Denoting by I_n the n× n identity matrix over A, the slowness polynomial P(p) ∈ R is P(p) := det(Γ(p)-I_n). This is a polynomial of total degree d=2n in p_1,…,p_n. The homogenized slowness polynomial P̃(p) ∈ R is obtained by setting P̃(p) := det(Γ(p) - p_0^2I_n). The completed slowness hypersurface S̃ is the algebraic hypersurface in the projective space _A^n where P̃ vanishes.
More precisely, the quotient ring homomorphism A[v_0,…,v_n] → A[v_0,…,v_n]/(P̃) describes a closed embedding of S̃ into _A^n via the Proj construction (see, for example, <cit.>). From now on, we specialize to the case A = [a_ijkl : 1≤ i,j,k,l ≤ n], where the a_ijkl are indeterminates subject to the symmetry relations (<ref>). By <ref>, the ring A is a free -algebra on m := ψ(ψ(n))= 1/8n(n+1)(n^2+n+2) generators. Let n = 2. Then A = [a_ijkl : 1≤ i,j,k,l ≤ 2], and there are only ψ(ψ(2)) = 6 distinct a_ijkl's, which we relabel b_11, b_12, b_13, b_22, b_23, and b_33 using Voigt notation (<ref>) as we did in <ref>. Thus, A is the polynomial ring [b_11,b_12,b_13,b_22,b_23,b_33], and the Christoffel matrix Γ(p) is given as in (<ref>), with associated homogenized slowness polynomial P̃(p) as in (<ref>). The associated completed slowness hypersurface is a quartic curve on ^2_A defined by the condition P̃(p) = 0. §.§.§ The slowness fibration Generalizing Example <ref>, the homogenized slowness polynomial can be viewed as a homogeneous polynomial of degree 2n in the graded ring A[v_0,…,v_n], where A = [b_ij : 1≤ i ≤ j ≤ψ(n)], a polynomial ring in m variables. From this perspective, the completed slowness hypersurface may be viewed as a hypersurface in the product of an affine space and a projective space: it is the vanishing locus {P̃ = 0}⊂^n_A ≅_^m×_^n = Spec A ×_ Proj [v_0,…,v_n]. We call this hypersurface the slowness bundle, and denote its closed immersion into _^m×_^n by ι. Composing ι with the projection π_1_^m×_^n →_^m onto the first factor gives a fibration f := π_1∘ι from the slowness bundle to _^m that we call the slowness surface fibration. The fiber f^-1(b) of f above a rational point b∈^m() = ^m is the hypersurface of degree 2n in ^n_ obtained by specializing the parameters b_ij according to the coordinates of b. For a field extension K/, we write f_K S_K →_K^m for the slowness surface fibration obtained as above after replacing  by K everywhere. This is known in algebraic geometry as the “base-extension of the morphism f by the map K →”. We are mostly interested in the cases K = and K =. We call f__→_^m the complexified slowness surface fibration. §.§ Key results The precise result underpinning Theorem <ref> is the following. The set (f) := {b∈^m_ : f_^-1(b) is irreducible} is Zariski-open in ^m_. Consequently, it is either empty, or it is the complement of a finite union of algebraic varieties, each of dimension ≤ m-1. Theorem <ref> follows from the following result, due to Grothendieck, which uses the full power of scheme theory. Let f X → Y be a morphism of schemes. Assume that f is proper, flat, and of finite presentation. Then the set of y ∈ Y such that the fiber X_y := f^-1(y) is geometrically irreducible is Zariski open in Y. See <cit.>. In the event that Y is a locally Noetherian scheme, one can replace the condition “of finite presentation” with “of finite type” <cit.>. However, this condition is in turn subsumed by the properness condition (by definition of properness!). Since the coordinate ring A of the affine space _^m is Noetherian, the scheme _^m is locally Noetherian. By Remark <ref>, to deduce Theorem <ref> from Theorem <ref>, we must show that the slowness surface fibration f →_^m is a proper, flat morphism. We say a few words about what these conditions mean first. In algebraic geometry, the notion of properness mimics the analogous notion between complex analytic spaces: the preimage of a compact set is compact. In particular, a proper morphism takes closed sets to closed sets; see <cit.>.
Flatness is an algebraic condition that, in conjunction with properness and local Noetherianity of the target, guarantees the nonempty fibers of f vary nicely (e.g., they all have the same Euler characteristic); see <cit.>. To prove that the slowness surface fibration f→_^m is flat, we shall use the “miracle flatness” criterion. Let f X → Y be a morphism of finite type, equidimensional schemes over a field. Suppose that X is Cohen-Macaulay, Y is regular, and the fibers of f have dimension X - Y. Then f is a flat morphism. See <cit.>. The slowness surface fibration f→_^m is a proper flat morphism. If K/ is a field extension, the same conclusion holds for the base-extension f_K S_K →_K^m. First, we prove that f is proper. The scheme ^n_ being projective, its structure morphism ^n_→ is proper. Consider the fibered product diagram _^m×__^n [r]^π_2[d]^π_1 _^n [d] _^m[r] Proper morphisms are stable under base change <cit.>, and hence π_1 is proper. Closed immersions being proper <cit.>, the morphism ι S ^n_A is also proper. Finally, a composition of proper morphisms is proper <cit.>, whence f = π_1∘ι is proper. Next, we show that the morphism f→_^m is flat via Theorem <ref>. The schemes  and _^m are of finite type over a field and equidimensional ( is a hypersurface in ^n_A), so it suffices to verify that  is a Cohen–Macaulay scheme, that _^m is regular, and that the fibers of f all have dimension n-1. The surface  is a hypersurface in a projective space, so it is a local complete intersection, hence a Cohen-Macaulay scheme <cit.>. The affine space _^m is smooth, hence regular; the fibers of f are all hypersurfaces of ^n_, because the coefficient of p_0^2 in the defining equation of every fiber is non-zero, and hence all have dimension n-1. Hence, the morphism f is flat. The claim for the base-extension f_K S_K →_K^m follows either by replacing  with K in the above arguments, or by noting that proper and flat are properties of morphisms that are stable under base-extension (see, e.g., <cit.>). The conclusion that (f) is Zariski open in _^m follows from Theorem <ref> and Proposition<ref>, taking into account Remark <ref>. It follows that (f) is either empty, or it is the complement of a proper closed subset of _^m. Such a set is determined by an ideal I ⊆[b_ij : 1 ≤ i,j ≤(n) ] <cit.>. The base-extension I_ = I⊗_ determines a proper closed subset of _^m, whose finitely many irreducible components have dimension ≤ m - 1. This closed subset descends to finitely many irreducible components in _^m, consisting of complex conjugate pairs of irreducible varieties in _^m. §.§ Ex uno plura All our work so far does not preclude the possibility that the Zariski open subset (f) of _^m defined in Theorem <ref> is empty! We verify that this is not the case in dimensions n ∈{2, 3} by giving examples of slowness polynomials that are irreducible over . We use a standard arithmetic trick: reduction modulo a prime. The principle involved is simple: if a polynomial F(x_0,…,x_n) with coefficients in  factors nontrivially, then it also factors when we reduce its coefficients modulo any prime p. Thus, if a polynomial with coefficients in  is irreducible when considered over the finite field _p, then it must be irreducible over . This principle is extraordinarily useful, because by finiteness of _p checking whether the reduction F̅(x_0,…,x_n) ∈_p[x_0,…,x_n] is irreducible is a finite, fast computation. 
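As a toy illustration of this reduction principle (ours, and univariate only: SymPy does not factor multivariate polynomials over finite fields, and the multivariate checks over the extension fields used in this paper are carried out in magma), consider the following.

import sympy as sp

x = sp.symbols("x")
F = x**4 + x + 1                       # integer coefficients

print(sp.factor_list(F, modulus=2))    # stays in one piece modulo 2 ...
print(sp.factor_list(F))               # ... hence it is irreducible over the rationals too

The next paragraph explains why the paper needs the stronger check over an extension field of controlled degree, not just over the prime field.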
Guaranteeing that the polynomial remains irreducible when considered over  requires working over a finite extension _p^d of _p with controlled degree d. We make this idea explicit in the following lemma, whose proof we include for lack of a good reference. Let F(x_0,…,x_n) ∈[x_0,…,x_n] be a homogeneous polynomial of degree d. Suppose there is a prime p such that the reduction F̅(x_0,…,x_n) ∈_p[x_0,…,x_n] of F modulo p is irreducible in the finite field _p^d of cardinality p^d. Then F(x_0,…,x_n) is irreducible in [x_0,…,x_n]. By <cit.>, to prove that the polynomial F is irreducible over , it suffices to show that it is irreducible in [x_0,…,x_n], where denotes a fixed algebraic closure of . The field  consists of all algebraic numbers: the roots of single-variable polynomials with rational coefficients; it is countable. There is a Galois field extension K/ of finite degree where F already factors into -irreducible polynomials. To see this, note that each coefficient of each factor in a  factorization is an algebraic number, hence has finite degree over ; we can let K be the Galois closure of the field obtained from  by adjoining all the coefficients of all the factors of F over  (see also <cit.>). Let  be a prime ideal in the ring of integers _K of K lying over p, i.e., ∩ = (p). The field _ := _K/_K is a finite field extension of _p. Let F̅ = g_1,…,g_m be a factorization of F̅ in _[x_0,…,x_n]. The Galois group G := (_/_p) acts on the set {g_1,…,g_m}. The orbits of this action correspond to the irreducible factors of the reduction of F̅∈_p[x_0,…,x_n]. This reduction is irreducible, because by hypothesis F̅ is irreducible over the larger field _p^d, so the action of G on {g_1,…,g_m} is transitive. It follows that the factors {g_1,…,g_m} of F̅ must all have the same degree, and hence m | d. By the orbit-stabilizer theorem, the stabilizer H_g_i≤ G of g_i has index m; it is a normal subgroup of G because G is cyclic. By Galois theory, the polynomial F̅ already factors over the fixed field K()^H_g_i_p^m as g_i· h_i for some h_i ∈_p^m[x_0,…,x_n]. However, by hypothesis, the polynomial F̅∈_p[x_0,…,x_n] is irreducible over _p^d, and hence is irreducible over _p^m, because _p^m⊂_p^d as m | d. This implies that m = 1, i.e., F̅ is irreducible in _[x_0,…,x_n], and therefore F irreducible in K[x_0,…,x_n]. By definition of the field K, we conclude that F is irreducible in [x_0,…,x_n]. The hypothesis that F ∈[x_0,…,x_n] is homogeneous can be weakened. We used this hypothesis tacitly above: we assumed that the reduction F̅∈_[x_0,…,x_n] has degree d. This is certainly the case if F is homogeneous and F̅ is irreducible (and hence nonzero). Let n=2. Using the notation of Example <ref>, consider the stiffness tensor with components b_11 = 20, b_12 = 39, b_13 = -65, b_22 = -16, b_23 = -87, b_33 = 30. The corresponding homogenized slowness polynomial P̃(p) = -3625p_1^4 + 1590p_1^3p_2 + 7129p_1^2p_2^2 - 50p_1^2p_0^2 + 8866p_1p_2^3 + 304p_1p_2p_0^2 - 8049p_2^4 - 14p_2^2p_0^2 + p_0^4 is irreducible over : apply Lemma <ref> with d = 4 and p = 7: a magma calculation shows that the reduction of this polynomial modulo 7 is irreducible in the finite field _7^4; see <cit.>. Let n=3. 
Using Voigt notation (<ref>), consider the stiffness tensor of albite, an abundant feldspar mineral in the Earth's crust, which has the components <cit.> b_11 = 691, b_12 = 340, b_13 = 308, b_14 = 51, b_15 = -24, b_16 = -9, b_22 = 1835, b_23 = 55, b_24 = -39, b_25 = -77, b_26 = -58, b_33 = 1795, b_34 = -87, b_35 = 71, b_36 = -98, b_44 = 249, b_45 = -24, b_46 = -72, b_55 = 268, b_56 = 5, b_66 = 335. The corresponding homogenized slowness polynomial P̃(p) = 61808197p_1^6 - 29183112p_1^5p_2 + 12224260p_1^5p_3 + 295917556p_1^4p_2^2 - 121937730p_1^4p_2p_3 + 348169660p_1^4p_3^2 - 505771p_1^4p_0^2 - 70975626p_1^3p_2^3 + 152129354p_1^3p_2^2p_3 - 119421358p_1^3p_2p_3^2 + 155018p_1^3p_2p_0^2 - 174550934p_1^3p_3^3 - 8528p_1^3p_3p_0^2 + 383486749p_1^2p_2^4 - 300844962p_1^2p_2^3p_3 + 1468226482p_1^2p_2^2p_3^2 - 1740692p_1^2p_2^2p_0^2 - 272180462p_1^2p_2p_3^3 + 436994p_1^2p_2p_3p_0^2 + 404080725p_1^2p_3^4 - 1875763p_1^2p_3^2p_0^2 + 1294p_1^2p_0^4 - 13416750p_1p_2^5 + 282989760p_1p_2^4p_3 + 154078108p_1p_2^3p_3^2 + 134674p_1p_2^3p_0^2 + 67718200p_1p_2^2p_3^3 - 536166p_1p_2^2p_3p_0^2 + 82126914p_1p_2p_3^4 + 44930p_1p_2p_3^2p_0^2 - 182p_1p_2p_0^4 - 99344136p_1p_3^5 + 422102p_1p_3^3p_0^2 - 50p_1p_3p_0^4 + 141880986p_2^6 - 135372072p_2^5p_3 + 1205554155p_2^4p_3^2 - 1144986p_2^4p_0^2 - 88997104p_2^3p_3^3 + 413432p_2^3p_3p_0^2 + 959527532p_2^2p_3^4 - 4481283p_2^2p_3^2p_0^2 + 2419p_2^2p_0^4 - 38299560p_2p_3^5 + 167364p_2p_3^3p_0^2 - 242p_2p_3p_0^4 + 115762815p_3^6 - 981561p_3^4p_0^2 + 2312p_3^2p_0^4 - p_0^6 is irreducible over : apply Lemma <ref> with d = 6 and p = 5: a magma calculation shows that the reduction of this polynomial modulo 7 is irreducible in the finite field _5^6; see <cit.>. By Theorem <ref> we know that the subset (f) of the parameter space of stiffness tensors whose corresponding homogenized slowness polynomials are irreducible over  is a Zariski open subset of _^m. Since _^m is irreducible, a Zariski open subset is dense, as long as it is not empty. Example <ref> shows that (f) is not empty when n = 2, and Example <ref> is nonempty when n = 3. §.§ Generic unique reconstruction of stiffness tensors We prove Theorem <ref>, i.e., that a generic stiffness tensor in dimensions 2 and 3 is uniquely associated to its slowness polynomial. While the proof of this fact uses heavy-duty machinery from algebraic geometry, we may perform the reconstruction of a stiffness tensor from a particular slowness polynomial quickly in practice, using simple ideas from the theory of Gröbner bases. We begin with the case n=2 and explain the necessary modifications for n=3 case at the end of the proof. Using the notation of <ref>, we define the rational map (<ref>) between complex projective spaces g^6_[b_11,…,b_33,r] ^8_[c_1,…,c_9], [b_11,…,b_33,r] ↦ [b_11b_33 - b_13^2,…,-r(b_22 + b_33),r^2]. The closed subset Π of ^6 where g is not defined is 2-dimensional, although we will not use this fact explicitly. Let X:= _Π(^6) be the blow-up of ^6 along Π <cit.>. This scheme comes with a morphism π X →^6 such that the composition f := g∘π X →^8 is a proper morphism, and such that X∖π^-1(Π) ^6 ∖Π. Let Y be the image of f, so that the map f X → Y is a surjective proper morphism. Properness ensures that the fiber dimension function h Y → y ↦ f^-1(y) is upper semi-continuous, i.e., for each x∈ the set h^-1((-∞,x)) is Zariski open <cit.>. In particular, if there is a point y ∈ Y such that f^-1(y) = 0, then there is a nonempty Zariski open subset V⊆ Y over which all fibers are 0-dimensional. 
(Note that both X and Y are irreducible varieties.) The Gröbner basis calculation in dimension 2 in <ref> shows precisely that such a point y ∈ Y exists. The fibers over V have finite cardinality, so the induced morphism f U := f^-1(V) → V is quasi-finite. It is also proper, as it is a base-extension of a proper morphism. A proper, quasifinite morphism is finite <cit.>. Finally, the fiber degree function is also an upper semi-continuous function on the target of a finite morphism <cit.>. Our Gröbner basis calculation also shows that there is a point u ∈ U such that f^-1(f(u)) consists of a single point. So by upper semi-continuity, there is a Zariski open subset V' ⊆ V such that, for all y ∈ V', the fiber f^-1(y) consists of exactly one point. We conclude that the map f X → Y is generically injective. Note that the locus where r = 1 is the distinguished dense open affine chart D_+(r) ⊂^6, and that f and g coincide on D_+(r)∩ (^6 ∖Π), so f is still generically injective after “dehomogenizing r”. This concludes the proof of the Theorem in the case n = 2. The argument in dimension 3 is analogous, but there are more parameters to the stiffness tensor, as well as coefficients in the corresponding slowness polynomial. The map (<ref>) is thus replaced by a higher-dimensional version g^21^49. We need only check that there is a point x ∈ X in the domain of the corresponding map f X → Y such that f^-1(f(x)) consists of a single point. We use the slowness polynomial P̃(p) of Example <ref>: we give code in Appendix <ref> that shows there is exactly one stiffness tensor associated to P̃(p). The proof of Theorem <ref> works in dimension n provided one has a single example of a slowness polynomial in dimension n that arises from a unique stiffness tensor. §.§ Which polynomials are slowness polynomials? In dimension 2, we have seen (<ref>) that the slowness polynomial has the form c_1p_1^4 + c_2p_1^3p_2 + c_3p_1^2p_2^2 + c_4p_1^2 + c_5p_1p_2^3 + c_6p_1p_2 + c_7p_2^4 + c_8p_2^2 + c_9 for some (c_1,…,c_9) ∈^9. However, not every polynomial of this kind arises from a stiffness tensor. For example, a close inspection of (<ref>) shows that we must have c_9 = 1. Furthermore, the remaining coefficients c_1,…,c_8 are subject to the relations (<ref>). We can use elimination theory to compute the exact set of constraints that must be satisfied by c_1,…,c_8 (implicitly, from now on we simply take for granted that c_9 = 1). As a by-product, we shall obtain a second proof of Theorem <ref> in dimension n = 2. While in principle a similar argument could be used in the case n = 3, the required computations are currently infeasible. Let X be the variety in the affine space _^14 with coordinates b_11,…,b_33,c_1,…,c_8 cut out by the equations (<ref>). More precisely, X is [b_11,…,b_33,c_1,…,c_8]/I, where I is the ideal of [b_11,…,b_33,c_1,…,c_8] given by I := ⟨ c_1 - (b_11b_33-b_13^2), c_2 - 2(b_11b_23-b_12b_13), c_3 - (b_11b_22-b_12^2-2b_12b_33+2b_13b_23), c_4 + (b_11+b_33), c_5 - 2(-b_12b_23 + b_13b_22), c_6 + 2(b_13+b_23), c_7 - (b_22b_33 - b_23^2), c_8 + (b_22+b_33) ⟩. We consider the two projections _^14 = [b_11,…,b_33,c_1,…,c_8] [rr]^p [d]^q _^6 = [b_11,…,b_33] _^8 = [c_1,…,c_8] and by a slight abuse of notation, we also denote their restrictions to X by p X →_^8 and q X→_^6. 
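To see the two projections in action, here is a hedged SymPy sketch (ours, not the paper's magma code). The Voigt-form 2×2 Christoffel matrix written below is our reconstruction of the display referenced earlier; it is chosen so that it reproduces both the generators of the ideal I above and the explicit quartic of the n = 2 example.

import sympy as sp

p0, p1, p2 = sp.symbols("p0 p1 p2")
b11, b12, b13, b22, b23, b33 = sp.symbols("b11 b12 b13 b22 b23 b33")

# 2-D Christoffel matrix in Voigt notation (our reconstruction).
G12 = b13*p1**2 + (b12 + b33)*p1*p2 + b23*p2**2
Gamma = sp.Matrix([[b11*p1**2 + 2*b13*p1*p2 + b33*p2**2, G12],
                   [G12, b33*p1**2 + 2*b23*p1*p2 + b22*p2**2]])

P = sp.Poly(sp.expand((Gamma - p0**2*sp.eye(2)).det()), p0, p1, p2)

# q-direction: the coefficients c_1, ..., c_8 as polynomials in the b_ij.
monomials = [p1**4, p1**3*p2, p1**2*p2**2, p1**2*p0**2, p1*p2**3,
             p1*p2*p0**2, p2**4, p2**2*p0**2]
c = [sp.factor(P.coeff_monomial(m)) for m in monomials]
for k, ck in enumerate(c, start=1):
    print(f"c_{k} =", ck)              # e.g. c_1 = b11*b33 - b13**2

# Sanity check against the explicit n = 2 example given earlier.
vals = {b11: 20, b12: 39, b13: -65, b22: -16, b23: -87, b33: 30}
print([ck.subs(vals) for ck in c])     # coefficients of that quartic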
An elementary but important observation is that q X →_^6 is an isomorphism, because the ring map [b_11,…,b_33,c_1,…,c_8] →[b_11,…,b_33] that sends b_ij to itself and maps c_i according to the relations (<ref>) (so, e.g., c_1 maps to b_11b_33-b_13^2) is surjective and has kernel I. This tells us that X is a 6-dimensional complex algebraic variety. We now turn to the projection p X →_^8. The image p(X) consists of 8-tuples (c_1,…,c_8) that, together with c_9 = 1, give a set of coefficients of a polynomial that is the slowness polynomial of at least one stiffness tensor (not necessarily positive) in dimension 2. By <cit.>, the Zariski closure of the image of p is cut out by the elimination ideal J := I ∩[c_1,…,c_8] ⊆[c_1,…,c_8], and a basis for this ideal can be extracted from an appropriate Gröbner basis for I by elimination theory (e.g., <cit.>). A magma calculation <cit.> shows that J = ⟨ -16c_1^2c_3 + 4c_1c_2^2 - 8c_1c_2c_5 + 16c_1c_3c_4c_8 - 4c_1c_3c_6^2 + 32c_1c_3c_7 - 16c_1c_3c_8^2 - 12c_1c_5^2 + 16c_1c_5c_6c_8 - 16c_1c_6^2c_7 - 4c_2^2c_4c_8 + c_2^2c_6^2 - 12c_2^2c_7 + 4c_2^2c_8^2 + 16c_2c_4c_6c_7 - 2c_2c_5c_6^2 - 8c_2c_5c_7 - 4c_3c_4^2c_7 + 4c_3c_4c_7c_8 - c_3c_6^2c_7 - 4c_3c_7^2 + c_4^2c_5^2 - c_4c_5^2c_8 + c_5^2c_6^2 + 4c_5^2c_7, -4c_1^2 + 4c_1c_4c_8 - c_1c_6^2 + 8c_1c_7 - 4c_1c_8^2 - c_2^2 - 2c_2c_5 + 2c_2c_6c_8 - c_3c_6^2 - 4c_4^2c_7 + 2c_4c_5c_6 + 4c_4c_7c_8 - c_5^2 - c_6^2c_7 - 4c_7^2 ⟩. With such an explicit description of J, it is possible to compute the dimension of Y := p(X) = [c_1,…,c_8]/J. A magma computation shows that the dimension is 6, which is the same dimension of X. One can go further and compute the image p(X), and not simply its Zariski closure, using an effective version of Chevalley's Theorem, which asserts that the set-theoretic image p(X) is a constructible set <cit.>. This way we obtain necessary and sufficient conditions on (c_1,…,c_8) so that (<ref>) is the slowness polynomial for a stiffness tensor (note, however, that our algebro-geometric set-up does not take into account the positivity condition that must be satisfied by a physical stiffness tensor). For an ideal I' ⊂[c_1,…,c_8], write V(I') for the affine variety cut out in ^8_ by the ideal I'. 
Then, using the package ZariskiFrames <cit.>, we compute that p(X) = (V(I_1)∖ V(J_1)) ∪ (V(I_2)∖ V(J_2)) ∪ (V(I_3)∖ V(J_3)), where I_1 = J, J_1 = ⟨ c_4^2 - 2c_4c_8 + c_6^2 + c_8^2, -c_2c_6 + 2c_3c_4 - 2c_3c_8 - 4c_4c_7 + 3c_5c_6 + 4c_7c_8, c_2^2 - 6c_2c_5 + 4c_3^2 - 16c_3c_7 + 9c_5^2 + 16c_7^2, -3c_1c_6 + 2c_2c_4 - 2c_2c_8 + c_3c_6 + c_6c_7, -c_1c_6 - c_3c_6 + 2c_4c_5 - 2c_5c_8 + 3c_6c_7, 2c_1c_4 - 2c_1c_8 + c_2c_6 - 2c_4c_7 + c_5c_6 + 2c_7c_8, c_1c_3 - 2c_1c_7 - c_2c_5 + c_3^2 - 5c_3c_7 + 3c_5^2 + 6c_7^2, c_1c_2 - 3c_1c_5 + c_2c_3 - 3c_2c_7 + c_3c_5 + c_5c_7, c_1^2 - 2c_1c_7 + 2c_2c_5 - c_3^2 + 4c_3c_7 - 2c_5^2 - 3c_7^2 ⟩, I_2 = ⟨ c_4^2 - 2c_4c_8 + c_6^2 + c_8^2, -c_1c_6 - c_3c_6 + 2c_4c_5 - 2c_5c_8 + 3c_6c_7, -c_2c_6 + 2c_3c_4 - 2c_3c_8 - 4c_4c_7 + 3c_5c_6 + 4c_7c_8, -3c_1c_6 + 2c_2c_4 - 2c_2c_8 + c_3c_6 + c_6c_7, 2c_1c_4 - 2c_1c_8 + c_2c_6 - 2c_4c_7 + c_5c_6 + 2c_7c_8, c_1c_6^2 - 16c_2c_5 + 4c_2c_6c_8 + 8c_3^2 - c_3c_6^2 - 32c_3c_7 + 16c_5^2 + 4c_5c_6c_8 - 7c_6^2c_7 + 32c_7^2, 16c_1c_5 - 8c_1c_6c_8 - 4c_2c_3 + c_2c_6^2 + 8c_2c_7 - 4c_3c_5 + 8c_4c_6c_7 - c_5c_6^2 - 8c_5c_7, c_1c_3 - 2c_1c_7 - c_2c_5 + c_3^2 - 5c_3c_7 + 3c_5^2 + 6c_7^2, c_2^2 - 6c_2c_5 + 4c_3^2 - 16c_3c_7 + 9c_5^2 + 16c_7^2, c_1c_2 - 3c_1c_5 + c_2c_3 - 3c_2c_7 + c_3c_5 + c_5c_7, c_1^2 - 2c_1c_7 + 2c_2c_5 - c_3^2 + 4c_3c_7 - 2c_5^2 - 3c_7^2 ⟩, J_2 = ⟨ c_4^2 - 2c_4c_8 + c_6^2 + c_8^2, 4c_3 - 2c_4c_8 - c_6^2 + 24c_7 - 6c_8^2, 2c_2 - c_4c_6 + 2c_5 - c_6c_8, 4c_1 - 2c_4c_8 + c_6^2 - 4c_7 + 2c_8^2, 8c_4c_7 - 2c_4c_8^2 - 2c_5c_6 + c_6^2c_8 - 8c_7c_8 + 2c_8^3, 2c_4c_5 - c_4c_6c_8 - 2c_5c_8 + 8c_6c_7 - c_6c_8^2, 4c_5^2 - 4c_5c_6c_8 + c_6^2c_8^2 + 64c_7^2 - 32c_7c_8^2 + 4c_8^4 ⟩, I_3 = ⟨ c_8^2-4c_7,c_6c_8-2c_5,c_4c_8-c_1-c_3-c_7,2c_6c_7-c_5c_8, c_6^2+2c_1-2c_3+2c_7,c_4c_6-2c_2,c_4^2-4c_1 ⟩, J_3 = ⟨ 1⟩. Note that V(J_3) = ∅. Since p X → Y is a dominant morphism of integral schemes of finite type over a field, both of the same dimension, Chevalley's theorem <cit.> implies that there is a Zariski open subset U ⊂ Y such that the fiber p^-1(u) for u ∈ U is a finite set. In other words, for each u ∈ U, there are only finitely many possible values of b_11,…,b_33 such that the relations (<ref>) hold; more plainly, there are only finitely many stiffness tensors associated to a slowness polynomial corresponding to a point u ∈ U. It is possible to choose U so that the number of stiffness tensors is constant as one varies u ∈ U. This constant is the degree of the map p, which is equal to the degree of the function field extension [(X):(Y)]. We use magma to compute this quantity and show that it is 1; see <cit.>. The computation in fact gives explicit expressions for b_11,…,b_33 in terms of c_1,…,c_8. It shows that the map p X → Y is a surjective, birational morphism, i.e., p has an inverse defined on a Zariski open subset of Y. The case n = 3 of Theorem <ref> can in principle be proved using the same template as above. However, the symbolic computations required when computing Gröbner bases are well beyond the capabilities of modern-day desktop computers. The slowness polynomials involved have 50 monomials, with coefficients c_1,…,c_50, and the stiffness tensor has 21 components b_11,…,b_66. The analogous correspondence diagram for n = 3 has the form _^71 = [b_11,…,b_66,c_1,…,c_50] [d]^p [rr]^q _^21 = [b_11,…,b_66] _^50 = [c_1,…,c_50] Using the map q as before we can show that the variety X ⊂_^71 parametrizing slowness polynomials in terms of stiffness tensors has dimension 21. 
As before, the closure Y = p(X) of the image of p could in principle be computed using elimination theory. This would give a set of polynomials generating an ideal J describing the closure of the image p(X).

§.§ Stiffness tensors with orthorhombic symmetry

Full anisotropy of a stiffness tensor is not an essential hypothesis in the algebro-geometric content of this paper. We illustrate this principle by showing that a slowness surface corresponding to a generic stiffness tensor of a material with orthorhombic symmetries is irreducible. In contrast to the case of a generic fully anisotropic tensor, a slowness surface associated to a generic orthorhombic tensor can have up to four stiffness tensors associated with it. In <cit.>, Helbig and Carcione give sufficient conditions for this phenomenon to occur. We show here that their conditions are also necessary in the generic case. As with triclinic media, Gröbner bases can be used to perform the explicit reconstruction of the possible stiffness tensors. An orthorhombic stiffness tensor is a stiffness tensor a = (a_ijkl) ∈ ^3× 3× 3× 3 such that a_1123 = a_1113 = a_1112 = a_2223 = a_2213 = a_2212 = a_3323 = a_3313 = a_3312 = a_2313 = a_2312 = a_1312 = 0. Using Voigt notation (<ref>), such a tensor has 21 components b_ij, 1 ≤ i ≤ j ≤ 6, but b_14 = b_15 = b_16 = b_24 = b_25 = b_26 = b_34 = b_35 = b_36 = b_45 = b_46 = b_56 = 0, leaving at most 9 independent components b_11, b_12, b_13, b_22, b_23, b_33, b_44, b_55, b_66. The Christoffel matrix of an orthorhombic stiffness tensor is Γ(p) = [ b_11p_1^2 + b_66p_2^2 + b_55p_3^2 (b_12 + b_66)p_1p_2 (b_13 + b_55)p_1p_3; (b_12 + b_66)p_1p_2 b_66p_1^2 + b_22p_2^2 + b_44p_3^2 (b_23 + b_44)p_2p_3; (b_13 + b_55)p_1p_3 (b_23 + b_44)p_2p_3 b_55p_1^2 + b_44p_2^2 + b_33p_3^2 ] We homogenize the associated slowness polynomial det(Γ(p) - I_3) by multiplying its terms by powers of a new variable p_0 so that all terms have the same degree. The homogenized slowness polynomial of such a tensor has the form P̃(p) = det(Γ(p) - p_0^2 I_3) = c_1p_1^6 + c_2p_1^4p_2^2 + c_3p_1^4p_3^2 + c_4p_1^4p_0^2 + c_5p_1^2p_2^4 + c_6p_1^2p_2^2p_3^2 + c_7p_1^2p_2^2p_0^2 + c_8p_1^2p_3^4 + c_9p_1^2p_3^2p_0^2 + c_10p_1^2p_0^4 + c_11p_2^6 + c_12p_2^4p_3^2 + c_13p_2^4p_0^2 + c_14p_2^2p_3^4 + c_15p_2^2p_3^2p_0^2 + c_16p_2^2p_0^4 + c_17p_3^6 + c_18p_3^4p_0^2 + c_19p_3^2p_0^4 + c_20p_0^6, where, for example, we have c_7 = -b_11b_22 - b_11b_44 + b_12^2 + 2b_12b_66 - b_22b_55 - b_44b_66 - b_55b_66. The slowness bundle S is naturally a hypersurface in the product of a 9-dimensional affine space 𝔸^9 with coordinates b_11,…,b_66 and the projective space ℙ^3 with homogeneous coordinates (p_0:p_1:p_2:p_3): S := {P̃(p) = 0} ⊂ 𝔸^9 × ℙ^3. As before, the composition of the inclusion ι: S ↪ 𝔸^9 × ℙ^3 with the projection π_1: 𝔸^9 × ℙ^3 → 𝔸^9 gives rise to the slowness surface fibration f := π_1 ∘ ι: S → 𝔸^9. The slowness polynomial associated to a generic orthorhombic stiffness tensor is irreducible over . A generic geometric integrality argument, following the proof of Theorem <ref>, shows that the set of b ∈ 𝔸^9 such that the complexified fiber f_ℂ^-1(b) ⊂ ℙ^3_ℂ is an irreducible surface forms a Zariski open subset of the parameter space 𝔸^9. All that remains to show is that this set is not empty, by producing a single orthorhombic stiffness tensor with an associated slowness polynomial that is irreducible.
Consider the orthorhombic stiffness tensor obtained by rounding out values for the stiffness tensor of olivine <cit.>, a common mineral in the Earth's mantle: b_11 = 321, b_12 = 68, b_13 = 72, b_22 = 197, b_23 = 77, b_33 = 234, b_44 = 64, b_55 = 77, b_66 = 79. Its corresponding homogenized slowness polynomial is P̃(p) = 1952643p_1^6 + 5308889p_1^4p_2^2 + 6230406p_1^4p_3^2 - 56159p_1^4p_0^2 + 4261967p_1^2p_2^4 + 9884047p_1^2p_2^2p_3^2 - 94721p_1^2p_2^2p_0^2 + 5189310p_1^2p_3^4 - 108883p_1^2p_3^2p_0^2 + 477p_1^2p_0^4 + 996032p_2^6 + 3365543p_2^4p_3^2 - 33227p_2^4p_0^2 + 3517205p_2^2p_3^4 - 73952p_2^2p_3^2p_0^2 + 340p_2^2p_0^4 + 1153152p_3^6 - 37922p_3^4p_0^2 + 375p_3^2p_0^4 - p_0^6. which is irreducible over  by Lemma <ref>, applied with d = 6 and p = 5; see <cit.>. As mentioned in <ref>, a general orthorhombic slowness polynomial can arise in more than one way from an orthorhombic stiffness tensor. We make this idea precise by proving Theorem <ref>. Inspection of the relations of the form (<ref>) for c_1,…,c_20 suggest that, up to a global scalar, the nine coefficients c_1,c_4,c_10,c_11,c_13,c_16,c_17,c_18,c_19 uniquely determine the quantities b_11,b_22,b_33,b_44,b_55,b_66. More precisely, we have c_1 = b_11b_55b_66, c_4 = -(b_11b_55 + b_11b_66 + b_55b_66), c_10 = b_11 + b_55 + b_66, c_11 = b_22b_44b_66, c_13 = -(b_22b_44 + b_22b_66 + b_44b_66), c_16 = b_22 + b_44 + b_66, c_17 = b_33b_44b_55, c_18 = -(b_33b_44 + b_33b_55 + b_44b_55), c_19 = b_33 + b_44 + b_55. Homogenizing the right hand sides above to make sure they all have degree 3, by introducing an extra variable r, we obtain c̃_1 = b_11b_55b_66, c̃_4 = -(b_11b_55 + b_11b_66 + b_55b_66)r, c̃_10 = (b_11 + b_55 + b_66)r^2, c̃_11 = b_22b_44b_66, c̃_13 = -(b_22b_44 + b_22b_66 + b_44b_66)r, c̃_16 = (b_22 + b_44 + b_66)r^2, c̃_17 = b_33b_44b_55, c̃_18 = -(b_33b_44 + b_33b_55 + b_44b_55)r, c̃_19 = (b_33 + b_44 + b_55)r^2. This allows us to define a rational map of projective spaces g ^6 ^8 [b_11,b_22,b_33,b_44,b_55,b_66,r] ↦ [c̃_1,c̃_4,c̃_10,c̃_11,c̃_13,c̃_16,c̃_17,c̃_18,c̃_19] Now we proceed as in the proof of Theorem <ref>: after resolving the indeterminacy locus[This locus has dimension 3, as one can verify with magma, for example.] Π of g through a blow-up process to get a surjective proper morphism f X → Y, upper semi-continuity of fiber dimension together with upper semi-continuity of degree for finite morphisms show there is a Zariski open subset of Y over which all fibers consist of a single point. This subset is not empty (and therefore is Zariski dense) because a Gröbner basis calculation shows that the nine coefficients of (<ref>) c_1 = 1952643, c_11 = 996032, c_17 = 1153152, c_4 = -56159, c_13 = -33227, c_18 = -37922, c_10 = 477, c_16 = 340, c_19 = 375 give rise to a unique set of values of b_11,b_22,b_33,b_44,b_55,b_66, namely those in (<ref>); see <cit.>. Next, we note that c_15 = b_23^2 + 2b_23b_44 - b_22b_33 - b_22b_55 - b_33b_66 - b_44b_55 - b_44b_66, so if we know c_15 and b_11,b_22,b_33,b_44,b_55,b_66, then there are two possible values for b_23, obtained by solving the above equation, interpreted as a quadratic in the single variable b_23. Similarly, the relations c_7 = b_12^2 + 2b_12b_66 -b_11b_22 - b_11b_44 - b_22b_55 - b_44b_66 - b_55b_66, c_9 = b_13^2 + 2b_13b_55 -b_11b_33 - b_11b_44 - b_33b_66 - b_44b_55 - b_55b_66 show that there are two possible values each for b_13 and b_12. This seems to suggest that there are up to eight different stiffness tensors that can give rise to an orthorhombic slowness surface. 
However, the solutions to the three quadratic equations above are coupled, and there are only four possible triples (b_12,b_13,b_23) for a given set of coefficients c_1,…,c_20. Put differently: b_12 is determined by the values of b_13 and b_23: to see this, we consider the ideal generated by the relations of the form (<ref>) for c_2, c_5, c_6 and c_7 ⟨ c_2 - (b_11b_22b_55 + b_11b_44b_66 - b_12^2b_55 - 2b_12b_55b_66), c_5 - (b_11b_22b_44 - b_12^2b_44 - 2b_12b_44b_66 + b_22b_55b_66), c_6 - (b_11b_22b_33 - b_11b_23^2 - 2b_11b_23b_44 - b_12^2b_33 + 2b_12b_13b_23 + 2b_12b_13b_44 + 2b_12b_23b_55 - 2b_12b_33b_66 + 2b_12b_44b_55 - b_13^2b_22 - 2b_13b_22b_55 + 2b_13b_23b_66 + 2b_13b_44b_66 + 2b_23b_55b_66 + 4b_44b_55b_66), c_7 - (-b_11b_22 - b_11b_44 + b_12^2 + 2b_12b_66 - b_22b_55 - b_44b_66 - b_55b_66) ⟩ in the polynomial ring A[b_12,c_2,c_5,c_6,c_7], where A = (b_11,b_22,b_33,b_44,b_55,b_66,b_13,b_23), and compute a Gröbner basis <cit.> for it under the lexicographic order b_12 > c_2 > c_5 > c_6 > c_7. Inspection of the basis gives the equality b_12 = 1/2βc_6 + b_33/2βc_7 - α/2β, where α := - b_11b_33b_44 - 2b_11b_44b_23 - b_11b_23^2 - b_22b_33b_55 - 2b_22b_55b_13 - b_22b_13^2 - b_33b_44b_66 - b_33b_55b_66 + 4b_44b_55b_66 + 2b_44b_66b_13 + 2b_55b_66b_23 + 2b_66b_13b_23 β := (b_44b_55 + b_44b_13 + b_55b_23 + b_13b_23), showing that b_12 is determined by c_6, c_7, and the stiffnesses b_11,b_22,b_33,b_44,b_55,b_66,b_13,b_23. The proof of Theorem <ref> shows that the coefficients c_1,…,c_20 of an orthorhombic slowness surface determine the stiffnesses b_11,b_22,b_33,b_44,b_55,b_66, and that there are four possible triples for the remaining stifnesses (b_12,b_13,b_23), (b_12,b_13^*,b_23^*), (b_12^*,b_13^*,b_23), (b_12^*,b_13,b_23^*), where b_12 + b_12^* = -2b_66 b_13 + b_13^* = -2b_55 b_23 + b_23^* = -2b_44, reflecting that roots of the quadratic equations (<ref>), (<ref>), and (<ref>) must add up to minus the coefficient of the linear term. We note that the three solutions (b_12,b_13^*,b_23^*), (b_12^*,b_13^*,b_23), and (b_12^*,b_13,b_23^*) are exactly the “anomalous companions” in <cit.>. Helbig and Carcione arrive at the existence of anomalous companions by making three quite reasonable assumptions that stiffness coefficients might satisfy in order for there to exist more than one set of stiffnesses that gives rise to the same slowness surface. In other words, their conditions give sufficient conditions for the existence of anomalous companions. Our work shows that for a generic orthorhombic slowness polynomial, the anomalous companions in <cit.> are the only possible anomalous companions. §.§.§ Positivity of anomalous companions For completeness, we summarize here the analysis in <cit.> characterizing which triples of stiffnesses (<ref>) give rise to positive orthorhombic stiffness tensors. Positivity requires that the 6× 6 matrix of stiffnesses [ b_11 b_12 b_13 ; b_12 b_22 b_23 ; b_13 b_23 b_33 ; b_44 ; b_55 ; b_66 ] be positive definite. By Sylvester's criterion <cit.>, this implies that b_11 > 0, b_22 > 0, b_33 > 0, b_44 > 0, b_55 > 0, b_66 > 0, and that the 2× 2 minors b_11b_22 - b_12^2, b_11b_33 - b_13^2, b_22b_33 - b_23^2 are also positive, implying the inequalities -√(b_11b_22) < b_12 < √(b_11b_22), -√(b_11b_33) < b_13 < √(b_11b_33), -√(b_22b_33) < b_23 < √(b_22b_33). In addition, the 3× 3 leading principal minor must also be positive: b_11b_22b_33 + 2b_12b_13b_23 - b_11b_23^2 - b_22b_13^2 - b_33b_12^2 > 0. 
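As a quick numerical companion to this discussion (our sketch, not the authors' code), one can form the four candidate triples for the olivine-like example above and test positive definiteness of the resulting 6×6 Voigt matrix directly; which of the anomalous companions survive this test is exactly what the analysis below characterizes.

import numpy as np

b11, b22, b33, b44, b55, b66 = 321.0, 197.0, 234.0, 64.0, 77.0, 79.0
b12, b13, b23 = 68.0, 72.0, 77.0
# anomalous companions: the second root of each quadratic, e.g. b12* = -2*b66 - b12
b12s, b13s, b23s = -2*b66 - b12, -2*b55 - b13, -2*b44 - b23

triples = [(b12, b13, b23), (b12, b13s, b23s), (b12s, b13s, b23), (b12s, b13, b23s)]

def voigt_matrix(t12, t13, t23):
    # 6x6 Voigt matrix displayed above, with the chosen off-diagonal triple
    M = np.diag([b11, b22, b33, b44, b55, b66])
    M[0, 1] = M[1, 0] = t12
    M[0, 2] = M[2, 0] = t13
    M[1, 2] = M[2, 1] = t23
    return M

for t in triples:
    eigs = np.linalg.eigvalsh(voigt_matrix(*t))
    print(t, "positive definite:", bool(eigs.min() > 0))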
Let x := b_12/√(b_11b_22), y := b_13/√(b_11b_33), and z := b_23/√(b_22b_33) Then the conditions (<ref>) and (<ref>) become, respectively, -1 < x < 1 -1 < y < 1, -1 <z < 1 and 1 + 2xyz - x^2 - y^2 - z^2 > 0. The affine surface 1 + 2xyz - x^2 - y^2 - z^2 = 0 is the ubiquitous Cayley cubic surface! Positivity of an anomalous companion is equivalent to having the point corresponding to the companion lying inside the finite “tetrahedral” region in ^3 determined by the four singularities of the cubic surface. See Figure <ref>. § AN EXAMPLE OF UNIQUE RECONSTRUCTION IN DIMENSION 3 To perform the unique reconstruction of the stiffness tensor parameters for the slowness polynomial (<ref>), we perform a Gröbner basis calculation in magma, as follows: The output of this code is which shows not only that there is exactly one stiffness tensor that gives rise to the slowness polynomial (<ref>), but also recovers the components of this unique stiffness tensor in 0.16 seconds in a 2.3 GHz Quad-Core Intel Core I7 processor. olivinearticle author = Abramson, E. H., author = Brown, J. M., author = Slutsky, L. J., author = Zaug, J., title = The elastic constants of San Carlos olivine to 17 GPa, journal = J. Geophys. Res., volume = 102, number = B6, pages = 12253–12263, year = 1997, Ammaribook author=Ammari, H., author=Bretin, E., author=Garnier, J., author=Kang, H., author=Lee, H., author=Wahab, A., title=Mathematical methods in elasticity imaging, publisher=Princeton University Press, Princeton, NJ, date=2015, pages=viii+230, Berettaarticle author=Aspri, A., author=Beretta, E., author=Rosset, E., title=On an elastic model arising from volcanology: an analysis of the direct and inverse problem, journal=J. Differential Equations, volume=265, date=2018, pages=6400–6423, ZariskiFrameswebpage author=Barakat, M., author=Kuhmichel, T., author=Lange-Hegermann, M., title=ZariskiFrames, url=https://homalg-project.github.io/pkg/ZariskiFrames, year=2019 Barakatarticle author=Barakat, M., author=Lange-Hegermann, M., title=An algorithmic approach to Chevalley's theorem on images of rational morphisms between affine varieties, journal=Math. Comp., volume=91, date=2021, number=333, pages=451–490, Barceloarticle author=Barceló, J., author=Folch-Gabayet, M., author=Pérez-Esteva, S., author=Ruiz, A., author=Vilela, M., title=Uniqueness for inverse elastic medium problems, journal=SIAM J. Math. Anal., volume=50, date=2018, pages=3939–3962, magmaarticle author=Bosma, W., author=Cannon, J., author=Playoust, C., title=The Magma algebra system. I. The user language, note=Computational algebra and number theory (London, 1993), journal=J. Symbolic Comput., volume=24, date=1997, number=3-4, pages=235–265, Beretta2article author=Beretta, E., author=Francini, E., author=Vessella, S., title=Uniqueness and Lipschitz stability for the identification of Lamé parameters from boundary measurements, journal=Inverse Probl. Imaging, volume=8, date=2014, pages=611–644, BraamDuistermaat_1993article author = Braam, P. J., author = Duistermaat, J. J., title = Normal forms of real symmetric systems with multiplicity, journal=Indag. Math. (N.S.), volume=4, date=1993, number=4, pages=407–421, albitearticle author = Brown, J. M., author = Abramson, E. H., author = Angel, R. J., title = Triclinic elastic constants for low albite, journal = Phys. Chem. 
Minerals, volume = 33, number = 4, pages = 256–265, year = 2006, Buragoarticle author=Burago, D., author=Ivanov, S., title=Boundary rigidity and filling volume minimality of metrics close to a flat one, journal=Ann. of Math. (2), volume=171, date=2010, number=2, CdHKUarticle author=Caday, P., author=de Hoop, M. V., author=Katsnelson, V., author=Uhlmann, G., title=Recovery of discontinuous Lamé parameters from exterior Cauchy data, journal=Comm. Partial Differential Equations, volume=46, date=2021, number=4, pages=680–715, CHondaNakamura_2018article author = Cârstea, C. I., author = Honda, N., author = Nakamura, G., title=Uniqueness in the inverse boundary value problem for piecewise homogeneous anisotropic elasticity, journal=SIAM J. Math. Anal., volume=50, date=2018, number=3, pages=3291–3302, CNakamuraOksanen_2020article author = Cârstea, C. I., author = Nakamura, G., author = Oksanen, L., title=Uniqueness for the inverse boundary value problem of piecewise homogeneous anisotropic elasticity in the time domain, journal=Trans. Amer. Math. Soc., volume=373, date=2020, number=5, pages=3423–3443, raytheory-examplebook author=Červený, V., title=Seismic ray theory, publisher=Cambridge University Press, Cambridge, date=2001, pages=viii+713, isbn=0-521-36671-2, ColindeVerdiere_2003article author = Colin de Verdière, Y., title=The level crossing problem in semi-classical analysis. I. The symmetric case, booktitle=Proceedings of the Internat. Conf. in Honor of Frédéric Pham (Nice, 2002), journal=Ann. Inst. Fourier (Grenoble), volume=53, date=2003, number=4, pages=1023–1054, ColindeVerdiere_2004article author = Colin de Verdière, Y., title=The level crossing problem in semi-classical analysis. II. The Hermitian case, journal=Ann. Inst. Fourier (Grenoble), volume=54, date=2004, number=5, pages=1423–1441, CLObook author=Cox, D. A., author=Little, J., author=O'Shea, D., title=Ideals, varieties, and algorithms, series=Undergraduate Texts in Mathematics, edition=4, publisher=Springer, Cham, date=2015, pages=xvi+646, dHIK:layered-rigidityarticle author = de Hoop, M. V., author = Ilmavirta, J., author = Katsnelson, V., title = Spherically symmetric terrestrial planets with discontinuities are spectrally rigid, year = 2023, note = Preprint, arXiv:2302.14158 dHIL:Finsler-Dixarticle author = de Hoop, M. V., author = Ilmavirta, J., author = Lassas, M., title = Reconstruction along a geodesic from sphere data in Finsler geometry and anisotropic elasticity, year = 2021, note = Preprint, arXiv:2102.10383 dHILS:BSRarticle author = de Hoop, M. V., author = Ilmavirta, J., author = Lassas, M., author = Saksala, T., title=A foliated and reversible Finsler manifold is determined by its broken scattering relation, journal=Pure Appl. Anal., volume=3, date=2021, number=4, pages=789–811, dHILS:BDFarticle author = de Hoop, M. V., author = Ilmavirta, J., author = Lassas, M., author = Saksala, T., title = Determination of a compact Finsler manifold from its boundary distance map and an inverse problem in elasticity, journal = Comm. Anal. Geom., note = To appear., year = 2019, this-paperarticle author = de Hoop, M. V., author = Ilmavirta, J., author = Lassas, M., author = Várilly-Alvarado, A., title = Reconstruction of generic anisotropic stiffness tensors from partial data around one polarization, year = 2023, note = Preprint, arXiv (this article). See supplementary files on arXiv for magma code. dHNakamuraZhai_2019article author = de Hoop, M. 
V., author = Nakamura, G., author = Zhai, J., title=Unique recovery of piecewise analytic density and stiffness tensor from the elastic-wave Dirichlet-to-Neumann map, journal=SIAM J. Appl. Math., volume=79, date=2019, pages=2359–2384, dHUSZ:MRTarticle author = de Hoop, M. V., author = Uhlmann, G., author = Saksala, T., author = Zhai, J., title=Generic uniqueness and stability for the mixed ray transform, journal=Trans. Amer. Math. Soc., volume=374, date=2021, number=9, pages=6085–6144, dHUhlmannVasy_2020article author = de Hoop, M. V., author = Uhlmann, G., author = Vasy, A., title=Recovery of material parameters in transversely isotropic media, journal=Arch. Ration. Mech. Anal., volume=235, date=2020, number=1, pages=141–165, dHSaksalaZhai_2019article author=de Hoop, M. V., author=Saksala, T., author=Zhai, J., title=Mixed ray transform on simple 2-dimensional Riemannian manifolds, journal=Proc. Amer. Math. Soc., volume=147, date=2019, number=11, pages=4901–4913, Dencker_1988article author = Dencker, N., title=On the propagation of polarization in conical refraction, journal=Duke Math. J., volume=57, date=1988, number=1, pages=85–134, EskinRalston_1article author = Eskin, G., author = Ralston, J., title=On the inverse boundary value problem for linear isotropic elasticity, journal=Inverse Problems, volume=18, date=2002, number=3, pages=907–921, Greenleaf2article author=Felea, R., author=Greenleaf, A., title=An FIO calculus for marine seismic imaging: folds and cross caps, journal=Comm. Partial Differential Equations, volume=33, date=2008, number=1-3, pages=45–77, Greenleaf3article author=Felea, R., author=Greenleaf, A., title=Fourier integral operators with open umbrellas and seismic inversion for cusp caustics, journal=Math. Res. Lett., volume=17, date=2010, number=5, pages=867–886, Greenleaf1article author=Felea, R., author=Greenleaf, A., author=Pramanik, M., title=An FIO calculus for marine seismic imaging, II: Sobolev estimates, journal=Math. Ann., volume=352, date=2012, number=2, pages=293–337, GWAGbook author=Görtz, U., author=Wedhorn, T., title=Algebraic geometry I. Schemes—with examples and exercises, series=Springer Studium Mathematik—Master, publisher=Springer Spektrum, Wiesbaden, date=2020, pages=vii+625, EGAIV.3article author=Grothendieck, A., title=Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas. III, journal=Inst. Hautes Études Sci. Publ. Math., number=28, date=1966, pages=255, label=EGAIV-3, Hartshornebook author=Hartshorne, R., title=Algebraic geometry, note=Graduate Texts in Mathematics, No. 52, publisher=Springer-Verlag, New York-Heidelberg, date=1977, pages=xvi+496, isbn=0-387-90244-9, HC2009article author=Helbig, K., author=Carcione, J., title=Anomalous polarization in anisotropic media, journal=Eur. J. Mech. A/Solids., volume=28, date=2009, pages=704–711 Hormandervol1book author=Hörmander, L., title=The analysis of linear partial differential operators. I, series=Grundlehren der mathematischen Wissenschaften, volume=256, edition=2, publisher=Springer-Verlag, Berlin, date=1990, pages=xii+440, isbn=3-540-52345-6, I:degeneratearticle author = Ilmavirta, J., title = Every slowness surface is singular, note = In preparation. 
IM:Finsler-XRTarticle author = Ilmavirta, J., author = Mönkkönen, K., title = The geodesic ray transform on spherically symmetric reversible Finsler manifolds, year = 2022, note = Preprint, arXiv:2203.16886 IUY_1article author = Imanuvilov, O., author = Uhlmann, G., author = Yamamoto, M., title = On Reconstruction of Lamé Parameters from Partial Cauchy Data in Three Dimensions, journal = Inverse Problems, volume = 28, pages = 125002, year = 2012 Liubook author=Liu, Qing, title=Algebraic geometry and arithmetic curves, series=Oxford Graduate Texts in Mathematics, volume=6, publisher=Oxford University Press, date=2002, pages=xvi+576, Mazzucato1article author=Mazzucato, A., author=Rachele, L., title=Partial uniqueness and obstruction to uniqueness in inverse problems for anisotropic elastic media, journal=J. Elasticity, volume=83, date=2006, number=3, pages=205–245, RacheleMazzucato_2007article title = On uniqueness in the inverse problem for transversely isotropic elastic media with a disjoint wave mode, journal = Wave Motion, volume = 44, number = 7, pages = 605–625, year = 2007, issn = 0165-2125, author = Mazzucato, A., author = Rachele, L., Mazzucato2article author=Mazzucato, A., author=Rachele, L., title=On transversely isotropic elastic media with ellipsoidal slowness surfaces, journal=Math. Mech. Solids, volume=13, date=2008, number=7, pages=611–638, MelroseUhlmann_1979article author = Melrose, R., author = Uhlmann, G., title=Lagrangian intersection and the Cauchy problem, journal=Comm. Pure Appl. Math., volume=32, date=1979, number=4, pages=483–519, Meyerbook author=Meyer, C., title=Matrix analysis and applied linear algebra, publisher=SIAM, Philadelphia, PA, date=2000, pages=xii+718, isbn=0-89871-454-0, Michelarticle author=Michel, R., title=Sur la rigidité imposée par la longueur des géodésiques, language=French, journal=Invent. Math., volume=65, date=1981/82, number=1, pages=71–83, review=636880, doi=10.1007/BF01389295, NakamuraTUhlmann_1999article author = Nakamura, G., author = Tanuma, K., author = Uhlmann, G., title=Layer stripping for a transversely isotropic elastic medium, journal=SIAM J. Appl. Math., volume=59, date=1999, number=5, pages=1879–1891, NakamuraUhlmann_1article author = Nakamura, G., author = Uhlmann, G., title=Global uniqueness for an inverse boundary problem arising in elasticity, journal=Invent. Math., volume=118, date=1994, number=3, pages=457–474, PSUbook author=Paternain, G., author=Salo, M., author=Uhlmann, G., title=Geometric inverse problems—with emphasis on two dimensions, volume=204, publisher=Cambridge University Press, date=2023, pages=xxiv+344, SacksYakhno_1998article title=The inverse problem for a layered anisotropic half space, journal=J. Math. Anal. Appl., volume=228, date=1998, number=2, pages=377–398, author = Sacks, P. E., author = Yakhno, V. G. stacks-projectwebpage author=The Stacks project authors, title=The Stacks project, date=2020, url=https://stacks.math.columbia.edu, label=S20 SUarticle author=Stefanov, P., author=Uhlmann, G., title=Boundary rigidity and stability for generic simple metrics, journal=J. Amer. Math. Soc., volume=18, date=2005, number=4, pages=975–1003, SUV1article author=Stefanov, P., author=Uhlmann, G., author=Vasy, A., title=Local and global boundary rigidity and the geodesic X-ray transform in the normal gauge, journal=Ann. of Math. (2), volume=194, date=2021, number=1, pages=1–95, SUV2article author=Stefanov, P., author=Uhlmann, G., author=Vasy, A., title=Boundary rigidity with partial data, journal=J. Amer. Math. 
Soc., volume=29, date=2016, number=2, pages=299–332, SUVarticle author=Stefanov, P., author=Uhlmann, G., author=Vasy, A., title=Local recovery of the compressional and shear speeds from the hyperbolic DN map, journal=Inverse Problems, volume=34, date=2018, number=1, pages=014003, 13, SUVIIarticle author=Stefanov, P., author=Uhlmann, G., author=Vasy, A., title=The transmission problem in linear isotropic elasticity, journal=Pure Appl. Anal., volume=3, date=2021, number=1, pages=109–161, Uhlmann_1982article author = Uhlmann, G., title=Light intensity distribution in conical refraction, journal=Comm. Pure Appl. Math., volume=35, date=1982, pages=69–80, UVarticle author=Uhlmann, G., author=Vasy, A., title=The inverse problem for the local geodesic ray transform, journal=Invent. Math., volume=205, date=2016, number=1, pages=83–120, Vakilwebpage author=Vakil, R., title=The Rising Sea: Foundations of Algebraic Geometry, url=http://math.stanford.edu/ vakil/216blog/FOAGnov1817public.pdf, year=2018 Heijdenthesis author=van der Heijden, J., title=Propagation of transient elastic waves in stratified anisotropic media, note = PhD thesis, Technische Universiteit Delft, year = 1987 Yedlingarticle title = The wave front in a homogeneous anisotropic medium, author = Yedlin, M. J., year = 1980, journal = Bull. Seismol. Soc. Am., volume = 70, number = 6, pages= 2097–2102, Wiechertarticle title = Über Erdbebenwellen, author = Wiechert, E., author = Zoeppritz, K., year = 1907, journal = Nachr. Koenigl. Geselschaft Wiss. Göttingen, volume = 4, pages= 415–-549, Zou_2021article title = Microlocal Methods for The Elastic Travel Time Tomography Problem for Transversely Isotropic Media, author = Zou, Y., year = 2021, journal = Nachr. Koenigl. Geselschaft Wiss. Göttingen, volume = 4, pages= 415–549,
http://arxiv.org/abs/2307.01595v1
20230704093503
Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases
[ "Yingji Li", "Mengnan Du", "Xin Wang", "Ying Wang" ]
cs.CL
[ "cs.CL", "cs.AI" ]
As the representation capability of Pre-trained Language Models (PLMs) improves, there is growing concern that they will inherit social biases from unprocessed corpora. Most previous debiasing techniques used Counterfactual Data Augmentation (CDA) to balance the training corpus. However, CDA slightly modifies the original corpus, limiting the representation distance between different demographic groups to a narrow range. As a result, the debiasing model easily fits the differences between counterfactual pairs, which affects its debiasing performance with limited text resources. In this paper, we propose an adversarial training-inspired two-stage debiasing model using Contrastive learning with Continuous Prompt Augmentation (named CCPA) to mitigate social biases in PLMs' encoding. In the first stage, we propose a data augmentation method based on continuous prompt tuning to push farther the representation distance between sample pairs along different demographic groups. In the second stage, we utilize contrastive learning to pull closer the representation distance between the augmented sample pairs and then fine-tune PLMs' parameters to get debiased encoding. Our approach guides the model to achieve stronger debiasing performance by adding difficulty to the training process. Extensive experiments show that CCPA outperforms baselines in terms of debiasing performance. Meanwhile, experimental results on the GLUE benchmark show that CCPA retains the language modeling capability of PLMs.

§ INTRODUCTION

Pre-trained Language Models (PLMs) have demonstrated outstanding performance in recent years and have been widely used in natural language understanding tasks <cit.>. However, the powerful language modeling capability enables PLMs to learn good representations from large-scale training corpora while also capturing human-like social biases. Recent studies have demonstrated that the representations encoded by PLMs learn social biases specific to demographic groups (e.g., gender, race, religion), and that these biases can be amplified in downstream tasks, leading to unfair outcomes and adverse social effects <cit.>. As a result, mitigating social biases in PLMs' encoding can significantly improve the fairness of NLP systems <cit.>. Most existing debiasing techniques first need to construct sample pairs using Counterfactual Data Augmentation (CDA) <cit.> to balance the training corpora. The general approach of CDA is to swap attribute words (e.g., he/she, man/woman) specific to different demographic groups in the original corpus. For example, RCDA <cit.> uses a generator to generate a large number of antisense sentences and then uses a discriminator to evaluate the quality of the original and antisense samples jointly. FairFil <cit.> obtains a pair of positive sample sentences by replacing the attribute words in the training corpora with their antonyms and then uses contrastive learning to train a filter for debiasing. Auto-Debias <cit.> uses pairs of attribute words as training corpora, amplifies the bias between sample pairs by searching biased prompt texts in the Wikipedia vocabulary, and then performs semantic alignment using Jensen-Shannon divergence. These methods aim to mitigate social biases between different demographic groups by narrowing the representation distance between sample pairs.
However, CDA slightly modifies the original corpus, limiting the representation distance between different demographic groups to a narrow range. As a result, the debiasing model is easy to overfit the difference between counterfactual pairs, which affects its learning ability with limited text resources. As shown in Figure <ref>, it is difficult for PLMs to achieve the ideal debiasing performance for newly input samples with greater difficulty. In this work, we propose a two-stage debiasing method using Contrastive learning with Continuous Prompt Augmentation (named CCPA) to mitigate social biases in PLMs' encoding. Inspired by adversarial training, our approach improves the debiasing ability of PLMs by first amplifying and then attenuating the bias between different demographic groups. Specifically, we first use CDA to replace attribute words in the original training corpus to construct counterfactual pairs corresponding to different demographic groups. In the first stage, we augment the positive sample pairs with continuous prompt tuning to increase the distance between them to amplify the biases between different demographic groups. In the second stage, we utilize contrastive learning to pull the distance between the positive sample pairs to attenuate the biases between different demographic groups. CCPA increases the difficulty of model fitting by expanding the representation space between sample pairs. We believe that difficult learning experiences make the model more powerful, thus improving the debiasing ability of PLMs training in corpora with limited resources. Our main contributions are as follows: * We propose the CCPA debiasing framework that combines prompt tuning and contrastive learning to learn a debiased PLM representation. The PLM's parameters are fixed in the first stage, and a generator encoding continuous prompts is trained. In the second stage, the prompts are fixed, and the PLM's parameters are fine-tuned using contrastive learning. * We propose data augmentation using continuous prompts to achieve excellent debiasing performance using small training data rather than relying on a large external corpus. Given that continuous prompts may cause the representation distance between sample pairs to be too far apart, causing the semantic space to degrade, we propose constraining the prompt tuning using the Mahalanobis Distance to keep the semantic space as stable as possible. * We train CCPA on several real-world corpora and mitigate bias on the most common gender bias. The results on BERT and DistilBERT show that CCPA is superior to state-of-the-art models. In addition, we test the downstream tasks on the GLUE benchmark, and show that CCPA retains the language modeling capability while improving the PLMs' fairness. § METHODOLOGY In this section, we propose the Contrastive learning with Continuous Prompt Augmentation (CCPA) framework to mitigate the social bias in the encoding of PLMs specific to the most common gender bias. Our proposed CCPA consists of two stages: 1) Continuous Prompt Tuning and 2) Fine-Tuning with Contrastive Learning. The framework of CCPA is shown in Figure <ref>. §.§ Pre-Processing based on CDA First, we pre-process the training corpus with imbalanced samples using Counterfactual Data Augmentation (CDA). Given a list of attribute words specific to gender bias,[We only consider the binary gender direction and use the same list of gender-specific attribute words as <cit.>.] 
for each attribute word (e.g., male/female), we match sentences containing an attribute word in the training corpus. The attribute word is then replaced with the opposite word in a different gender direction (e.g., male is replaced by female), leaving the other words unchanged. We thus obtain the pre-processed training corpus 𝒮={(s_1,s_1'),(s_2,s_2'),⋯,(s_N,s_N')} consisting of N counterfactual pairs (s_i,s_i') along different gender directions.

§.§ Continuous Prompt Tuning

Prompt-based learning is similar to giving task instructions to the model, guiding it to learn knowledge more directly <cit.>. A lot of work utilizes manually constructed prompts <cit.> or automatically searched discrete prompts <cit.> to assist language models. However, manually constructed templates rely heavily on the designers' experience, and automatically searched prompts are limited by the search space <cit.>. Instead of limiting the prompts to human-interpretable natural language, continuous prompts <cit.> guide the model directly within its embedding space. Meanwhile, continuous prompts tune their own parameters, removing the constraint of templates being parameterized by the PLM's parameters. Inspired by adversarial training, we believe that increasing the difficulty of the training process can guide the model to acquire a stronger learning ability. To achieve this goal, we propose a data augmentation method based on continuous prompt tuning to further push apart the differences between counterfactual pairs. This data augmentation method adds difficult information to the model by concatenating, onto the counterfactual pairs, embeddings that amplify the bias across different demographic groups. Given a template T={[p_1],[p_2],⋯,[p_m], s}, where s denotes a sentence, [p_j] is a virtual token represented as [PROMPT], and the m virtual tokens form a prompt sequence 𝒫. For each counterfactual pair (s_i,s_i')∈𝒮 obtained by data pre-processing, we concatenate the same prompt sequence 𝒫 at the head of each sentence (see Figure <ref>). The augmented sample pair is denoted by (ŝ_i,ŝ_i') and is fed into a PLM to obtain the sentence representation. Formally, let M denote a PLM whose encoder E(·) encodes an input sentence ŝ_i and outputs a sentence embedding 𝐳_i=E(ŝ_i). Similarly, 𝐳'_i=E(ŝ_i'). In order to obtain continuous prompt embeddings, we train a generator G(·) to encode the prompt sequence 𝒫. Following P-Tuning <cit.>, we choose a bidirectional long short-term memory network (LSTM) followed by a two-layer multilayer perceptron (MLP) with a ReLU activation layer. The embedding 𝐡_j of each virtual token [p_j] in the prompt sequence is encoded by G(·) as follows: 𝐡_j = G([𝐡_j^→ : 𝐡_j^←]) = G([LSTM(𝐡_1:j) : LSTM(𝐡_j:m+1)]), where 𝐡_j^→ and 𝐡_j^← denote the forward and backward LSTM hidden states and [· : ·] denotes concatenation. Afterwards, we insert the continuous prompt embeddings {𝐡_1,𝐡_2,⋯,𝐡_m} into the corresponding positions of the sentence embeddings 𝐳_i to obtain the sentence representation pairs (𝐳_i,𝐳_i'). In this stage, our training objective is to push apart the representations 𝐳_i and 𝐳_i' of the augmented sample pair (ŝ_i,ŝ_i'). Briefly, we take the cosine similarity between sentence representations as the loss function, defined as follows: ℒ_cos = 𝐳·𝐳' / (‖𝐳‖ ‖𝐳'‖) = ∑_i=1^n 𝐳_i·𝐳_i' / (√(∑_i=1^n 𝐳_i^2) √(∑_i=1^n 𝐳_i'^2)), where 𝐳 and 𝐳' denote the sentence representations with different sensitive attributes within a batch of size n, respectively.
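A minimal PyTorch sketch of this push-apart objective, following our reading of the displayed formula (the authors' released implementation may differ), is:

import torch

def cosine_push_loss(z: torch.Tensor, z_prime: torch.Tensor) -> torch.Tensor:
    # L_cos = <z, z'> / (||z|| ||z'||), with the whole batch treated as one
    # long vector, which is how we read the display above.  Minimizing this
    # value pushes the paired representations apart before stage two.
    return (z * z_prime).sum() / (z.norm() * z_prime.norm())

# usage sketch with random stand-ins for the PLM outputs of a batch of
# counterfactual pairs (batch size 32, hidden size 768)
z = torch.randn(32, 768, requires_grad=True)
z_prime = torch.randn(32, 768)
loss = cosine_push_loss(z, z_prime)
loss.backward()

The Mahalanobis regularizer introduced next would be added to this term (weighted by α) to form the stage-one objective ℒ_PT.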
The representation distance between the sample pairs is enlarged as the similarity decreases along the gradient, thus amplifying the bias information between different genders. Considering that the sentence representations, which follow a high-dimensional linear distribution, are not independently and identically distributed across dimensions, relying only on the Euclidean distance during training may cause the sentence representations to deviate from the original distribution and thus destroy the semantic information. To constrain the change of the sentence representations within the original distribution, the Mahalanobis distance is taken as the regularization term of the loss function: ℒ_mahal=√((𝐳-𝐒)^⊤Σ^-1(𝐳-𝐒)), where 𝐳 is the representation of a batch of samples with concatenated prompt embeddings, 𝐒 is the representation of the entire pre-processed training set without concatenated prompt embeddings, and Σ is the covariance matrix of 𝐒. The Mahalanobis distance is a correction of the Euclidean distance, which removes the assumption that the dimensions are independent and identically distributed. With the constraint of the Mahalanobis distance, the augmented samples of each batch can vary within the distribution range of the original training data so as to maintain the semantics. The overall loss function of the continuous prompt tuning stage is defined as: ℒ_PT=ℒ_cos+α×ℒ_mahal, where α is a hyperparameter that adjusts the weight of ℒ_mahal. In the gradient descent process of ℒ_PT, we only adjust the parameters of the generator G(·) and fix the PLM's parameters to obtain the continuous prompt embeddings that further amplify the bias between different sensitive attributes.

§.§ Fine-Tuning with Contrastive Learning

We then use contrastive learning to mitigate the social bias in PLMs' encoding for different demographic groups. Contrastive learning <cit.> is a task-agnostic self-supervision method that learns data features by minimizing a contrastive loss to maximize the similarity of the representation vectors of positive sample pairs <cit.>. Specifically, we encourage as much consistency as possible among representations of different sensitive attributes by maximizing the similarity of the augmented counterfactual pairs. Noise Contrastive Estimation <cit.> is usually used as the contrastive loss function; given the augmented sample pairs of a batch {(ŝ_i,ŝ_i')}_i=1^n, it is defined as follows: ℒ_nce = 1/n ∑_i=1^n log [ e^sim(𝐳_i,𝐳_i')/τ / ( 1/n ∑_j=1^n e^sim(𝐳_i,𝐳_j)/τ ) ], where (𝐳_i,𝐳_i')=(E(ŝ_i),E(ŝ_i')), τ is a temperature hyperparameter, and sim(·,·) denotes the similarity function, usually cosine similarity. During training, we only fine-tune the PLM's parameters and fix the embeddings of the continuous prompts. By maximizing ℒ_nce, differences in the encoding of PLM outputs specific to different demographic groups are eliminated, resulting in representations independent of sensitive attributes. Considering that attenuating biases in the encoding may affect the PLM's language modeling capability, we add a Masked Language Modeling (MLM) loss during the fine-tuning stage to aid PLM training <cit.>. Following previous work <cit.>, we randomly mask tokens in the training texts with a 15% probability.[In practice, the chosen masked token has an 80% chance of being masked, a 10% chance of being replaced with another word, and a 10% chance of remaining unchanged.] Our objective is to train the encoder to predict the masked tokens through contextual semantics, thereby preserving the language modeling capability of PLMs.
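Before assembling the overall fine-tuning objective, here is a hedged PyTorch sketch of the contrastive term as we read the display above (how negatives are gathered in the authors' code may differ):

import math
import torch
import torch.nn.functional as F

def nce_objective(z: torch.Tensor, z_prime: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # z, z_prime: (n, d) representations of the augmented counterfactual pairs.
    # Positives are the aligned rows (z_i, z'_i); the denominator averages the
    # similarities of z_i against all in-batch z_j, as in the display above.
    z_n = F.normalize(z, dim=-1)
    zp_n = F.normalize(z_prime, dim=-1)
    pos = (z_n * zp_n).sum(dim=-1) / tau              # sim(z_i, z'_i) / tau
    all_sims = (z_n @ z_n.t()) / tau                  # sim(z_i, z_j) / tau
    log_denom = torch.logsumexp(all_sims, dim=-1) - math.log(z.size(0))
    return (pos - log_denom).mean()                   # to be maximized

# in training one would, e.g., maximize this term while also optimizing the
# MLM loss, as combined in the overall objective given next.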
The overall loss function in the fine-tuning stage is defined as follows: ℒ_FT=ℒ_nce+β×ℒ_mlm, where β is a hyperparameter that controls the weight of ℒ_mlm. Our overall algorithm is given in Algorithm <ref>. § EXPERIMENTS In this section, we conduct experiments to evaluate the performance of CCPA, in order to answer the following three research questions. Q1. How effective is CCPA in mitigating social biases in PLMs' encoding? Q2. How does each component affect CCPA? Q3. Will CCPA preserve the language modeling capability of PLMs? §.§ Experimental Setup §.§.§ Attribute Word List & Datasets Following <cit.>, our gender attribute word list is set to: {MALE, FEMALE}={(man, woman), (boy, girl), (he, she), (father, mother), (son, daughter), (guy, gal), (male, female), (his, her), (himself, herself), (John, Mary)}. Following <cit.>, we select five real-world datasets as the initial training corpus, namely Stanford Sentiment Treebank <cit.>, POM <cit.>, WikiText-2 <cit.>, Reddit <cit.> and MELD <cit.>. We set the maximum sentence length to 100, and the pre-processed training corpus contained 10,510 sentences. §.§.§ Baselines & Implementation Details We select seven recent task-agnostic debiasing models as baselines. CDA <cit.>, Dropout <cit.>, Sent-Debias <cit.>, FairFil <cit.>, INLP <cit.> and MABEL <cit.> apply counterfactual data augmentation to sentence-level debiasing, where FairFil and MABEL train the model within a contrastive learning framework. Auto-Debias <cit.> directly uses the attribute word list and the stereotype word list as the training corpus. We perform the main experiments on BERT <cit.> and compare CCPA to all baseline models. We also test debiasing performance on DistilBERT <cit.> and ELECTRA <cit.>. All checkpoints use bert-base-uncased, distilbert-base-uncased, and google/electra-base-generator implemented by the Huggingface Transformers library <cit.>. In the continuous prompt tuning stage, the learning rate is set to 1e^-5, the batch size is set to 64, and α=0.005. Following P-Tuning <cit.>, the virtual token template of the continuous prompts is a triplet, with the length of each element selected from {1,2,3}. In the fine-tuning stage, the learning rate is set to 1e^-4. The batch size is set to 32, β=1 and τ=1. We report the average of the results of three runs over 20 epochs. To compare with the baseline models more fairly, we apply the same attribute word lists and training datasets to CDA and Dropout as to CCPA. The implementation code for CDA, Dropout, Sent-Debias, and INLP is provided by <cit.>, and the implementation code for FairFil and Auto-Debias is provided by the authors. For MABEL, we report the results from its original paper. §.§ Evaluation Metrics We measure debiasing performance using three common internal bias evaluation metrics and two external bias evaluation metrics. §.§.§ Internal Bias Evaluation Metrics The Sentence Encoder Association Test (SEAT) <cit.> uses sentence templates to evaluate the association between different sensitive-attribute demographic groups and target concepts. Given the attribute word lists 𝒜 and ℬ and the target word lists 𝒳 and 𝒴, the results are presented as an effect size, defined as: d=[μ({s(x,𝒜,ℬ)}_x∈𝒳)-μ({s(y,𝒜,ℬ)}_y∈𝒴)]/σ({s(t,𝒜,ℬ)}_t∈𝒳∪𝒴), where μ(·) is the mean function, σ(·) is the standard deviation, and s(w,𝒜,ℬ) is the bias degree defined as: s(w,𝒜,ℬ)=μ_a∈𝒜(cos(w,a))-μ_b∈ℬ(cos(w,b)). The gender-specific subsets of SEAT are 6, 6b, 7, 7b, 8, and 8b.
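A short sketch of how this effect size can be computed from sentence-template embeddings; the embedding extraction itself is assumed to have been done elsewhere, and the function names are ours.

import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_degree(w, A, B):
    # s(w, A, B): mean cosine similarity to attribute set A minus that to attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def seat_effect_size(X, Y, A, B):
    # effect size d over target embeddings X, Y and attribute embeddings A, B
    sx = [bias_degree(x, A, B) for x in X]
    sy = [bias_degree(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)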
We report the effect size of the debiasing models on each subset, as well as the average of the absolute values over the six subsets. StereoSet <cit.> uses fill-in-the-blank templates to investigate the stereotype associations of a PLM. The Language Modeling Score (LM) is the percentage of examples in which the model, given an incomplete contextual sentence, prefers a meaningful (stereotype or anti-stereotype) completion over a meaningless one. The Stereotype Score (SS) is the percentage of examples in which the model chooses the stereotype over the anti-stereotype. The Idealized Context Association Test (ICAT) is a comprehensive evaluation index combining LM and SS. Crowdsourced Stereotype Pairs (CrowS-Pairs) <cit.> is a dataset containing pairs of stereotype sentences and anti-stereotype sentences. We report the ratio of mask token probabilities assigned to stereotype sentences rather than anti-stereotype sentences, denoted as CrowS. §.§.§ External Bias Evaluation Metrics Bias-in-Bios <cit.> is a biography dataset in which each sample is labeled with gender (male or female) and occupation (28 categories). We fine-tune the debiased model on the training set with the goal of predicting occupations. The overall Accuracy measures task performance, and the individual Accuracy results for male and female samples measure gender fairness. Furthermore, we report the gap between the true positive rates of the male predictions and the female predictions, denoted GAP_TPR, as well as the root mean square of the per-category true-positive-rate differences, denoted GAP_RMS. The closer their scores are to 0, the better. They are defined as follows: GAP_TPR = |TPR_M - TPR_F|, GAP_RMS = √((1/|C|)∑_y∈ C (GAP_TPR,y)^2). Bias-NLI <cit.> fills gender words and stereotyped occupation words into sentence templates to form sentence pairs, and the training goal is to infer whether a sentence pair is neutral or not. It defines three metrics to reflect the fairness of the model: 1) Net Neutral (NN), the average probability of the neutral label across all sentence pairs; 2) Fraction Neutral (FN), the proportion of sentence pairs labeled as neutral; 3) Threshold:τ (T:τ), the fraction of sentence pairs whose neutral probability is above τ. §.§ Debiasing Performance Analysis §.§.§ Internal Debiasing Results Table <ref> shows the experimental results of the three internal bias evaluation metrics for CCPA and the baseline models on BERT, DistilBERT, and ELECTRA. We also report results for the biased BERT, DistilBERT, and ELECTRA as references. The results show that CCPA achieves a better balance between the PLMs' fairness and language modeling capability than the baseline models. For BERT, CCPA reduces the average effect size from 0.621 to 0.249, increases ICAT from 66.86 to 73.28, and reduces CrowS from 57.86 to 51.57. Our method achieves optimal results on the three test subsets SEAT 6, 7, and 8b and on the average effect size, and also improves considerably on the other test subsets. The results on StereoSet show that CCPA does not weaken BERT's language modeling ability but slightly improves it. Although LM and SS do not achieve optimal results, our comprehensive index ICAT is better than that of the other models. Both FairFil and MABEL debias via contrastive learning, but their overall performance is not ideal. Although FairFil is outstanding in terms of SS, it seriously damages BERT's language modeling ability, possibly because it only considers sentence-level representations and does not retain token-level encoding ability.
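The two Bias-in-Bios gap metrics defined above can be computed with a short routine such as the following. This is a sketch under our assumptions: a binary protected attribute, and the overall group TPRs taken as macro-averages over the occupation classes, a detail the text does not pin down.

import numpy as np

def gap_metrics(y_true, y_pred, is_male):
    # GAP_TPR and GAP_RMS for occupation prediction with a binary protected attribute
    classes = np.unique(y_true)

    def tpr(mask):
        # per-occupation true positive rate restricted to one gender group
        return np.array([np.mean(y_pred[mask & (y_true == c)] == c) for c in classes])

    tpr_m, tpr_f = tpr(is_male), tpr(~is_male)
    gap_tpr = abs(np.mean(tpr_m) - np.mean(tpr_f))
    gap_rms = np.sqrt(np.mean((tpr_m - tpr_f) ** 2))
    return gap_tpr, gap_rms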
MABEL achieves promising results on StereoSet and CrowS-Pairs, but its SEAT results leave room for improvement. Regarding overall performance, CCPA outperforms the other contrastive learning frameworks, demonstrating that our adversarial-training-inspired approach can improve the model's learning ability by adding harder information to the training process. For DistilBERT, CCPA decreases the average effect size from 0.883 to 0.152 and improves ICAT from 66.93 to 71.30. Our model achieves excellent results on most test subsets of SEAT and reaches an almost ideal 50.31% on CrowS-Pairs. The LM score decreases, which we attribute to the semantic information of the original representation being affected by excessive debiasing. For ELECTRA, which does not belong to the BERT family of PLMs, the debiasing effect of CCPA is equally significant, and the experimental results are fairer than those of the original ELECTRA on all three intrinsic metrics. In detail, CCPA reduces the average effect size from 0.797 to 0.421, increases ICAT by 8.37% without significantly decreasing the LM score, and reduces the CrowS score by 1.89%. We also perform a small qualitative study by visualizing t-SNE plots of sentence embeddings. As can be seen from Figure <ref>, in BERT, male attribute words lie closer to target words from technical fields (such as career or science) in the embedding space, while female attribute words lie closer to target words from the humanities (such as family or poetry). After debiasing with CCPA, the gender-attribute words are pulled closer together and away from the neutral words in the representation space. §.§.§ External Debiasing Results We fine-tune the debiased BERT on two downstream tasks, Bias-in-Bios and Bias-NLI, to verify the effect of CCPA on external debiasing; the results are shown in Tables <ref> and <ref>. All our experimental setups are consistent with MABEL, and all the results reported in the tables for the baseline models are taken from MABEL. On the Bias-in-Bios task, as shown in Table <ref>, CCPA not only achieves the best results on task accuracy, but also performs best on all gender fairness metrics except GAP_RMS. Although INLP obtains the best score on the GAP_RMS metric, the reported results show that its task accuracy is clearly impaired. Compared to all baselines, CCPA achieves the best overall debiasing performance while preserving the model's prediction performance on downstream tasks. On the Bias-NLI task, as shown in Table <ref>, CCPA achieves sub-optimal results on all the metrics. It is worth noting that MABEL is a debiasing method trained on the NLI task, which we consider the main reason for its outstanding performance there. Even so, the strong debiasing effect shown by CCPA on the Bias-NLI task is encouraging. The results of the internal and external debiasing experiments show that our proposed CCPA performs strongly in mitigating gender bias in PLMs' encodings. CCPA thus provides effective debiasing, which answers the first research question (Q1) posed at the beginning of this section.
The purpose of this setting is to make it easier to observe the effect of the prompts' length on the model. In the experimental group of each template, we compare three versions of CCPA: the original CCPA, the version without ℒ_mlm represented as CCPA^- and the version without ℒ_mahal represented as CCPA^*. In addition, we have experimented with both CCPA without prompts and CCPA without prompts and ℒ_mlm. It is observed from the experimental results that the debiasing ability of CCPA increases with the rise of the template's length. This indicates that longer continuous prompt embeddings bring more difficult information to the model, thus increasing the debiasing effort. However, more extended templates can cause the original sentence semantics to be broken and thus weaken PLM's language modeling capability. In each experimental group, both CCPA^- and CCPA^* show a decrease in the results of the three evaluation metrics compared to CCPA. This phenomenon verifies that both MLM-assisted loss and Mahalanobis distance constraint benefit CCPA. Overall, MLM has a greater influence, especially on SS and CrowS, which may be because random mask tokens train encoders to retain token-level semantic information. In addition, the results of NO_prompt verify that continuous prompts play an essential role in CCPA. NO_prompt+mask tests the effect of fine-tuning PLMs based solely on contrastive learning. Unsurprisingly, the performance on all indexes could be better. The results of NO_prompt and NO_prompt+mask again reflect our method's effectiveness. The ablation studies answer our second question (Q2) by exploring the role played by each component of the CCPA. §.§ Language Modeling Capability Analysis We perform experiments on nine natural language understanding tasks of the GLUE benchmark to verify the language modeling capability of CCPA on downstream tasks. In task-specific fine-tuning, we set the learning rate to 2e-5 and the batch size to 32 for all models. As in Table <ref>, CCPA's performance in 9 tasks is comparable to that of the original BERT, and the average results are almost equivalent to BERT's. CCPA also shows similar performance on DistilBERT, indicating that our model is effective on other models besides BERT. Combined with the LM score in Table <ref>, the experiment shows that CCPA can debias without damaging the language modeling capability of PLMs, thus answering the third research question (Q3). § RELATED WORK We divide debiasing methods into two categories based on the debiasing strategy: task-specific methods and task-agnostic methods. §.§ Task-Specific Methods Task-specific methods adopt the strategy of debiasing in the fine-tuning stage of the downstream task, of which the downstream task is known <cit.>. One representative work is INLP <cit.>, which repeatedly trains a linear classifier that predicts the target concept, and then projects the representation into the null space of the classifier's weight matrix to remove the representation bias. Contrastive learning is proposed to mitigate bias in classifier training <cit.>. It encourages instances sharing the same class labels to have similar representations while ensuring that protected attributes have different distributions. These methods use attribute words to label training data without CDA. However, they are biased towards specific downstream tasks and cannot be applied to other tasks in general. When training data change, task-specific methods are difficult to transfer to new tasks. 
§.§ Task-Agnostic Methods Task-agnostic methods adopt the strategy of debiasing representation or processing unbalanced data before the downstream task, and they can be applied to any downstream task <cit.>. Most of these methods apply counterfactual data augmentation to augment the unbalanced corpus and then debias the augmented text information. Counterfactual data augmentation <cit.> is a general approach to augment corpora through causal intervention and has since been widely used to mitigate social biases. Different variants of counterfactual data augmentation have been proposed, such as Sent-Debias <cit.>, FairFil <cit.>, MABEL <cit.>, to name a few examples. Task-agnostic methods primarily use the CDA to balance the training corpus by constructing counterfactual pairs specific to different demographic groups. However, simply applying CDA to the original corpus makes minor changes, constraining the representation space to a narrow range. This makes the model easily fit the differences between counterfactual pairs, weakening the debiasing ability. Unlike existing CDA methods, we train a generator that encodes continuous prompts before fine-tuning PLM. The goal is to widen the representation distance between different groups to increase the difficulty of the model-learning process. § CONCLUSIONS Inspired by adversarial training, we propose CCPA, a two-stage debiasing model that combines contrastive learning with continuous prompts. In the continuous prompt tuning stage, we train a generator encoding continuous prompt embeddings to increase the representative distance between counterfactual pairs. In the fine-tuning stage, we use contrastive learning to reduce the representation distance between the augmented sample pairs. By increasing the difficulty of the training process, CCPA enables PLMs to learn a stronger debiasing ability. Extensive experiments on BERT and DistilBERT show that CCPA effectively reduces social bias in PLM representation while retaining language modeling capability. § LIMITATIONS In this work, we focus on debiasing the gender bias for PLMs. In the future, we will try to mitigate social biases other than gender, such as race and religion. In addition, we also plan to extend our debiasing method to more language models, such as Natural Language Generation (NLG) models. § ETHICS STATEMENT This paper has been thoroughly reviewed for ethical considerations and has been found to be in compliance with all relevant ethical guidelines. The paper does not raise any ethical concerns and is a valuable contribution to the field. § ACKNOWLEDGMENTS We express gratitude to the anonymous reviewers for their hard work and kind comments. The work was supported in part by the National Natural Science Foundation of China (No.62272191, No.61976102), the Science and Technology Development Program of Jilin Province (No.20220201153GX), the Interdisciplinary and Integrated Innovation of JLU (No.JLUXKJC2020207), and the Graduate Innovation Fund of Jilin University (No.2022214). acl_natbib
http://arxiv.org/abs/2307.01165v1
20230703171749
Multifractal and recurrence measures from meteorological data of climate zones in India
[ "Joshin John Bejoy", "Jayesh Dave", "G. Ambika" ]
physics.ao-ph
[ "physics.ao-ph", "nlin.CD", "physics.data-an" ]
AIP/123-QED Indian Institute of Science Education and Research (IISER) Tirupati, India Indian Institute of Science Education and Research Thiruvananthapuram, India We present a study on the pattern underlying the climate dynamics in various locations spread over India, including the Himalayan region, coastal region, central and northeastern parts of India. We try to capture the variations in the complexity of their dynamics derived from temperature and relative humidity data, which can lead to the characterization of the differences in climate dynamics over the last seventy years. Based on the computed measures of their multifractal spectra, we could group the locations into six clusters, each having similar underlying dynamics in their temperature and relative humidity variations. We also report the variations in climate dynamics over time in these locations by estimating the recurrence-based measures using a sliding window analysis on the data sets. They provide evidences of changes in climate dynamics related to global warming and how their variations differ among the different locations of the country. The major changes in dynamics can be related to the consequences of the global climate regime shifts reported in the mid-1970s, late 1980s, and late 1990s. The study can thus contribute to our understanding of the complexity of the dynamics of the climate system over the Indian subcontinent and its variations and shifts over time. Multifractal and recurrence measures from meteorological data of climate zones in India G. Ambika 27 June 2023 ======================================================================================== The climate system is known to be complex, with the coexistence of many nonlinear interactions among its subsystems and several dynamical processes that change over spatial and temporal scales. This makes it difficult to model climate dynamics effectively. Hence climate variability is studied using nonlinear techniques on meteorological data like temperature, rainfall, and relative humidity to estimate measures of its complexity. Among them, multifractal measures derived from nonlinear time series analysis and recurrence-based measures can be very effective and powerful. India is one of the countries that undergo significant changes in climate but is not well studied for a spatial spread of these changes or its long-term temporal variations. In this study, we reconstruct the geometry underlying the climate dynamics of stations spread over the climate zones of India from the temperature and relative humidity data. We report how the multifractal measures help to understand the variations in complexity of the dynamics in these locations. We could group them into clusters based on complexity measures derived from their temperature and relative humidity variations. We also compute measures from recurrence plots and recurrence networks using sliding window analysis over the data to understand their variations in time in each location. Together, these two approaches can lead to an understanding of the spatio-temporal variability, especially related to urbanization, industrialization, and global warming over the country. § INTRODUCTION The climate is a complex nonlinear and heterogeneous dynamical system that exhibits complex variability over many scales in time and space. 
The variations in climate and global warming are of great concern as they affect humanity in many ways through reduction in agricultural yield, decline in water supplies, floods, erosion of coastal areas, droughts, changes in rainfall pattern, decrease in biodiversity, etc. Hence research related to climate and its variations are highly relevant for planning and policy-making for the benefit of humanity. We note that many interdisciplinary studies have been reported in this context in recent years <cit.>. The major challenge in the study of climate variability arises due to the non-availability of dynamical equations describing the underlying processes. Hence most of the studies rely on data of temperature, relative humidity, rainfall, percolation etc., that require spatial details and temporal coverage. Several techniques are used in understanding climate systems, and the most recent among them is the use of nonlinear analysis applied to observational or measured data to estimate multifractal measures<cit.>. In this analysis, two methods are commonly used: Multifractal Detrended Fluctuation Analysis (MFDFA)<cit.>, and f(α) or multifractal spectrum using the Grassberger-Procaccia(GP) algorithm <cit.>. In our study, we use the latter since it is based on the geometric features of the underlying dynamics. This is effective and suitable in the context of climate as it helps to recreate the dynamics of the system from data and then study the complexity of the underlying dynamics using nonlinear and fractal measures. These techniques are being used effectively for studies of various real world systems using their observational data or average responses like data from stars<cit.>, EEG<cit.>, ECG<cit.>, combustion data<cit.>, atmospheric data<cit.> and financial data<cit.>, etc. In the context of climate, the MFDFA method was applied to study climate impacts and breakpoints in climate data and to understand how non-linearity and multifractality occur in temperature data<cit.> and wind data<cit.>. In the context of the Indian climate, studies are reported along similar directions using rainfall and temperature data series <cit.>. However, a detailed study based on the nonlinear nature and consequent multifractality of climate variations over various locations in India is highly relevant. Due to the difference in geographical locations and also the difference in factors affecting their climate, like urbanization, industrialization, and global warming, we expect them to have variations in the dynamics of their climate. We can capture these variations and their complexity using nonlinear measures like multifractal spectra and recurrence-based measures. In the present study, we try to understand the patterns and connections in climate dynamics over the Indian subcontinent using data of temperature and relative humidity from 15 locations across India and study how they change over the period 1948 to 2020. Using the framework of nonlinear time series analysis, we recreate the dynamics underlying the climate system from climate data and compute measures from the multifractal analysis that can characterize the geometric complexity of the underlying dynamics. As such, the study is relevant for understanding the nonlinear nature of the climate dynamics in the Indian context and the variations in the complexity of its dynamics spatially over the country. 
As we know, global warming is a real concern for humanity, and scientists from many disciplines have been working on its related issues for several decades now. In this study, we try to see the effects of global warming and climate shifts at various locations in India over time. For this, we do a sliding window analysis over the data from 1948-2020 and compute recurrence-based measures from recurrence plots and recurrence networks. The method of Recurrence Quantification Analysis(RQA) is well-accepted as a powerful tool for analyzing intricate patterns in data from various contexts<cit.>. Compared to the multifractal approach, this has the advantage that it can be used with small and nonstationary data. Hence this method is often adopted to study variations in complexity over time using a sliding window approach, where each window may have only a small length of data. The recurrence patterns in data are visualized as recurrence plots and represented as recurrence networks, and the quantifiers derived from them can provide information on transitions or changes in complexity over time<cit.>. This has been successfully applied to study transitions in dynamics using astrophysical data, climate data, financial data, etc. <cit.> Thus, the multifractal analysis provides the variations in climate dynamics spatially over the different climate zones or locations while recurrence analysis provides information on how the underlying dynamics changes or its complexity varies with time over the years in each location. The present study is based on the structure of the underlying dynamics rather than on the statistical features of data and thus is novel to climate related studies in India. As such, this will have interesting consequences and contribute much to our understanding of the dynamics of the climate system over the Indian subcontinent. § DATA AND PRE-PROCESSING We use reanalysis data sets from NCEP (National Centers for Environmental Prediction) gridded (2.5^∘× 2.5^∘) <(https://psl.noaa.gov/)>, of temperature and relative humidity for 15 locations across India over seventy years from 1948 to 2020. The 15 locations chosen for study are from the different climate zones of India: Mountain(Ladakh, Simla, Manali), Humid Sub-tropical(Patna, Mizoram, Delhi, Ranchi), Tropical Wet and dry(Bhopal, Pondicherry, Tirupati, Tawang), Semi-arid(Bengaluru) and Tropical wet(Mumbai, Kannur, Cochin). The data sets are first rescaled to unit interval for uniformity among them. Since the data may be contaminated with noise, each data is binned by averaging over every two points of the original data. As part of pre-processing, we also do detrending to remove significant trends in data. For this, we take its Fourier transform, and then each data is filtered by removing very small peaks that arise from noise and the dominant peaks, especially that corresponding to 365 days or yearly variations, to remove prominent periodic trends. § METHODOLOGY As the first step in the analysis, we reconstruct the underlying dynamics from each data using the method of delay embedding <cit.>. For this, the scalar discrete time series x(1), x(2), x(3),....x(N) is embedded in an M dimensional space by generating M-dimensional vectors with time delay coordinates, using a suitable time delay τ as, X̅ = [x(i),x(i+τ),...x(i+(M-1)τ] The embedding is effective for appropriate choices of M and τ. Following the standard procedure, we take the time when C(t) falls to 1/e as the appropriate delay time τ. 
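A minimal sketch of this reconstruction step, taking C(t) to be the autocorrelation function of the scalar series; the function names and the simple first-crossing rule are our choices.

import numpy as np

def delay_from_autocorrelation(x):
    # smallest lag at which the autocorrelation C(t) falls below 1/e
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    return int(np.argmax(acf < 1.0 / np.e))

def delay_embed(x, M, tau):
    # M-dimensional delay vectors [x(i), x(i+tau), ..., x(i+(M-1)tau)] as rows
    n_vec = len(x) - (M - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n_vec] for i in range(M)])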
The minimum embedding dimension M is estimated using the False Nearest Neighbours (FNN) method, where false neighbors are identified using a chosen threshold for the distances between points in the reconstructed trajectory in M and M+1 dimensions. We confirm this by using a variant of this method in which varying thresholds are used to estimate the false neighbors and to eliminate noisy dynamics<cit.>. In our analysis, we take M=4, which is the maximum M obtained for the data sets used. Once embedded, we compute the correlation dimension D_2 of the reconstructed trajectory or attractor by estimating the scaling of the correlation sum with a chosen distance R, and we check for its saturation at M=4 for all data sets. Since the distribution of the trajectory points is such that the scaling in different regions of the attractor can be different, for a complete geometric characterization of the trajectory we compute the set of generalized dimensions D_q and the associated multifractal spectrum, or f(α) spectrum<cit.>. The embedded attractor is first partitioned into M-dimensional cubes of side r. If N(r) is the number of cubes required to cover the attractor and N_i is the number of points in box i, then the probability of occupation of the i^th box is p_i(r) = N_i/N. The generalized dimensions D_q are then defined as<cit.> D_q = 1/(q-1) lim_r → 0 [log∑_i=1^N(r) p_i^q / log(r)]. In an analogous approach, we consider the scaling p_i(r) ≈ r^α_i, where α_i is the scaling index for the i^th box. Then the number of boxes with a scaling index between α and α+dα scales as n(α)≈ r^-f(α), with f(α) as the characteristic exponent. The f(α) values lie on a single-valued convex function between the two limits α_min and α_max. The f(α) curve obtained can be fitted with a function with four independent parameters that uniquely characterize the spectrum as<cit.> f(α)=A(α-α_min)^γ_1(α_max-α)^γ_2. To compute f(α) from the data, the D_q-q spectrum is first determined, and then the α and f(α) values are computed using a Legendre transformation. We use the fully automated algorithmic approach proposed in <cit.> to compute the complete spectrum from the time series, which has the advantage that the f(α) spectrum can be obtained without any intermediate subjective analysis. Moreover, the estimated γ_1, γ_2, α_min and α_max values characterize the f(α) spectrum uniquely and give measures to quantify the complexity of the dynamics underlying the data. In addition, we compute Δα = α_max-α_min, which gives the range of scales, and D_0, the maximum of the curve, as effective measures of complexity in the present context. With α_0 corresponding to D_0, the asymmetry parameter A, defined as A=(α_0-α_min)/(α_max-α_0), is also computed from the f(α) values for each data set; it indicates the left- or right-skewed nature of the multifractal curve. We carry out a sliding window analysis over the data to study variations in the complexity of the climate dynamics over time. The recurrences of states in the reconstructed dynamics are captured as a two-dimensional image called a recurrence plot (RP). It is represented as a recurrence matrix R, defined as R_i,j=Θ(ϵ-‖X_i-X_j‖); i,j = 1,…,N, where N is the number of considered states X_i, ϵ is the threshold distance, ‖·‖ is a norm, and Θ(·) is the Heaviside function.
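For the box-counting estimate of the generalized dimensions described above, a minimal sketch could look like the following. This is our own simplified implementation, not the automated algorithm of the cited work, and the choice of box radii is left to the user.

import numpy as np

def generalized_dimensions(attractor, q_values, radii):
    # Box-counting estimate of D_q for an embedded attractor (array of shape n_points x M).
    # For each box size r, occupation probabilities p_i are formed and
    # log sum(p_i^q) is regressed against log r; the slope gives (q-1) D_q.
    dims = {}
    for q in q_values:
        log_r, log_s = [], []
        for r in radii:
            boxes = np.floor(attractor / r).astype(int)          # box index of every point
            _, counts = np.unique(boxes, axis=0, return_counts=True)
            p = counts / counts.sum()
            log_r.append(np.log(r))
            log_s.append(np.sum(p * np.log(p)) if np.isclose(q, 1.0) else np.log(np.sum(p ** q)))
        slope = np.polyfit(log_r, log_s, 1)[0]
        dims[q] = slope if np.isclose(q, 1.0) else slope / (q - 1)
    return dims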
From the RPs constructed for each window of data, we compute two standard measures: Determinism (DET), defined as the fraction of recurrence points that form diagonal lines, and Laminarity (LAM), which gives the percentage of recurrence points in vertical structures, using the equations<cit.> DET = ∑_l=l_min^N l P(l)/∑_l=1^N l P(l), LAM = ∑_v=v_min^N v P(v)/∑_v=1^N v P(v). Here l is the length of the diagonal lines, v is the length of the vertical lines, and P(l) and P(v) represent their histograms. From the recurrence pattern of points on the reconstructed trajectory, the recurrence network (RN) is constructed by taking each point as a node and connecting two nodes by a link if the distance between them in the embedded space is ≤ϵ. The adjacency matrix A of the RN is then obtained by removing the self-loops (diagonal elements) in R: A_ij = R_ij-δ_ij. The standard complex network measures Link Density (LD) and Characteristic Path Length (CPL)<cit.> are then computed within each window from the respective RNs as LD = 2M/(N(N-1)), where M is the number of edges in the network and N is the number of nodes, and CPL = (1/N)∑_i=1^N (1/(N-1))∑_j≠ i d_ij^s, where d_ij^s represents the shortest distance between nodes i and j. The variations of these recurrence-based measures, plotted over time, capture how the dynamics change in each location and bring out possible transitions in the underlying dynamics.
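As a minimal illustration of these quantities (a simplified sketch: the exclusion of the line of identity, the minimum line length, and the function names are our choices):

import numpy as np

def recurrence_matrix(traj, eps):
    # R_ij = Theta(eps - ||X_i - X_j||) for an embedded trajectory of shape (N, M)
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def diagonal_line_lengths(R):
    # lengths of diagonal runs of 1s, excluding the main diagonal (line of identity)
    N, lengths = R.shape[0], []
    for k in list(range(-(N - 1), 0)) + list(range(1, N)):
        run = 0
        for v in np.append(np.diagonal(R, offset=k), 0):
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
    return np.array(lengths)

def determinism(R, l_min=2):
    # DET: fraction of diagonal recurrence points lying on lines of length >= l_min
    L = diagonal_line_lengths(R)
    return L[L >= l_min].sum() / L.sum()

def link_density(R):
    # LD = 2M / (N(N-1)) for the recurrence network A = R - I
    A = R - np.eye(R.shape[0], dtype=int)
    N = A.shape[0]
    return A.sum() / (N * (N - 1))        # A.sum() counts each undirected edge twice (= 2M)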
They are right-skewed for Manali but symmetric for Mumbai and Shimla. However, the extent of asymmetry varies among them, as is clear from the tables. Thus, for most of the stations, the dynamics underlying the temperature data must be more complex than that underlying relative humidity. The values of A for all the stations from temperature and humidity data are shown in Fig. <ref>. § CLUSTERS BASED ON MULTIFRACTAL MEASURES To understand the similarities and variations among the multifractal natures of the different stations, we use the K-means clustering technique<cit.>, which helps to group the stations based on the measures computed from their multifractal curves. We run the clustering algorithm using the values of Δα and D_0 and find that the 15 stations can be grouped into 6 clusters in the plane (D_0, Δα), as shown in Figs. <ref> and <ref>. Based on the values of Δα and D_0 from the multifractal spectra of temperature data, we find the following six clusters: Cluster 1: Kannur, Ranchi, Tawang; Cluster 2: Manali, Mumbai, Delhi; Cluster 3: Ladakh, Cochin, Bengaluru, Tirupati, Pondicherry, Mizoram; Cluster 4: Shimla; Cluster 5: Patna; Cluster 6: Bhopal, as shown in Fig. <ref>. The clusters based on the multifractals derived from humidity data are Cluster 1: Patna, Mumbai, Delhi; Cluster 2: Pondicherry, Tirupati; Cluster 3: Ladakh, Cochin, Bengaluru, Kannur, Tawang; Cluster 4: Manali, Shimla; Cluster 5: Ranchi; Cluster 6: Bhopal, Mizoram. These are shown in Fig. <ref>. We note that even locations that belong to different climate zones can be in the same cluster, which means that the variations in their temperature or relative humidity can have similar dynamics. § VARIABILITY IN CLIMATE DYNAMICS OVER TIME In addition to the complexity of the dynamics, climate is known to undergo significant changes or regime shifts over time<cit.>. This can result in different dynamical variability ranges in space and time. To understand such variations over time in each location in more detail, we carry out a sliding window analysis of each embedded time series with a window size of 5000 data points that is slid by 200 points. This gives us approximately 240 windows for each data set. We construct recurrence plots and networks for each window<cit.> with a recurrence threshold of 0.2, which is 10% of the span of the embedded time series. We also choose every tenth point in the embedded time series to make the recurrence plots more uniform as well as to save on computation time. It is found that this choice does not affect the recurrence measures we study. From the recurrence plot, we first perform recurrence quantification analysis (RQA) and obtain the recurrence measures Determinism (DET) and Laminarity (LAM). We also derive the corresponding recurrence networks and compute the network measures Link Density (LD) and Characteristic Path Length (CPL). The recurrence-based measures characterize the recurrence pattern of the phase space trajectory, so their changes can be interpreted in terms of relative changes in the variability of the climate dynamics. As established, high values of DET correspond to an increase in regularity of the underlying dynamics, while a decrease in its values indicates more irregular and stochastic variability<cit.>. The changes in the measure LAM can be interpreted as changes from one irregular and chaotic state to another and can be correlated with the changes in DET in many cases.
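A sketch of this sliding-window computation, reusing the recurrence helpers sketched above; the window length, step, subsampling and the reading of the threshold as 10% of the span of the embedded series are taken from the text, while the precise definition of the span is our assumption.

import numpy as np

def sliding_window_measures(embedded, win=5000, step=200, subsample=10, frac=0.1):
    # DET and LD over sliding windows of the embedded series,
    # using recurrence_matrix, determinism and link_density defined earlier
    results = []
    for start in range(0, len(embedded) - win + 1, step):
        window = embedded[start:start + win:subsample]            # every tenth point
        eps = frac * np.ptp(np.linalg.norm(window, axis=1))       # threshold ~10% of the span
        R = recurrence_matrix(window, eps)
        results.append((start, determinism(R), link_density(R)))
    return results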
The recurrence network measure LD increases when the dynamics become more regular and periodic but decreases for irregular and chaotic dynamics. On the other hand, the CPL values are small for regular dynamics and high for more complex and irregular dynamics. The variations over time for the measures in all stations share common trends. However, the extent of changes and the year in which changes start differ among them. We group the stations as those having similar changes in behavior. The typical plots from each group showing the variations in measures DET, LAM, CPL and LD are given in figures Figs. <ref> and <ref>. Here, the average values of measures from four consecutive windows are plotted with their standard deviation shown as error bar, and the time points shown correspond to the middle of these. The resulting time series of the measures for the period of 73 years of data are presented where each point indicated is within an uncertainty of ± 4 years, and two consecutive points have a gap of one year. We use the modified Kendall tau test to estimate the significance of the variations in measures over time, considering increases or decreases with a p-value < 0.05. The variations in DET for stations Cochin, Mumbai, Kannur, and Bengaluru are qualitatively similar and hence values for Cochin shown in Fig as a typical case. The DET values decrease around the 60s, followed by an increase and another gradual decrease with a minimum around the 1980s and a deeper minimum in the late 90s. The DET values for Delhi have high values but small variability. This is shared by Mizoram, Patna, and Tawang, with a slight decrease that happened during 1986-1999. The variations in Manali, shared by Shimla and Ladakh, show small dips before 1970 and after 1990. The values for Ranchi and Bhopal share almost similar variations with typical drops around the late 1970s, 1980s, and after 2000. The southeastern coastal regions represented by Tirupati and Pondicherry have similar characteristic DET pattern. First, during 1974-1984 the DET values increased for both of them, but soon after, around 1990, a decrease in DET occurred. In general, the changes in DET and LAM are correlated, with high DET corresponding to high LAM values and vice versa. We can understand the decrease in DET or LAM as change in dynamics to more irregular and chaotic dynamics. The exception observed is for Delhi, with a drop in LAM values in the 50s, around the 80s, and 1990s, when DET values did not have a significant drop. This can be due to a faster time scale settling in the dynamics while remaining more or less regular. The values of LAM for Manali show a more prominent decrease around the 1980s compared to Ladakh and Shimla, but the time when these changes occur is very similar. The data from stations like Delhi, Tawang, Patna, and Mizoram have CPL values that show an increase after the 1970s and a drop during the 90s. At the same time, that for Tirupati and Pondicherry increase around 1970, decrease around the 1990s, and again increase around 2000. The stations Cochin, Bengaluru, Kannur, and Mumbai also have trends that are qualitatively similar to that of Tirupati, but they start with a more significant fall in the 50s. The values for Ladakh and Shimla show a descending trend until the 60s and then increase to have a simultaneous peak in the late 80s. Later Ladakh maintains a higher CPL value together with Shimla while Manali settles on to a lower value of CPL in temperature. 
The CPL values of stations like Ranchi and Bhopal have an initial peak, and the central peak stays for a longer time. The values of LD have higher values at the two ends of the period of study and consistently small values in between, with minor variations. The minima of LD in all cases occur within the period 1975-1993, which is well correlated with the CPL values. In general, the higher values of CPL along with smaller values of LD in the middle of the timeframe, indicate more irregular and chaotic behavior during this period. Using Pettitt’s test<cit.>, we confirm the breakpoint of this transition as occurring between 1975-1996, mostly around 1985 for all of the stations, with an average p-value of 0.0017. We find that the possible groups of the stations based on the measures from relative humidity data are different with changes for one or two stations. The typical plots from each group in this case based on the variations in measures DET, LAM, CPL, and LD are shown in Figs. <ref> and <ref>. Based on the values of DET (Fig. <ref>), we find that for stations Delhi, Tawang, and Mizoram, the variations are much less, with a slight decrease in the late 1960s till the mid-1970s and an increase from there towards the 2000s. The values for Bhopal and Patna exhibit highly correlated changes in DET with a dip in the same period 1971-1979 that increases after that and again drop during 1990-2000. The stations Cochin, Kannur, and Bengaluru share similar variations, with a decrease in 1980. However, after that, for Cochin, DET starts to increase with a local maximum around the late 1980s, but Kannur and Bengaluru show the same maxima with a delay around 1995. Also, Pondicherry and Tirupati continue to show similar trends with a significant dip during the period 1990s-2000s. The slight dips in DET are much more pronounced in the measures of LAM. The stations Cochin, Kannur, Mumbai, and Bengaluru share one significant dip in the period 1963- 1983. For Tirupati and Pondicherry, the significant decrease is around 1972-1980. For Patna and Bhopal, there is a decrease between 1967 and 1980. For Delhi, even in the case of relative humidity, we observe that the LAM has more significant dips even though DET remained relatively constant. We carry out the same analysis for the values of CPL from relative humidity data(Fig. <ref>). Unlike the case of temperature, humidity gives a flatter rise in the middle, with multiple peaks. From the Pettitte test, we see that the breakpoints range from the early 1970s to 1995, and the average breakpoint occurred in 1979, and for all the stations, we get an average p-value of 0.00082. The LD values show lower values during the period 1975-1993. Here also, the LD values tend to be lower when CPL is higher, indicating more complex and irregular dynamics in this phase. Thus, the variations in CPL and LD in the case of temperature and humidity data show significant shifts in the dynamics as occurring during 1970-1990, while the variations in DET and LAM help to identify the stations that have qualitatively similar variations in dynamics during the period of study. § SUMMARY AND CONCLUSION India is known to have heterogeneity in climatic conditions in the different parts of the country. Therefore, to understand the varying climate conditions and their complexity, the long-term meteorological time series from the different locations are to be analyzed and compared for their underlying dynamical nature and variations. 
In the present study, we report the analysis of temperature and relative humidity for 15 different locations in India using the methods of multifractal analysis and recurrence analysis. Our results, in general, indicate the nonlinear and complex dynamics underlying the variations in climate over the country. The study of recurrence patterns helps to understand the transitions in the nature of dynamics due to global warming and regime shifts. The complexity measures obtained from the multifractal spectra reveal considerable differences between locations. However, stations with similar multifractal behavior, as indicated by their range of scales and fractal dimension, could be grouped into six clusters that differ in temperature and relative humidity. The locations in each cluster are mostly similar in the complexity of their dynamical variations but are different from those in other clusters. Moreover, the values of the asymmetry parameter indicate the extent of complexity of the underlying dynamics, with the left-skewed nature corresponding to less complex and right-skewed to more complex dynamics. We identify and group the stations with left-skewed and less complex dynamics and those with right-skewed or more complex dynamics. These variations in the multifractal properties indicate the variability among different geographical locations in the climate zones of India and demonstrate the climate diversity over the country. We supplement our study using recurrence based analysis using a sliding window approach over time. The measures thus computed from recurrence plots and networks reveal the variations in complexity over time in each location. Our study indicates statistically significant variations in the climate dynamics during the period 1980s-2000s. We use the measures from recurrence plots, DET, and LAM to identify stations with qualitatively similar changes in dynamics that occurred around 1970, 1980, and 1990. We note that the surface temperature in India is shown to have a significant increasing trend, and structural changes are observed in the temperature data with breaks between 1970 and 1980<cit.>. However, our study on temperature and humidity data also indicates corresponding changes in the underlying dynamics and how they connect and differ over the various locations in the country. We analyze the measures from recurrence networks, CPL and LD, in a relative manner and identify major transitions in dynamics around 1970 to a state of more complex and irregular nature and back to a less complex nature after 1990. These changes can be understood as consequences of the global climate regime shifts and structural changes in the mid-1970s, late 1980s, and late 1990s, well documented in earlier studies<cit.>. We note that there are multiple mechanisms that can cause changes in climate dynamics, like changes in sea surface temperature, increase in greenhouse gases, urbanization, industrialization, etc. Therefore, the results of the study will be beneficial as input data in many related studies on climate change, hydrology, crop models, rainfall, and precipitation. § DATA AVAILABILITY The data used in the study are the reanalysis data sets from NCEP (National Centers for Environmental Prediction) downloaded from <https://psl.noaa.gov/>. § ACKNOWLEDGMENTS Two of the authors (JJB and JD) acknowledge IISER Tirupati for facilities for their MS thesis and project works. *
http://arxiv.org/abs/2307.02074v2
20230705073127
Arbitrageurs' profits, LVR, and sandwich attacks: batch trading as an AMM design response
[ "Andrea Canidio", "Robin Fritsch" ]
cs.DC
[ "cs.DC", "econ.TH" ]
Arbitrageurs' profits, LVR, and sandwich attacks: batch trading as an AMM design response.We are grateful to Felix Leupold and Martin Köppelmann for initial discussions on batch trading on AMM that led to the writing of this paper. We also thank Haris Angelidakis, Eric Budish, Agostino Capponi, Felix Henneke, Fernando Martinelli, Ciamac Moallemi, Andreas Park, and Anthony Lee Zhang for numerous comments and suggestions. Andrea CanidioCorresponding author; CoW Protocol; andrea@cow.fi and Robin FritschETH Zurich and CoW Protocol; rfritsch@ethz.ch August 1, 2023 ======================================== We consider an automated market maker (AMM) in which all trades are batched and executed at a price equal to the marginal price (i.e., the price of an arbitrarily small trade) after the batch trades. We show that such an AMM is a function maximizing AMM (or FM-AMM): for given prices, it trades to reach the highest possible value of a given function. Competition between arbitrageurs guarantees that an FM-AMM always trades at a fair, equilibrium price, and arbitrage profits (also known as LVR) are eliminated. Sandwich attacks are also eliminated because all trades occur at the exogenously-determined equilibrium price. We use Binance price data to simulate the lower bound to the return of providing liquidity to an FM-AMM and show that this bound is very close to the empirical returns of providing liquidity on Uniswap v3 (at least for the token pairs and the period we consider). Keywords: Arbitrage profits, Loss-vs-Rebalancing (LVR), MEV, Sandwich attacks, AMM, Mechanism design, Batch trading § INTRODUCTION Constant Function Automated Market Makers (CFAMMs) are the centerpiece of decentralized finance. Their popularity is largely due to their simplicity: the price at which a given CFAMM is willing to trade depends exclusively on the size of its liquidity pools. One important consequence is that trades occurring on the same CFAMM within the same block pay different prices depending on the order in which they are executed. This design has two well-recognized flaws.
First, liquidity providers (LPs) trade at a loss whenever there is a rebalancing event. More precisely, when the underlying value of the assets changes, the first informed arbitrageur who trades with the CFAMM earns a profit by aligning the CFAMM price with the new equilibrium price. These profits are at the expense of LPs, who suffer a “loss-vs-rebalancing” (LVR). Second, traders are routinely exploited by attackers, most commonly via sandwich attacks in which an attacker front-runs a victim's swap with the same swap and then back-runs it with the opposite swap. Doing so allows the attacker to “buy cheap” and “sell expensive” while forcing the victim to trade at less favorable terms. This paper proposes a novel AMM design that avoids both problems. In its simplest form, we propose that all trades that reach the AMM during a period are batched together and executed at a price equal to the new marginal price on the AMM – that is, the price of executing an arbitrarily small trade after the batch trades. We derive the trading function of such an AMM and show two interesting equivalences. First, this AMM is function maximizing because, for given prices, it maximizes the value of a given function subject to a budget constraint. For this reason, we call our design a function-maximizing AMM, or FM-AMM. Also, if the function is a standard Cobb-Douglas objective function (i.e., the weighted sum of two natural logs), then for given prices, the FM-AMM LPs run a passive investment strategy: absent trading fees, the total value of the two pools is shared between each pool according to some pre-specified weights. Finally, we show that an FM-AMM does not satisfy path independence: traders can obtain a better price by splitting their trades into smaller orders, which is why batching is required. Our main contribution is to consider the behavior of such an AMM in the presence of arbitrageurs, who have private information about the equilibrium prices (determined, for example, on some very liquid off-chain location). Competition between arbitrageurs guarantees that the batch always trades at the equilibrium price, and arbitrage profits are eliminated. Intuitively, if this were not the case, some arbitrageurs would want to trade with the batch and, by doing so, would push the price on the batch in line with the equilibrium. This also eliminates all forms of MEV extraction, such as, for example, sandwich attacks: arbitrageurs will always act so as to remove deviations from the equilibrium price, therefore making it impossible to manipulate the FM-AMM price. The benefit of contributing liquidity to an FM-AMM relative to a traditional CFAMM is that FM-AMM LPs earn the arbitrage profits generated by rebalancing the CFAMMs. Because these arbitrage profits are larger for more volatile prices (as they lead to more frequent and larger rebalancing, see <cit.> and <cit.>), holding everything else equal, the benefit of providing liquidity to an FM-AMM relative to a CFAMM increases with the price volatility. We then use price data to simulate the return of providing liquidity on an FM-AMM, and compare the simulated return with that earned by liquidity providers on Uniswap v3 (currently the most important AMM). To do so, we consider an FM-AMM that does not charge fees for including an order in the batch, but only on the net amount that is then settled on the FM-AMM.
Such FM-AMM earns fees from arbitrageurs but not from noise traders, because arbitrageurs always absorb trades from noise traders before they reach the FM-AMM. It is a valuable benchmark because we can use historical price data to compute the trade necessary to align the FM-AMM price with the new reference price, the resulting trading fees, and the evolution of the liquidity pools. Hence, our results do not rely on assumptions about the size and distribution of noise trades reaching the FM-AMM. Of course, the caveat is that the estimated return to providing liquidity to an FM-AMM is the lower bound of a more general case in which noise trades generate fees. The first part of our empirical analysis compares the historical returns of providing liquidity to some Uniswap v3 pools to a counterfactual in which the same liquidity is contributed to our simulated FM-AMM, under the assumption that the FM-AMM rebalances at the same frequency and charges the same fee as the Uniswap v3 pool.[Note that Uniswap v3 is characterized by having concentrated liquidity: each LP provides liquidity over a price range and earns returns only if the price is within this range. In our comparison, we use the empirical distribution of liquidity and consider the return of an arbitrarily small non-concentrated liquidity position (i.e., a position over the entire price range [0,∞]). For the FM-AMM, we also assume a liquidity position that is non-concentrated. Whether or not the rest of the liquidity provided to the FM-AMM is concentrated (and how) is irrelevant to our results. ] Because the FM-AMM always trades at the equilibrium price, the return on providing liquidity to the FM-AMM is determined by the variation in the price of the underlying assets (which can be fully hedged, see ). The return on providing liquidity on Uniswap v3 also depends on the price variation, plus the fees earned from noise traders and minus arbitrage profits. Comparing the return of providing liquidity on the two AMMs allows us to establish whether, during a given period and for a given token pair, Uniswap LPs earned more in trading fees from noise traders than their loss to arbitrageurs.[ Several authors studied whether providing liquidity on Uniswap is profitable, see <cit.>, <cit.>, <cit.>. The main difference between these papers and ours is that we compare Uniswap LP returns to a different benchmark (here FM-AMM LP returns, in those papers, a holding strategy). In this respect, our strategy is similar to <cit.>, which we discuss in more detail later. ] Our results are mixed: whether providing liquidity to our simulated FM-AMM generates higher returns than providing the same liquidity to Uniswap v3 depends on the token pair and the period we consider. However, these returns are similar: at the end of the 6-months period we consider, the differences in returns between an FM-AMM and Uniswap v3 across the three pools we study are -0.22% (for the ETH-USDT pool), 0.03% (for the BTC-USDT pool) and 0.11% (for the ETH-BTC pool). Also, during the period we consider, the maximum difference in value between the two liquidity positions (relative to their initial values) is 0.30% (for the ETH-USDT pool), 0.14% (for the BTC-USDT pool), and 0.11% (for the ETH-BTC pool). We conclude that the lowest bound on the return to providing liquidity to an FM-AMM is similar to the empirical return to providing liquidity to Uniswap v3. 
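The lower-bound simulation described here can be sketched as follows. This is our own stylized reading: for the product-function FM-AMM of the next section, aligning the pool ratio with a new reference price p implies the trade x = (Q^E − Q^$/p)/2, and we assume the fee is collected in the numeraire on that net traded amount.

def simulate_fm_amm_lp_value(prices, eth0, dai0, fee=0.003):
    # Lower-bound LP value path: only the rebalancing (arbitrage) trades reach the FM-AMM.
    eth, dai, values = eth0, dai0, []
    for p in prices:                        # p = reference price (e.g., Binance), DAI per ETH
        x = (eth - dai / p) / 2.0           # ETH sold by the pool so that the pool ratio equals p
        dai += x * p + fee * abs(x) * p     # batch settles at price p; fee on the net amount
        eth -= x
        values.append(dai + eth * p)        # LP position marked to the reference price
    return values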
Our result also shows that on Uniswap v3, fees from noise traders are approximately equal to arbitrageurs' profits.[<cit.> and <cit.> argue that liquidity provision is strategic: the size of liquidity pools is smaller when arbitrageurs' profits are higher. The intuition is that when there is a rebalancing event, the loss to arbitrageurs per unit of liquidity is independent of the size of the liquidity pools. At the same time, the size of the liquidity pools determines the fraction of the revenues from noise traders earned by each unit of liquidity. Hence, the endogenous response of liquidity providers may explain why we find that fees from noise traders are approximately equal to arbitrageurs' profits. ] The remainder of the paper is organized as follows. We now discuss the relevant literature. In Section <ref>, we introduce the FM-AMM in its simplest form with a product function and zero fees. In Section <ref>, we discuss several extensions, including fees and the fact that an FM-AMM violates path dependence and hence requires batching. In Section <ref> we consider the behavior of an FM-AMM in the equilibrium of a game with informed arbitrageurs and noise traders. Section <ref> contains the empirical analysis. The last section concludes. All proofs and mathematical derivations missing from the text are in the appendix. Relevant literature. Several authors argued that AMM's design allows informed arbitrageurs to profit at the expense of LPs. <cit.>, <cit.>, and <cit.> provide theoretical models that illustrate this possibility. In particular, <cit.> consider a continuous time model with zero fees and derive a closed-form formula to measure LPs returns and the cost they face when trading with informed arbitrageurs (which they call loss-vs-rebalancing or LVR). <cit.> extend this analysis to the case of discrete-time and strictly positive trading fees. They use the term arbitrageur profits to indicate LPs losses, a term we adopt because both our model and our empirical analysis are in discrete time and have fees. <cit.> and <cit.> draw the implication of this cost for liquidity provision. A second important limitation of CFAMM is that they enable sandwich attacks (see ). These attacks are quantitatively relevant. For example, <cit.> collected on-chain data from the inception of Ethereum (July 30, 2015) until November 21, 2020 and estimated that sandwich attacks generated 13.9M USD in profits. <cit.> consider a later period (from the 1st of December 2018 to the 5th of August 2021) and find that sandwich attacks generated 174.34M USD in profits. Our design eliminates these attacks. We are therefore related to the growing literature proposing mechanisms to prevent malicious re-ordering of transactions (of which sandwich attacks are an example), especially those that can be implemented at the smart-contract level (, , , ).[Another strand of the literature studies how to prevent malicious re-ordering of transactions by modifying the infrastructure that underpins how transactions are sent. See, for example, <cit.> and the literature review in <cit.>. ] Several initial discussions on designing “surplus maximizing” or “surplus capturing” AMMs occurred informally on blog and forum posts (see , , ). <cit.> provides an axiomatic derivation of the surplus-maximizing AMM. Relative to their work, our contribution is to place this new type of AMM in a context with arbitrageurs and other trading venues. <cit.> also study AMM from an axiomatic viewpoint. 
In particular, they discuss path independence, which FM-AMMs violate. The intuition for our main result is closely related to <cit.>, who study the batching of trades in the context of traditional finance as a way to mitigate the high-frequency-trading (HFT) arms race and protect regular (or slow) traders. The main result is that batching trades force informed arbitrageurs to compete in price instead of speed, because the priority of execution within the batch is given based on price. The intuition in our model is similar, although competition between arbitrageurs on the batch is rather in quantity than in price: if the price on an FM-AMM differs from the equilibrium price, competing arbitrageurs will submit additional trades to exploit the available arbitrage opportunity, but by doing so, they push the price on the FM-AMM in line with the equilibrium. We conclude by noting that an FM-AMM is also an oracle: it exploits competition between arbitrageurs to reveal on-chain the price at which these arbitrageurs can trade off-chain. It is, therefore, related to the problem of Oracle design (as discussed, for example, by ). § THE FUNCTION-MAXIMIZING AMM In this section, we first introduce the main concepts of interest using a simple constant-product function (both for the CFAMM and the FM-AMM), no fees, and keeping formalities to the minimum. In the next section, we generalize our definitions and results and introduce additional elements. As a preliminary step, we derive the trading function of a constant product AMM, the simplest and most common type of CFAMM. Suppose that there are only two tokens, ETH and DAI. A constant-product AMM (CPAMM) is willing to trade as long as the product of its liquidity pools remains constant (see Figure <ref> for an illustration). Call Q^$ and Q^E its initial liquidity pools in DAI and ETH, respectively, and p^CPAMM (x) the average price at which the CPAMM is willing to trade x ETH, where x>0 means that CPAMM is selling ETH while x<0 means that the CPAMM is buying ETH. For the product of the liquidity pools to be constant, it must be that Q^$· Q^E = (Q^$+p^CPAMM (x) x) (Q^E -x) or p^CPAMM (x)=Q^$/Q^E-x. Note that the marginal price of a CPAMM (i.e., the price to trade an arbitrarily small amount) is given by the ratio of the two liquidity pools. The key observation is that, in a CPAMM, a trader willing to trade x pays a price that is different from the marginal price after the trade. This is precisely the reason why arbitrageurs can exploit a CPAMM: an arbitrageur who trades with the CPAMM to bring its marginal price in line with some exogenously-determined equilibrium price does so at an advantageous price (and hence makes a profit at the expense of the CPAMM). Instead, in the introduction, we defined an FM-AMM as an AMM in which, for every trade, the average price equals the marginal price after the trade (we may call this a clearing-price-consistent AMM). For ease of comparison with the CPAMM described earlier, suppose that the FM-AMM function is the product of the two liquidity pools, and hence, that its marginal price is the ratio of its liquidity pools. The price function p(x) for buying x ETH on the FM-AMM is implicitly defined as p(x) = Q^$ + x · p(x)/Q^E-x, where the RHS of the above expression is the ratio of the two liquidity pools after the trade. Solving for p(x) yields: p^FM-AMM (x) ≡ p(x) = Q^$/Q^E-2x, which implies that the FM-AMM marginal price is, indeed, the ratio of the liquidity pools. 
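As an illustration of the two pricing rules just derived (ours, not part of the paper; the pool sizes are arbitrary placeholders), the FM-AMM price can be checked numerically against the marginal price after the trade, here in Python:

def cpamm_avg_price(q_dai, q_eth, x):
    # Average price for buying x ETH so that the product of the pools stays constant.
    return q_dai / (q_eth - x)

def fmamm_price(q_dai, q_eth, x):
    # FM-AMM price: equals the ratio of the two pools after the trade.
    return q_dai / (q_eth - 2 * x)

q_dai, q_eth, x = 1_000_000.0, 1_000.0, 10.0
p = fmamm_price(q_dai, q_eth, x)
marginal_after = (q_dai + p * x) / (q_eth - x)   # marginal price after the trade
assert abs(p - marginal_after) < 1e-9
print(cpamm_avg_price(q_dai, q_eth, x), p)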
Hence, a given trade on the FM-AMM generates twice the price impact of the same trade on the traditional CPAMM (cf. the expression for p^CPAMM (x)). Interestingly, an FM-AMM can also be seen as a price-taking agent maximizing an objective function. If its objective function is the product of the two liquidity pools, then for a given price p the FM-AMM supplies x ETH by solving the following problem: x^FM-AMM(p)=argmax_x { (Q^E-x)(Q^$+p· x) }. It is easy to check that the FM-AMM supply function is: x^FM-AMM(p) = (1/2)(Q^E - Q^$/p). Hence, to purchase x ETH on the FM-AMM, the price needs to be, again: p^FM-AMM(x) = Q^$/(Q^E-2x). It follows that, whereas a traditional CPAMM always trades along the same curve given by Q^$ Q^E, the FM-AMM trades so as to be on the highest possible curve. With some approximation, we can therefore see an FM-AMM as a traditional CPAMM in which additional liquidity is added with each trade. See Figure <ref> for an illustration. A final observation is that the FM-AMM's trading function is equivalent to p· (Q^E-x^FM-AMM(p))=Q^$+p · x^FM-AMM(p). In other words, for a given p, the value of the two liquidity pools is equal after the trade. Therefore, the FM-AMM is trading to implement a passive investment strategy, in which the total value of the two pools is equally split between the two assets (that is, a passive investment strategy with weights 1/2, 1/2). It is easy to check that the FM-AMM can implement any passive investment strategy with fixed weights (α, 1-α) by specifying the objective function as (Q^E)^α (Q^$)^{1-α} for an appropriate α∈ (0,1). § ADDITIONAL CONSIDERATIONS §.§ Generalization of definitions and results We now generalize our results. First of all, an AMM is an entity that accepts or rejects trades based on a pre-set rule. Such a rule can be derived from the AMM's liquidity pools (Q^$,Q^E) ∈ℝ^2_+ and the AMM function Ψ: ℝ^2_+ →ℝ. We assume that the AMM function is continuous, that Ψ(Q^$,0)=Ψ(0,Q^E)=Ψ(0,0) for all Q^$ and Q^E, that it is strictly increasing in both its arguments whenever Q^$>0 and Q^E>0, and that it is strictly quasiconcave. The difference between different types of AMMs is how the function Ψ(.,.) and the liquidity pools (Q^$,Q^E) determine which trades will be accepted and rejected by the AMM. For given liquidity pools (Q^$,Q^E) and function Ψ: ℝ^2_+ →ℝ, a constant function automated market maker (CFAMM) is willing to trade x for y=p(x) x if and only if Ψ(Q^$ + p(x) x, Q^E-x) = Ψ(Q^$,Q^E). Our first goal is to define an AMM that is clearing-price-consistent in the sense that, for every trade, the average price of the trade equals the marginal price after the trade. 
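Before turning to the formal definitions, a small numerical check (ours, with placeholder pool sizes) of the supply function and of the equal-pool-value property derived above for the product case:

def fmamm_supply(q_dai, q_eth, p):
    # ETH supplied by the FM-AMM at price p when maximizing the product of the pools.
    return 0.5 * (q_eth - q_dai / p)

q_dai, q_eth, p = 1_000_000.0, 1_000.0, 1_250.0
x = fmamm_supply(q_dai, q_eth, p)
value_eth_pool = p * (q_eth - x)      # value of the ETH pool after the trade
value_dai_pool = q_dai + p * x        # value of the DAI pool after the trade
assert abs(value_eth_pool - value_dai_pool) < 1e-6   # the 1/2-1/2 passive split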
For given liquidity pools (Q^$,Q^E) and function Ψ: ℝ^2_+ →ℝ, let p_Ψ^margin(Q^$,Q^E) = (∂Ψ(Q^$,Q^E)/∂ Q^E) / (∂Ψ(Q^$,Q^E)/∂ Q^$) be the marginal price of the AMM for reserves Q^$, Q^E. A clearing-price consistent AMM is willing to trade x for y=p(x)x if and only if p(x) = p_Ψ^margin(Q^$+p(x)x, Q^E-x). Note that, given our assumptions on Ψ(.,.), whenever Q^$>0 and Q^E>0, the marginal price is strictly increasing in the first argument and strictly decreasing in the second argument, converges to zero as Q^E →∞ or Q^$→ 0, and to infinity as Q^E → 0 or Q^$→∞. Second, we define a function-maximizing AMM (FM-AMM) that maximizes the objective function instead of keeping it constant: For given liquidity pools (Q^$,Q^E) and function Ψ: ℝ^2_+ →ℝ, a function-maximizing AMM is willing to trade x for y=p(x) · x if and only if p(·)=x^{-1}(·), where x(p) := argmax_x {Ψ( Q^$+p · x, Q^E-x ) }. The next proposition establishes the equivalence between clearing-price-consistent and function-maximizing AMMs. For given liquidity pools (Q^$,Q^E) and function Ψ: ℝ^2_+ →ℝ, an AMM is function maximizing if and only if it is clearing-price consistent. Under our assumptions, solving (<ref>) is equivalent to satisfying the first-order condition, which is equivalent to (<ref>). §.§ Path-dependence (or why batching trades is necessary) CFAMMs (without fees) are path-independent: splitting a trade into multiple parts and executing them sequentially does not change the average price of the trade. This property does not hold for an FM-AMM because traders can get better prices by splitting their trade. In fact, they can get approximately the same price as on the corresponding CFAMM by splitting their trade into arbitrarily small parts. This is why an FM-AMM's trading function can be implemented only if trades are batched. To see this, note that a trade on the FM-AMM with product function changes the reserves as follows: (Q^$, Q^E) → (Q^$ (Q^E-x)/(Q^E-2x), Q^E-x). By instead splitting the trade into smaller parts ∑_i=1^n x_i = x and executing them sequentially, the reserves of the FM-AMM will change to (Q^$ ∏_i=1^n [(Q^E-∑_j=1^{i-1}x_j)-x_i]/[(Q^E-∑_j=1^{i-1}x_j)-2x_i], Q^E - ∑_i=1^n x_i). Setting x_i=x/n and letting n→∞ leads to the DAI reserves after the trade being lim_{n→∞} Q^$ (Q^E-x/n)/(Q^E-(n+1)x/n) = Q^$ Q^E/(Q^E-x). This limit exactly equals the DAI reserve of a CPAMM after the same total trade. Hence, to have an FM-AMM, it is necessary to prevent the splitting of orders by imposing the batching of trades. §.§ Fees An important design choice is the fee structure. An FM-AMM can charge fees in at least two ways: a fee could be charged for including a trade on the batch, and an additional one could be charged on the trades that are not netted out on the batch and hence are settled on the FM-AMM. The difference between the two fees is that some of the trades may be netted already on the batch without ever reaching the FM-AMM. In what follows, we assume that the fee for inclusion in the batch is zero, while there could be a strictly positive fee on trades settled on the FM-AMM. Theoretically, this is the simplest case and the one we will consider in the empirical analysis. 
The reason is that fees earned from noise traders are zero, and we will not need to make any assumption concerning the frequency and distribution of these trades. But our results continue to hold when there are positive fees for inclusion in the batch. The FM-AMM also needs to decide in which currency to charge the fee. For ease of comparison with Uniswap (the most important and liquid CFAMM), we assume that fees are specified in the sell token (i.e., the input token from the AMM perspective). Hence, if there is a fee τ, then a buy order for x ETH that is settled on the FM-AMM pays x · p^FM-AMM(x)/(1-τ) DAI, while a sell order for x ETH that is settled on the FM-AMM receives x(1-τ) · p^FM-AMM(x(1-τ)) DAI. We can therefore define the effective price as p̃(x, τ) ≡ p^FM-AMM(x)/(1-τ) = Q^$/((1-τ)(Q^E-2x)) if x > 0; (1-τ) · p^FM-AMM(x(1-τ)) = Q^$/(Q^E/(1-τ)-2x) if x < 0; and any value in [Q^$(1-τ)/Q^E, Q^$/((1-τ)Q^E)] if x = 0. We interpret the terms Q^E/(1-τ) and Q^$/(1-τ) as the FM-AMM's effective reserves. Hence, the fee causes the FM-AMM to behave as if it had more of the token that traders want to sell to the FM-AMM. Also, a positive-fee FM-AMM remains a function-maximizing AMM, but the objective of the maximization depends on the sign of the trade. That is, p̃(·, τ) = x^{-1}(·, τ), where x(p, τ)=argmax_x { U(x,p, τ) } and U(x,p, τ) = (Q^E -x) ·(Q^$/(1-τ) +p· x) if x>0; (Q^E/(1-τ) -x) ·(Q^$ +p · x) if x<0; and Q^E · Q^$ if x=0. See Figure <ref>. There is a range of effective prices at which the FM-AMM will not want to trade, and the size of this range is increasing in τ. This is important whenever the trades on the batch are fully netted out within the batch. In this case, we assume that the FM-AMM price is such that the traders' net demand is zero. A final observation is that the fee τ also affects the elasticity of the effective price with respect to the size of the trade |x|. The reason is that only the fraction of the trade not paid as a fee generates a price impact. Hence, a higher fee implies a smaller price impact. 
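The fee-adjusted effective price defined above has a direct implementation for the product function; the following sketch (ours, with placeholder pool sizes) returns the price for a signed trade x and the no-trade band at x = 0:

def effective_price(q_dai, q_eth, x, tau):
    if x > 0:    # the trader buys ETH and pays the fee on the DAI sent in
        return q_dai / ((1 - tau) * (q_eth - 2 * x))
    if x < 0:    # the trader sells ETH and pays the fee on the ETH sent in
        return q_dai / (q_eth / (1 - tau) - 2 * x)
    # x == 0: any effective price inside the band is consistent with no trade
    return ((1 - tau) * q_dai / q_eth, q_dai / ((1 - tau) * q_eth))

print(effective_price(1_000_000.0, 1_000.0, 0.0, 0.003))   # the band widens with tau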
In PBS, block builders (entities that assemble transactions in a block that are then forwarded to a proposer for inclusion in the blockchain) could compute the net trades that will reach the FM-AMM during that block. Builders will then include a message announcing this value at the beginning of the block, which the FM-AMM uses to compute the price at which all trades will be executed. If the proposer's announcement turns out to be correct at the end of the block, the FM-AMM will reward the builder (punishments can also be introduced if the block builder report is incorrect, see ). The frequency of rebalancing is a design choice because of path dependence. For example, an FM-AMM earns more when it settles one large batch instead of two smaller batches trading in the same direction. In this case, therefore, less frequent batches may be beneficial. However, settling a single large batch, in which opposite trade demands net out, may generate little or no trade (and hence little or no benefit to the FM-AMM), while settling two smaller batches trading in opposite directions moves the FM-AMM “up the curve” each time. In this case, more frequent batches are more beneficial. In the theoretical analysis, to easily compare an FM-AMM with a traditional CPAMMs, we will assume that the FM-AMM rebalances each block. However, in the empirical analysis, we will compare the performance of an FM-AMM at different rebalancing intervals. § THE MODEL Equipped with the full description of an FM-AMM, we can now study its behavior in an environment with traders and arbitrageurs. We limit our analysis to the product function. The game comprises n noise traders, each wanting to trade a_i units of ETH. We adopt the convention that if a_i>0 trader i wants to buy ETH, while if a_i<0 trader i wants to sell ETH. Call A=∑_i a_i the aggregate demand for ETH from noise traders, assumed small relative to the FM-AMM liquidity pools Q^E and Q^$.[In practice, we assume that trading A on the FM-AMM has a negligible price impact. Else, we should treat orders from noise traders as limit orders. In this case, all our results continue to hold at the cost of additional notation.] Next to traders, a large number of cash-abundant, competing arbitrageurs, who can trade as part of the batch and on some external trading venue, assumed much larger and more liquid than the combination of our n traders and the FM-AMM. The equilibrium price for ETH on this external trading venue is p^* and is unaffected by trades on the FM-AMM. Arbitrageurs aim to profit from price differences between the FM-AMM and the external trading venues. Arbitrage opportunities will be intertemporal (over short intervals). Hence, for ease of derivations, we assume that arbitrageurs do not discount the future. The timing of the game is discrete. Even periods are when on-chain transactions occur. Even periods are, therefore, better interpreted as different blocks. Odd periods are when off-chain events occur. In these periods, first, the equilibrium price p^* may change, and then traders/arbitrageurs can submit trades for inclusion in the batch. We are now ready to derive our main proposition. Suppose that, at the end of an even period, the pools of the FM-AMM are Q^E and Q^$. In the equilibrium of the subsequent odd period, after p^* is realized, noise traders will collectively submit trade A to the batch, and arbitrageurs will collectively submit trade y(p^*) such that p̃(A +y(p^*), τ)=p^*. The proof of the proposition distinguishes between two cases. 
The first is p^* > Q^$/((1-τ)Q^E) or p^* < (1-τ)Q^$/Q^E, so that A+y(p^*) ≠ 0. There is positive trade on the FM-AMM in equilibrium, and the FM-AMM's unique effective price is exactly the equilibrium price p^*. In this case, we say that there was a rebalancing event. The second case is p^*∈ [(1-τ)Q^$/Q^E, Q^$/((1-τ)Q^E)], in which case there is no trade settled on the FM-AMM, and p^* is one of the possible effective prices. Note, however, that p^* is the only price at which arbitrageurs are indifferent and y(p^*)=-A (i.e., at any other price in [(1-τ)Q^$/Q^E, Q^$/((1-τ)Q^E)] there is an unmet demand or supply for ETH). Hence, if the equilibrium price is sufficiently far from the FM-AMM marginal price Q^$/Q^E, the FM-AMM will be rebalanced so that its effective price equals the new equilibrium price. Otherwise, no trade is settled on the FM-AMM (that is, all trades are netted out already on the batch). In either case, the effective price on the FM-AMM is p^* and the amount settled on the FM-AMM is independent of the trades from noise traders. The key intuition is that all arbitrageurs can submit trades on the batch. Hence, there is no equilibrium in which there is an exploitable arbitrage opportunity; otherwise, some arbitrageurs would want to submit additional trades on the batch. Arbitrage opportunities are absent whenever p̃(A +y(p^*), τ) = p^*, which is therefore the unique equilibrium. The revenues earned in fees are contributed to the FM-AMM liquidity pools. It follows that the FM-AMM earns the effective price on every trade. Knowing this, we can easily derive the evolution of the liquidity pools: If A+y(p^*) ≠ 0, then the liquidity pools after the rebalancing are Q^E-(A+y(p^*)) and Q^$+p^* · (A+y(p^*)). Otherwise, the liquidity pools are again Q^E and Q^$. The key observation is that the FM-AMM always trades at the equilibrium price, even though its trading function does not depend on this price. Again, this is a consequence of the competition between arbitrageurs, who will add trades to the batch until the FM-AMM effective price equals the equilibrium price. Note also that, even if the price at which the FM-AMM trades is always p^* and is independent of the fee, the probability of trading and the trade size depend on the fee. More precisely, higher fees decrease the probability of a rebalancing event but increase the trade needed to rebalance the FM-AMM (i.e., they increase the equilibrium A+y(p^*)). This is because higher fees imply that the effective price is less elastic to the trade size, and hence larger trades are required to move the effective price to the new equilibrium. To conclude, we discuss how the FM-AMM performs in the presence of risk. Consider the end of an odd period. At that point in time, the future equilibrium price p^* is a random variable with E[p^*]=p^*_0. In a traditional CFAMM, we know from the literature that arbitrage profits increase in the volatility of the price (see , and ), which implies that the expected value of future LP holdings is lower when future prices are more volatile. The previous corollary shows that the FM-AMM trades at the fair equilibrium price at every realization of p^*, and the expected value of its liquidity pools is Q^$ + p^*_0 Q^E. Hence, with respect to the value of the pools, CFAMMs are risk averse while FM-AMMs are risk neutral. At the same time, we discussed earlier how a CFAMM always trades to stay on the same function, while an FM-AMM trades to increase the value of its function. 
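To make the proposition and the corollary concrete, the sketch below (ours, with placeholder values) computes the net trade settled on the FM-AMM for a realization of p^* and the resulting pools:

def net_batch_trade(q_dai, q_eth, p_star, tau):
    # A + y(p*): net amount settled on the FM-AMM in equilibrium.
    lo, hi = (1 - tau) * q_dai / q_eth, q_dai / ((1 - tau) * q_eth)
    if lo <= p_star <= hi:
        return 0.0                                        # no rebalancing event
    if p_star > hi:                                       # the FM-AMM sells ETH
        return 0.5 * (q_eth - q_dai / ((1 - tau) * p_star))
    return 0.5 * (q_eth / (1 - tau) - q_dai / p_star)     # the FM-AMM buys ETH

q_dai, q_eth, tau, p_star = 1_000_000.0, 1_000.0, 0.003, 1_100.0
x = net_batch_trade(q_dai, q_eth, p_star, tau)
pools_after = (q_dai + p_star * x, q_eth - x)             # evolution of the pools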
The next proposition shows that the increase in the value of the FM-AMM's objective function is larger the more risk there is. Hence, with respect to the value of the function, a CFAMM is risk neutral (i.e., it always stays on the same function), while an FM-AMM is risk-loving. Consider two probability distributions of the equilibrium price, F(p):R^+→[0,1] and G(p):R^+→[0,1], having equal mean p^*_0. Assume that F(·) is a mean-preserving spread of G(·), that is, it is possible to write p^*_f = p^*_g + ϵ where p^*_f ∼ F(·), p^*_g ∼ G(·) and ϵ is a shock with E[ϵ |p^*_g]=0. Then, in expectation, the FM-AMM reaches a higher value of its function under distribution F(·) than under distribution G(·), that is, E_F [U(A+y(p^*), p^*, τ)] ≥ E_G [U(A+ y(p^*), p^*,τ)]. The inequality is strict if the probability of a rebalancing under distribution F(·) is strictly positive. The proposition compares the expected value of the function under two distributions of the future price, where one distribution is a mean-preserving spread of the other. This ranking of distributions captures an intuitive notion of risk because one distribution can be derived from the other by adding some noise. If one distribution is a mean-preserving spread of the other, it has a higher variance. Note, however, that not all distributions can be ranked using mean-preserving spreads. It is, however, usually the case that if both distributions belong to the same family (e.g., both normal), then the ranking based on mean-preserving spreads coincides with the ranking based on variance. §.§ Rebalancing in the presence of fees We now describe how the FM-AMM and the CPAMM are rebalanced in the presence of fees. Both have reserves (Q^$, Q^E) and are rebalanced to a new external price p > Q^$/Q^E, where f denotes the trading fee. FM-AMM: The arbitrageur will pay y DAI for x ETH, such that the FM-AMM accepts the trade and the arbitrageur pays a price of p for the trade (taking fees into account): y(1-f)/x = p^FM-AMM(x) = Q^$/(Q^E-2x) and y/x = p. Solving this yields x = (1/2)(Q^E - Q^$/(p(1-f))) and y = (1/2)(Q^E p - Q^$/(1-f)). After rebalancing, the reserves are ((1/2)(Q^E p + (2 - 1/(1-f))Q^$), (1/2)(Q^E + Q^$/(p(1-f)))). CPAMM: The arbitrageur will pay y DAI for x ETH, such that the CPAMM accepts the trade and the marginal price after the swap is p(1-f) (with fees the CPAMM is no longer exactly path-independent, so this condition is an approximation, but the resulting differences are small): y(1-f)/x = p^CPAMM(x) = Q^$/(Q^E-x) and (Q^$+y(1-f))/(Q^E-x) = p(1-f). Solving this yields x = Q^E - √(Q^$Q^E/(p(1-f))) and y = (1/(1-f))(√(Q^$Q^E p(1-f)) - Q^$). After rebalancing, the reserves are ((1/(1-f))√(Q^$Q^E p(1-f)) + (1 - 1/(1-f))Q^$, √(Q^$Q^E/(p(1-f)))). § EMPIRICAL ANALYSIS We complement our theoretical analysis by estimating the returns of providing liquidity to an FM-AMM. We do so by considering a counterfactual in which an FM-AMM existed during a specific period. We use Binance price data (together with our theoretical results) to simulate how arbitrageurs would have rebalanced our simulated FM-AMM. Importantly, because we consider an FM-AMM with a zero fee for inclusion in a batch, our FM-AMM generates no fees from noise traders. (Equivalently, we could also assume that the FM-AMM does not receive any noise trading volume and is only rebalanced by arbitrageurs.) Hence, the estimated LP returns should be considered a lower bound on the possible returns generated by an FM-AMM that also earns revenues from noise traders. 
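The counterfactual simulation described below relies on the rebalancing rules just derived; as a reference, a compact sketch (ours, with f the fee and p the new external price, assumed above the current pool price as in the derivation):

from math import sqrt

def fmamm_rebalance(q_dai, q_eth, p, f):
    # Arbitrageur trade that aligns the FM-AMM effective price with p.
    x = 0.5 * (q_eth - q_dai / (p * (1 - f)))
    y = 0.5 * (q_eth * p - q_dai / (1 - f))
    return q_dai + y, q_eth - x           # fee revenue stays in the pools

def cpamm_rebalance(q_dai, q_eth, p, f):
    # Arbitrageur trade that moves the CPAMM marginal price to p(1 - f).
    x = q_eth - sqrt(q_dai * q_eth / (p * (1 - f)))
    y = (sqrt(q_dai * q_eth * p * (1 - f)) - q_dai) / (1 - f)
    return q_dai + y, q_eth - x

q_dai, q_eth = 1_000_000.0, 1_000.0
for p in (1_050.0, 1_120.0, 1_180.0):     # placeholder rising price path
    q_dai, q_eth = fmamm_rebalance(q_dai, q_eth, p, f=0.0005)
print(q_dai + 1_180.0 * q_eth)            # value of the FM-AMM pools at the last price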
We then compare the return of providing liquidity to our simulated FM-AMM to the empirical returns of providing liquidity to the corresponding Uniswap v3 pool. If the Uniswap v3 pool and the FM-AMM rebalance at the same frequency and has the same fee, then the comparison between these returns and the simulated FM-AMM returns establishes whether, on the Uniswap v3 pool we consider, arbitrageurs' profits exceed or fall short of the revenues generated by noise traders.[<cit.> perform a similar analysis: they consider a continuous-time model in which arbitrageurs pay no fee and derive a formula for the return of providing liquidity to Uniswap. They then use their formula to study empirically whether fees from noise traders exceed or fall short of arbitrageurs' profits on the ETH-USDC Uniswap v2 pool. The main difference with our analysis is that the theoretical model we use for our empirical decomposition is already in discrete time and has fees (in this sense, it is related to <cit.>). In any case, <cit.> find that the losses to arbitrageurs are smaller than the revenues earned from noise traders, which is consistent with our results. ] We then consider how the return of FM-AMM LP changes with its rebalancing interval and fee. §.§ Details of the empirical analysis We retrieve price data from Binance from October 2022 to March 2023 for several trading pairs. We then use the result of Proposition <ref> to simulate an FM-AMM pool rebalanced to the Binance price in regular intervals of different frequencies (namely once every 12s = 1 block, 1 min, 5 min, 1h, and 1 day). These rebalancing trades determine the evolution of the FM-AMM pools and the return of its liquidity providers. We repeat the same exercise for different fees charged by the FM-AMM, namely 0.0%, 0.05%, 0.3%, and 1.0%. To calculate the return on a liquidity position in a Uniswap v3 pool, we consider three pools for which we also have Binance price data: WETH-USDT (with fee 0.05%), WBTC-USDT (with fee 0.3%), and WBTC-WETH (with fee 0.05%).[Note that, whereas most Binance prices are expressed in USDT, this stablecoin is not widely used in Uniswap v3: at the time of writing, if we exclude stablecoin-to-stablecoin pairs, the two pools we consider are the only pools with USDT in the top 30 Uniswap v3 pools.] In each case, we simulate the return of a small full-range position. We then query the amount of fees the pool earned in a given block and the amount of liquidity that is “in range” at the end of the same block. [We use the Uniswap v3 subgraph to query the data. <https://thegraph.com/hosted-service/subgraph/uniswap/uniswap-v3>] We then use the size of the simulated liquidity position in range to calculate the fees it earns. Our method is based on two assumptions. First, we assume that the simulated liquidity position is too small to affect price slippage, the volume of trades, and the incentive to provide liquidity by other LPs. We also implicitly assume that the liquidity in the range is constant during a block. This last assumption introduces some non-systematic inaccuracies in our estimation. For example, if within a block, first some fees are collected and then some additional liquidity is introduced, our method attributes a fraction of these fees to the new liquidity even if, in reality, it did not earn any. If, instead, first some fees are collected and then some liquidity is withdrawn, our method does not attribute any of the fees to the liquidity that was withdrawn, while in reality it did earn some fees. 
Similarly, if a price changes tick, our method shares all fees collected in a block among the liquidity available in the last tick, whereas, in reality, some of the fees are shared among the liquidity available at the initial tick.[Besides being non-systematic, the inaccuracies introduced are likely to be extremely small. For the ETH-USDT pool, we calculate that the difference between assuming liquidity to be constant over two blocks instead of over one block is 0.0002% over 6 months. We expect the inaccuracies from assuming the liquidity to be constant over one block to be of the same magnitude.] Finally, our results do not depend on the size of the initial liquidity position. On Uniswap v3, a larger initial position earns proportionally more fees, but its ROI is the same. Similarly, on an FM-AMM, the size of the rebalancing trade scales proportionally with the available liquidity so that, again, its ROI is independent of its initial size. Also, for both Uniswap v3 and the FM-AMM, we consider a liquidity position that is non-concentrated (i.e., a position over the entire price range [0, ∞]). If both positions are concentrated in the same (symmetrical) way, both Uniswap v3 fees and FM-AMM returns increase by the same factor as long as the price does not go out of range. So the comparison does not change, and the full-range comparison already constitutes a general comparison. §.§ Results Arbitrageurs' profits vs. Uniswap fees Figure <ref> shows the evolution of the simulated liquidity position on three different Uniswap v3 pools over 6 months (October 2022 - March 2023)[More precisely, between blocks 15,648,998 and 16,950,010.], together with the hypothetical return of providing the same liquidity to an FM-AMM with fee equal to the fee of the Uniswap v3 pool we are comparing to, and rebalancing every block. Note that for ETH-USDT and BTC-USDT, the numeraire is USDT, while for the ETH-BTC, the numeraire is BTC. Moreover, note that the returns of both the FM-AMM and Uniswap v3 are plotted relative to the value of the liquidity position (i.e. a zero-fee Uniswap v3 liquidity position). The results are mixed: for ETH-BTC, arbitrage profits are larger than Uniswap v3 fees over the whole period; hence contributing liquidity to our simulated FM-AMM would have generated larger returns. For ETH-USDT the result is reversed: fees on Uniswap v3 compensate for the loss to arbitrageurs, and contributing liquidity to our simulated FM-AMM would have generated lower returns. For BTC-USDT, the two are about the same. A comparison of monthly returns can also be found in Table <ref>. Note that the table shows relative returns, i.e., the changes in the value of the liquidity position relative to holding the starting value of the position fully in the numeraire outside an AMM. Again, which AMM provides a higher ROI depends on the pair and the period. We conclude by arguing that the difference in returns between the two AMMs is small. The difference in the total return at the end of the 6 months we consider is: -0.22% (for the ETH-USDT pair), 0.03% (for the BTC-USDT pair) and 0.11% (for the ETH-BTC pair). Furthermore, by looking at the evolution of the two liquidity positions (as in Figure <ref>), we calculate the maximum difference in value between the two liquidity positions (expressed in percentage of the initial liquidity position). This measure can be interpreted as a “relative maximum drawdown” and is 0.30% (for the ETH-USDT pair), 0.14% (for the BTC-USDT pair) and 0.12% (for the ETH-BTC pair). 
We interpret these results as showing that the simulated FM-AMM and Uniswap v3 generate similar returns. FM-AMM rebalancing frequency and fees The return on the simulated FM-AMM is the lower bound to the possible return of an FM-AMM for several reasons. As already mentioned, the simulated FM-AMM does not earn revenues from noise traders by design. In addition, the simulated FM-AMM has the same fee and rebalancing interval as the corresponding Uniswap pool. Here we explore whether choosing a different fee or rebalancing interval would have generated higher returns for FM-AMM LPs. We start with the rebalancing frequency. In Figure <ref>, we report two examples, one in which the fastest rebalancing interval (1 block) generates the highest return and one in which this is not the case. In Appendix <ref> we repeat the exercise for other token pairs. In most cases, the shortest rebalancing interval generates the largest returns, although for some pairs this is not the case. For example, for the pair ETH-BTC, a rebalancing period of 25 blocks (5 mins) would have generated the highest returns. Concerning the fee, remember that its value affects whether arbitrageurs rebalance the pool, with higher fees implying a lower probability that the pools will be rebalanced. Furthermore, the fee also affects the size of the rebalancing trade, with higher fees implying larger rebalancing trades (which always occur at the new equilibrium price). Figure <ref> shows two examples, one in which a fee of zero is optimal and another in which a strictly positive fee is optimal (see Appendix <ref> for additional token pairs). In line with the previous result, we find that, in most cases, the optimal choice is a zero fee, which implies more frequent rebalancing. Again, there are exceptions, though, where LP returns on an FM-AMM would have been highest for a larger-than-zero fee. § CONCLUSION In this paper, we study the design of AMM when trades are batched. Such an AMM does not need to satisfy path independence, which implies that the design space is larger than without batching. In particular, it is possible to design a function maximizing AMM (FM-AMM) which, for given prices, always trades to be on the highest possible value of a given function. At the same time, batching creates competition between informed arbitrageurs. As a result of this competition, the price at which the FM-AMM trades is always equal to the equilibrium price (assumed determined on some very liquid location and exogenous to the FM-AMM). Hence, whereas in a traditional CFAMM, arbitrageurs earn profits at the expense of liquidity providers, in an FM-AMM these profits remain with the LPs. At the same time, sandwich attacks are eliminated because all trades within the same batch occur at the same price, equal to the exogenously-determined equilibrium price. We empirically estimate a lower bound to the return of providing liquidity to an FM-AMM and show that, at least for the token pairs and the period we consider, such a lower bound is very close to the empirical returns of providing liquidity on Uniswap v3. A final observation is that an FM-AMM eliminates adverse selection by creating competition among multiple informed arbitrageurs. However, if there is a single, informed trader, then this trader can still enjoy information rents and trade at an advantage against FM-AMM LPs. A partial remedy is to set appropriate trading fees. Studying this problem is left for future work. 
§ MATHEMATICAL DERIVATIONS To start, note that if either p^* > Q^$/((1-τ)Q^E) or p^* < (1-τ)Q^$/Q^E, then for y(p^*) such that p̃(A +y(p^*), τ)=p^* we have A+y(p^*) ≠ 0. Therefore, there is trade settled on the FM-AMM, and the price at which all traders trade is uniquely determined. If instead p^*∈ [(1-τ)Q^$/Q^E, Q^$/((1-τ)Q^E)], then p^* can be the equilibrium price on the FM-AMM only if A+y(p^*) =0, that is, no trade reaches the FM-AMM. In this case, p̃(0, τ)=p^* is one of the possible equilibrium prices from the FM-AMM's viewpoint. First, suppose that p^* > Q^$/((1-τ)Q^E) or p^* < (1-τ)Q^$/Q^E, so that for y(p^*) such that p̃(A +y(p^*), τ)=p^* we have A+y(p^*) ≠ 0. The fact that y(p^*) is the unique equilibrium is easily established by contradiction: suppose the equilibrium is y' with p̃(A +y', τ)≠ p^*. Then, by the fact that p̃(A +y', τ) is locally continuous, an arbitrageur could submit a small additional trade z such that p̃(A +y' +z, τ) ≠ p^* and earn strictly positive profits, which implies that y' is not an equilibrium. It is also easy to establish that y such that p̃(A +y(p^*), τ)=p^* is an equilibrium, as no arbitrageur has any incentive to deviate. Suppose now that p^*∈ [(1-τ)Q^$/Q^E, Q^$/((1-τ)Q^E)]. Also here, it is easy to see that the only equilibrium is y=-A and p̃(0, τ)=p^*, because this is the only case in which there are no arbitrage opportunities to exploit, and hence there is no excess demand or supply from arbitrageurs. Remember that, for given prices, a positive-fee FM-AMM trades to maximize the objective function U(x,p, τ), defined in (<ref>). The first observation is that the value function V(p, τ)=max_x U(x,p, τ) is convex in p, strictly so if there is strictly positive trade (i.e., if x(p, τ) ≡ argmax_x U(x,p, τ) ≠ 0). To see this, use the envelope theorem to write: ∂ V(p, τ)/∂ p = (Q^E - x(p, τ)) x(p, τ) if x(p, τ)>0; (Q^E/(1-τ) - x(p, τ)) x(p, τ) if x(p, τ)<0; and 0 if x(p, τ)=0, so that ∂^2 V(p, τ)/∂ p^2 = (∂ x(p,τ)/∂ p)(Q^E - 2 x(p, τ)) if x(p, τ)>0; (∂ x(p,τ)/∂ p)(Q^E/(1-τ) - 2 x(p, τ)) if x(p, τ)<0; and 0 if x(p, τ)=0. Because p and x are strict complements in the objective function, by Topkis's theorem, ∂ x(p,τ)/∂ p>0 whenever x(p, τ) ≠ 0. Finally, the FM-AMM always trades so that (Q^E - 2 x(p, τ)) >0 and (Q^E/(1-τ) - 2 x(p, τ))>0. It follows that V(p,τ) is strictly convex in p whenever x(p, τ) ≠ 0. Because F(·) is a mean-preserving spread of G(·), E_F [V(p, τ)] ≥ E_G [V(p,τ)], with strict inequality as long as x(p,τ) ≠ 0 for some realization of p. § EXTRA FIGURES 
http://arxiv.org/abs/2307.02840v1
20230706081405
Sampling efficiency of transverse forces in dense liquids
[ "Federico Ghimenti", "Ludovic Berthier", "Grzegorz Szamel", "Frédéric van Wijland" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.dis-nn", "cond-mat.stat-mech" ]
Laboratoire Matière et Systèmes Complexes (MSC), Université Paris Cité & CNRS (UMR 7057), 75013 Paris, France Laboratoire Charles Coulomb (L2C), Université de Montpellier & CNRS (UMR 5221), 34095 Montpellier, France Yusuf Hamied Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW, United Kingdom Department of Chemistry, Colorado State University, Fort Collins, Colorado 80523, United States of America Laboratoire Matière et Systèmes Complexes (MSC), Université Paris Cité & CNRS (UMR 7057), 75013 Paris, France Sampling the Boltzmann distribution using forces that violate detailed balance can be faster than with the equilibrium evolution, but the acceleration depends on the nature of the nonequilibrium drive and the physical situation. Here, we study the efficiency of forces transverse to energy gradients in dense liquids through a combination of techniques: Brownian dynamics simulations, exact infinite-dimensional calculation and a mode-coupling approximation. We find that the sampling speedup varies non-monotonically with temperature, and decreases as the system becomes more glassy. We characterize the interplay between the distance to equilibrium and the efficiency of transverse forces by means of odd transport coefficients. Sampling efficiency of transverse forces in dense liquids Frédéric van Wijland August 1, 2023 ========================================================= To sample a given target distribution, the paradigm is to construct Markov processes endowed with detailed balance. When the physics slows the dynamics down, as for instance in the vicinity of a critical point, or in disordered and dense systems, algorithms that can increase the sampling efficiency are much needed <cit.>. Sampling by violating detailed balance using nonequilibrium dynamics is a possible route, explored in an applied mathematics literature dating back to the mid-nineties <cit.>. Potential applications are not limited to physical systems, since, for instance, slow dynamics caused by a complex non-convex energy landscape are also encountered in machine learning and neural networks <cit.>. Bounds and inequalities on the convergence or mixing rates have been obtained <cit.>, and studies encompass the mean-field Ising model <cit.> and systems evolving via diffusive hydrodynamics <cit.>. This is a very active field of applied mathematics <cit.> and of computer science <cit.>. Numerical studies also exist for a variety of systems <cit.>, but no quantitative results exist for systems with self-induced disorder, such as glassy liquids. In the latter case, nonequilibrium forces can either shift <cit.> or destroy <cit.> the glass transition, while the addition of unphysical degrees of freedom was recently shown <cit.> to drastically change the relaxation dynamics. We explore how nonequilibrium methods that sample the Boltzmann distribution fare when applied to a strongly-interacting classical many-body system, such as a high-density or low-temperature fluid exhibiting glassy dynamics, and to determine the dependence of the acceleration on the state point. The specific dynamics we study is the overdamped Langevin dynamics driven out of equilibrium by a force field transverse to the local energy gradient. Our results are established using a combination of techniques, ranging from the numerical integration of Langevin equations for a Kob-Andersen mixture, through mean-field infinite dimensional calculation to finite-dimension mode-coupling approximation. 
Our presentation goes along these three axes, each of which sheds its own light on the questions we ask. We demonstrate the existence of an optimal temperature for the acceleration. While a gain remains, the efficiency decreases as the glass transition is approached. Transverse forces also lead to the appearance of odd transport coefficients, that were earlier found in active matter systems composed of chiral particles or driven by nonreciprocal forces <cit.> (see also <cit.> for a dilute equilibrium fluid). Surprisingly, odd diffusivity is insensitive to the emergence of glassy behavior. Our approach is illustrated with the example of a single particle with position in an external potential V() at temperature T=β^-1, / t=-μ(1+γ𝐀)_ V+√(2μ T) , where μ is the mobility. The components of the Gaussian white noise are independent, ⟨η_i(t)η_j(t')⟩=δ_ijδ(t-t'). When the matrix 𝐀 is skew-symmetric, the nonequilibrium force of strength γ is transverse to the energy gradient. The stationary distribution thus retain its Boltzmann form, ρ_B()=^-β V() / Z even when γ>0, but the entropy production rate is finite, τ_Σ^-1= βγ^2⟨ (𝐀_ V)^2⟩_B. Another relevant time scale governing microscopic dynamics is given by the reciprocal of the average escape rate <cit.>. Its equilibrium expression is τ_0^-1∼⟨β(∂_ V)^2⟩_B but with a nonzero γ this becomes τ_0/1+γ^2. This elementary reasoning would suggest that transverse forces simply result in a global rescaling of time scales. We show below that many-body interactions lead to a very different picture. To widen the scope of our statements, we highlight a correspondence between transverse forces, as defined in Eq. (<ref>), and the lifting procedure <cit.>, which is an alternative approach to accelerate the dynamics. In a nutshell, lifting amounts to augmenting the degrees of freedom of a system with equilibrium dynamics by a set of auxiliary (and unphysical) variables that produce nonequilibrium flows while preserving the original equilibrium distribution. One way to see the connection with transverse forces is to consider, following <cit.>, two equilibrium systems with potentials V_1(_1) and V_2(_2) evolving through the coupled dynamics _1/ t=-__1V_1+γ__2 V_2+√(2 T)_1, _2/ t=-__2V_2-γ__1 V_1+√(2 T)_2. In this case, the nonequilibrium forces are transverse in the extended (_1,_2) space and the stationary distribution decouples into ρ_B(_1,_2)∝^-β V_1(_1)^-β V_2(_2). If we choose for system 2 a quadratic potential, the equation of motion for 1 resembles that of an active Ornstein-Uhlenbeck particle <cit.> where the role of the self-propulsion velocity is played by _2. The equation of motion for 2 is however different from the Ornstein-Uhlenbeck equation to ensure that 1 samples the Boltzmann distribution: the dynamics of system 1 is lifted by that of system 2. A cartoon of the connection between transverse forces and lifting is shown in Fig. <ref>. While our derivations start from transverse forces, our conclusions therefore extend at least to locally lifted systems. The general many-body problem considered is a system with i=1,…,N particles in d dimensions evolving under the influence of interparticle forces F_i=-( 1+γ A) ∑_j≠ i__i V(_i-_j), where 𝐀 is a skew-symmetric matrix, and V( r) is a pair potential, evolving as in Eq. (<ref>) with thermal noise. The strength of the nonequilibrium forces is controlled by γ which means we keep the matrix elements of 𝐀 or order 1 (and independent of γ). 
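As a minimal illustration of the single-particle dynamics written above (ours, not the authors' code; the harmonic potential and all parameters are placeholders), a plain Euler–Maruyama discretization in two dimensions shows that the transverse force leaves the Boltzmann statistics unchanged:

import numpy as np

rng = np.random.default_rng(0)
mu, T, gamma, k, dt, nsteps = 1.0, 1.0, 2.0, 1.0, 1e-3, 200_000
A = np.array([[0.0, 1.0], [-1.0, 0.0]])          # skew-symmetric matrix
r = np.zeros(2)
traj = np.empty((nsteps, 2))
for i in range(nsteps):
    grad_V = k * r                               # V(r) = k |r|^2 / 2 (placeholder)
    drift = -mu * (grad_V + gamma * A @ grad_V)  # equilibrium force plus transverse part
    r = r + drift * dt + np.sqrt(2 * mu * T * dt) * rng.standard_normal(2)
    traj[i] = r
# The sampled variance should stay close to the Boltzmann value T/k for any gamma.
print(traj[nsteps // 2:].var(axis=0), T / k)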
The steady state distribution is again the Boltzmann distribution ρ_B∝ e^-β∑_i<jV(_i - _j). We first report the results of numerical simulations of a three-dimensional binary Kob-Andersen mixture <cit.> of N_A=800 particles of type A and N_B=200 of type B interacting as V_αβ(r)=4_αβ[(σ_αβ/r)^12-(σ_αβ/r)^6], r≤ 2.5σ_αβ with α,β∈{A,B} and where _AA=1, _AB=1.5, _BB=0.5, σ_AA=1, σ_AB=0.8, σ_BB=0.88. The linear size of the system is 9.4σ_AA and periodic boundary conditions were used. We choose 𝐀 in a block diagonal form with ± 1 elements in the xy plane, without loss of generality. We integrate the equations of motion using a Euler-Heun algorithm with a discretization step calibrated to optimize efficiency while still properly sampling equilibrium properties <cit.>. We first show in Fig. <ref>(a) that the static structure is unaffected by the introduction of transverse forces, demonstrating equilibrium sampling with nonequilibrium dynamics, over a temperature range encompassing a high-temperature almost structureless fluid down to a mildly supercoooled liquid. To estimate the speedup of the sampling, we use the mean-squared displacement Δ r^2(t) for particles A. Its temperature evolution is shown in Fig. <ref>(b) at equilibrium, which displays the development of a two-step glassy dynamics below the onset temperature near T ≈ 1.0. In Fig. <ref>(c), we demonstrate that the introduction of transverse forces accelerates the dynamics of the system. To quantify this acceleration, we extract the diffusion constant, D(γ,T), from the long-time limit of the mean-squared displacements, see Fig. <ref>(a). At fixed γ, there exists a temperature near T^* ≈ 100 that maximizes the increase of the diffusion constant. We suggest that at high temperatures, interactions (including chiral ones) are smeared out by thermal noise which degrades the efficiency. When the supercooled regime is entered more deeply, the degree of acceleration also decreases, because at low temperatures particles spin along circular trajectories within their local cages, see Fig. <ref>. This local motion has a modest influence on the long-time dynamics. To better understand the origin of the acceleration, we measure the odd diffusivity of the particles A, which can be calculated from a Green-Kubo expression <cit.> D_⊥=1/2N_A∑_i=1^N_A∫_0^+∞ t⟨ẏ_̇i̇(t)ẋ_̇i̇(0)-ẋ_̇i̇(t)ẏ_̇i̇(0)⟩ . By symmetry, D_⊥ vanishes for an equilibrium dynamics, and its value usefully quantifies the circular motion shown in Fig. <ref>(d). The temperature dependence of D_⊥ is shown in Fig. <ref>(a) for γ=8. Its magnitude increases with γ, but at fixed γ it rises steeply from 0 to a finite value near T^*. As the system enters its slow dynamical regime, D_⊥ settles to a finite value, including in the arrested glass phase, where it is presumably dominated by the in-cage circular motion created by the transverse forces. This behaviour constrasts with the translational diffusion coefficient which changes by orders of magnitude in the supercooled liquid, and vanishes in the glass. Overall the simulations reveal a non-monotonic temperature dependence of the sampling efficiency of transverse forces, which decreases when temperature is lowered, accompanied by odd-diffusivity which appears insensitive to this evolution. To understand these non-trivial findings, we turn to two analytical approaches, focusing for simplicity on a monodisperse fluid. 
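Before doing so, we note that the Green–Kubo expression above is straightforward to evaluate from stored velocities; a sketch of the estimator (ours, fed here with synthetic placeholder data for which the odd diffusivity should be close to zero):

import numpy as np

def odd_diffusivity(vx, vy, dt, max_lag):
    # 1/2 of the time integral of <vy(t) vx(0) - vx(t) vy(0)>, averaged over
    # time origins and particles; vx, vy have shape (n_frames, n_particles).
    n_frames = vx.shape[0]
    corr = np.empty(max_lag)
    for lag in range(max_lag):
        a = vy[lag:] * vx[:n_frames - lag] - vx[lag:] * vy[:n_frames - lag]
        corr[lag] = a.mean()
    return 0.5 * dt * corr.sum()        # simple rectangle-rule time integral

rng = np.random.default_rng(1)
vx, vy = rng.standard_normal((2, 5000, 100))
print(odd_diffusivity(vx, vy, dt=0.01, max_lag=200))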
First, we consider the mean-field limit which is achieved, for simple fluids, by increasing the space dimension d to infinity <cit.>, while keeping the number of neighbors per space direction of order unity. In this limit, one can derive an effective Langevin equation for one spatial component of the position of a tagged particle with an effective noise that originates from the remaining components and the coordinates of all other particles. We defer technical details to <cit.>. Even out of equilibrium <cit.>, the influence of the bath appears as a sum of a position-independent friction kernel and a noise. The friction kernel and the noise autocorrelation need to be determined self-consistently, in a typical mean-field procedure. For our problem, we find that the position _i of tagged particle i evolves according to _i(t)/ t=-β∫ t' (1+γ A)𝐌(t-t')_i(t')/ t'+Ξ_i(t), where 𝐌 is a d× d memory kernel to be determined, and Ξ_i is a zero-average Gaussian noise with correlations ⟨Ξ_i(t)⊗Ξ_j(t')⟩= δ_ij[2T 1δ(t-t') +(1+γ A)𝐌(t-t')(1-γ A)]. The memory kernel 𝐌 is given by correlations of the pair-potential gradients, 𝐌(t)=∑_j ⟨_i V(_ij(t))⊗_i V(_ij(0)⟩, where _ij=_i-_j. To determine 𝐌 we need to consider the evolution of the relative position =_ij, which can be shown <cit.> to follow 1/2/ t=-(1+γ𝐀)_ V-β/2∫ t' (1+γ A)𝐌(t-t')/ t'+Ξ(t) where the Gaussian noise Ξ has correlations ⟨Ξ(t)⊗Ξ(t')⟩=T δ(t-t')+1/2 (1+γ A)𝐌(t-t')(1-γ A). The procedure is to determine the statistics of as a functional of M, and then to determine the force statistics in the rhs of Eq. (<ref>) as a functional of M, hence obtaining a self-consistent functional equation for M. In practice, even for equilibrium dynamics, M can only be determined numerically <cit.>. To evaluate the diffusion constant we only need the time integral of the kernel which becomes diagonal, M=1M. The diffusion constant is expresssed in terms of its time integral M(γ, T)=∫_0^+∞M(t) t as D(γ,T)=T1+(1+γ^2)βM/(1+βM)^2+(βγM)^2. This result is obtained in even space dimension for a matrix A <cit.> made of d/2 identical 2× 2 blocks with ± 1 entries (nonidentical blocks would require averaging over the blocks, without affecting our conclusions; working in an odd space dimension would involve a single extra space dimension with negligible effect as d≫ 1). In the ergodic phase, one can show that the γ-dependent relaxation time of M scales as τ(γ)=M∝γ^-1 when γ≫ 1. The diffusion constant D(γ,T) therefore behaves as D ∼γ for large γ. Note also that when M(t) does not relax to 0 then M=+∞ and D(γ,T) vanishes. The stationary distribution of the relative position allows one to determine the parameters for which M = +∞, A more careful study of Eq. (<ref>) shows that, in spite of the nonequilibrium nature of the dynamics, the stationary distribution of remains the equilibrium one. While this is expected for the statics, it is much less trivial when considering the dynamical evolution. This implies that the dynamical transition temperature at nonzero γ is unaffected by the transverse forces. Thus, the constraint of Boltzmann sampling is so strong that transverse forces cannot prevent the emergence of diverging free energy barriers leading to ergodicity breaking, in contrast with active forces <cit.> or shear flows <cit.>. Looking at Eq. (<ref>) we see that D is a nonmonotonic function of M. If one assumes that M is a decreasing function of temperature then D too becomes non monotonous as a function of temperature. 
Since the maximum of D occurs at high temperature, it makes sense to evaluate Eq. (<ref>) using a low density approximation for M. It turns out that equilibrium expression of M obtained in <cit.> still holds at nonzero γ, and it produces an evolution of D consistent with Fig. <ref>(a). We also obtain the odd diffusivity, given by D_⊥(γ,T)=-γ T βM+(1+γ^2)(βM)^2/(1+ βM)^2+(γβM)^2 , and which behaves as D_⊥∼γ at large γ. We also see that D_⊥ remains finite below the dynamical transition temperature, when M→∞. In the mean-field limit, a genuine glass phase appears at low temperature in which particles are trapped in a local harmonic environment created by their neighbors. In a harmonic well the spectrum of the Fokker-Planck operator only picks up an imaginary part when transverse forces are applied, leaving the real part of the eigenvalues unchanged, thereby capturing the emergence of circular orbits within the well. The physical picture is that chiral forces eventually lose their accelerating power by wasting the injected energy into circular trajectories. It is unclear whether these mean-field results are valid in finite dimensions. We thus resort to an approximate theory in the spirit of the mode-coupling theory of glassy dynamics <cit.>. To compare with the infinite-dimensional calculation we focus on the self-part of the intermediate scattering function, F_s(,t)=1/N∑_j⟨ e^i·[_j(t) - _j(0)]⟩. The long wavelength limit of F_s(;t) is related to the mean-squared displacement, F_s(→ 0,t) = 1 - q^2/6Δ r^2(t), and therefore the long time dynamics of F_s at large wavelength allows us to obtain the diffusion constant. The standard mode-coupling approximation applies to equilibrium dynamics, though recent inroads <cit.> pave the way for nonequilibrium extensions. The main technical difficulty in our case is the presence of transverse currents, which come in addition to the usual longitudinal ones. Within our own mode-coupling approximation for transverse forces, we obtain the memory kernels (M_∥(,t), M_⊥(,t)) encoding respectively longitudinal and transverse current-current correlations. The evolution of F_s(,t) is given by <cit.> _t F_s+Tq^2 F_s+ β(1+γ^2)M_⊥*F_s = -[β (M_∥ + M_⊥)+ β^2(1+γ^2)[M_⊥* M_∥] ]*_t F_s , where * denotes a time-convolution. The functional expression of M_∥ is the same as in equilibrium <cit.>, while M_⊥=T^2ρ_0 ∫d/(2π)^3( ·/||)^2c(k)^2 F_s(-,t)S(,t) , with ρ_0 is the number density, S(,t) the collective intermediate scattering function and ρ_0 c(k) ≡ 1 - 1/S(k). The same matrix 𝐀 as in our numerics is used. To close Eq. (<ref>) we need an equation of motion for S(;t). This equation, discussed in detail in <cit.>, also predicts that the location of the mode-coupling transition is not influenced by the transverse currents, thus confirming the infinite-dimensional results. The zero-frequency mode of the memory kernel M_α,i=lim_q→ 0∫_0^+∞M_α(q_i,t) t controls the behavior of the diffusion constants, D_∥, x = T1 + (1+γ^2)βM_⊥,x/(1 + βM_∥,x)(1 + βM_⊥,x) + γ^2β^2M_∥,xM_⊥,x , D_∥, z = T/1+βM_∥,z, with D_∥,y=D_∥,x. We note two consequences of working in finite dimension: M_⊥≠ M_∥ and D_∥,z≠ D_∥,x. The expressions for odd diffusion constants are similar to Eq. (<ref>). Assuming that the system falls into a nonergodic regime below some transition temperature T_ MCT, the memory kernels M_⊥(t) and M_∥(t) also saturate at a nonzero value at long times, and the longitudinal diffusion constants vanish. 
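The closed forms above are easy to evaluate at fixed kernels; the sketch below (ours, with arbitrary placeholder kernel values) shows the direct γ-dependence of D_∥,x, while the bulk of the acceleration comes from the self-consistent reduction of the kernels discussed next:

def d_parallel_x(T, beta, gamma, M_par, M_perp):
    num = T * (1 + (1 + gamma**2) * beta * M_perp)
    den = (1 + beta * M_par) * (1 + beta * M_perp) + gamma**2 * beta**2 * M_par * M_perp
    return num / den

def d_parallel_z(T, beta, M_par_z):
    return T / (1 + beta * M_par_z)

T, beta, M_par, M_perp = 1.0, 1.0, 50.0, 50.0    # placeholder kernel integrals
for gamma in (0.0, 2.0, 8.0):
    print(gamma, d_parallel_x(T, beta, gamma, M_par, M_perp))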
Whereas the location of the ergodicity breaking transition is independent of γ, the dynamics in the ergodic phase is not. In particular, assuming that M_∥,i does not exceed its equilibrium counterpart, one can show that the longitudinal diffusion constants for γ≠ 0 are always larger than their equilibrium counterpart. If the diffusion constant is larger, the relaxation of F_s is faster, and thus the value of the zero-frequency limit of the kernels is reduced, self-consistently demonstrating acceleration of the dynamics for γ >0. Remarkably, Eq. (<ref>) shows that the diffusion (quantified by D_∥, z) along the z-direction is also indirectly accelerated by the coupling with the other directions. In conclusion, we found that the acceleration provided by transverse forces in a dense interacting system strongly depends on temperature, which comes as a surprise. The acceleration departs from a simple rescaling of the time, due to both interactions and emerging glassiness, which also lead to non-trivial asymptotic scaling with γ. Transverse forces begin to operate when the relaxation time of the system exceeds τ_Σ∼τ_0/γ^2, but their efficiency decreases in deeply supercooled states leading instead to circular trajectories but only modest acceleration. This picture is corroborated by the behavior of the odd diffusivity, which is small as long as τ_Σ exceeds the relaxation rate of the system, but saturates to a finite value as the glass phase is approached. Our study resorts to a very local, and somewhat uninformed, way of driving the system out of equilibrium. In the more elaborate methods implemented in <cit.>, spatially extended and correlated moves are performed. It is a stimulating open question to find out how, when pushed towards glassiness, these methods compare with the minimal ones investigated here. LB, FG and FvW acknowledge the financial support of the ANR THEMA AAPG2020 grant, along with several discussions with M. Michel, A. Guillin and W. Krauth. GS acknowledges the support of NSF Grant No. CHE 2154241.
http://arxiv.org/abs/2307.02210v2
20230705113345
Modelling the formation and evolution of solar wind microstreams: from coronal plumes to propagating Alfvénic velocity spikes
[ "Bahaeddine Gannouni", "Victor Réville", "Alexis Rouillard" ]
astro-ph.SR
[ "astro-ph.SR", "physics.plasm-ph", "physics.space-ph" ]
Å 0000-0002-1711-1802]Bahaeddine Gannouni IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES, Toulouse, France 0000-0002-2916-3837]Victor Réville IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES, Toulouse, France 0000-0003-4039-5767]Alexis P. Rouillard IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES, Toulouse, France We investigate the origin of mesoscale structures in the solar wind called microstreams defined as enhancements in solar wind speed and temperature that last several hours. They were first clearly detected in Helios and Ulysses solar wind data and are now omnipresent in the ‘young’ solar wind measured by Parker Solar Probe and Solar Orbiter. These recent data reveal that microstreams transport a profusion of Alfvénic perturbations in the form of velocity spikes and magnetic switchbacks. In this study we use a very high-resolution 2.5 MHD model of the corona and the solar wind to simulate the emergence of magnetic bipoles interacting with the pre-existing ambient corona and the creation of jets that become microstreams propagating in the solar wind. Our high-resolution simulations reach sufficiently high Lundquist numbers to capture the tearing mode instability that develops in the reconnection region and produces plasmoïds released with the jet into the solar wind. Our domain runs from the lower corona to 20 Rs, this allows us to track the formation process of plasmoïds and their evolution into Alfvénic velocity spikes. We obtain perturbed solar wind flows lasting several hours with velocity spikes occurring at characteristic periodicities of about 19 minutes. We retrieve several properties of microstreams measured in the pristine solar wind by Parker Solar Probe, namely an increase in wind velocity of about 100 km/s during the stream’s passage together with superposed velocity spikes of also about 100 km/s released into the solar wind. § INTRODUCTION White-light images of total solar eclipses and coronagraphs reveal fine ray-like structures emanating from polar coronal holes. These ‘plumes’ extend outward from the base of the corona and are observed in white-light and extreme-ultraviolet images. They are most commonly found in polar coronal holes, but can also be observed in equatorial coronal holes <cit.>. Plasma and magnetic field data obtained by the two Helios solar probes showed that the scales of these rays are preserved in the evolving interplanetary high-speed solar wind measured close to the Sun. Further studies based on in situ measurements made in the polar solar wind by the Ulysses mission confirmed this result and identified ‘microstreams’ in the form of velocity fluctuations of ±40 km s^-1, higher kinetic temperatures, slightly higher proton fluxes <cit.>. <cit.> showed that X-ray jets are precursors of polar plumes and in some cases cause brightenings of plumes. Microstreams could therefore be the interplanetary manifestation of X-ray jets released during the formation of a plume inside a coronal hole <cit.>. The aim of the present study is to investigate, through high-resolution magneto-hydrodynamic simulations, the mechanisms driving the formation and evolution of plumes and microstreams and their dynamic properties discussed in the next paragraphs. Plumes are typically hazy and are routinely detected in the EUV wavelengths of 171 and 193 <cit.>. It has been debated as to whether coronal plumes or interplume regions may be the source regions of the fast solar wind <cit.>. 
Plumes appear to form after magnetic bipoles erupt in the open magnetic field of coronal holes. According to <cit.>, plumes extend away from photospheric flux concentrations and can last from hours to several weeks, reaching lengths of about 30 solar radii (R_⊙). Plumelets, which are small features within plumes, often exhibit intensity fluctuations on shorter time scales than the overall plume <cit.>. Data from STEREO/EUVI images show that these fluctuations, known as propagating disturbances, can have periods ranging from 5 to 30 minutes <cit.>. The formation of a plume is typically preceded by recurrent jets that emerge from random flux emergence and cancelation, and the plume itself goes through phases of brightening and decay, during which subplumes may be visible <cit.>. The emergence of magnetic flux in the dominant polarity of coronal holes plays an essential role in the heating, the outflow of plasma, and EUV brightening <cit.>. This is likely due to an interchange reconnection process, which takes place when emerging loop systems encounter an open background magnetic field <cit.>. It is also an efficient means of releasing plasma that is otherwise confined to closed field regions into the heliosphere and perhaps contributes to the mass flow of fast and slow solar winds emerging from coronal holes <cit.>. Interchange reconnection has also been one of the suggested mechanisms for the formation of magnetic switchbacks and velocity spikes measured ubiquitously by Parker Solar Probe (PSP) in the nascent solar wind <cit.>. Switchbacks are characterized by large amplitude Alfvénic fluctuations that propagate away from the Sun, with an extensive range of magnetic deflection angles from a few degrees to a full inversion <cit.>. The origins of these features are still under debate, particularly whether they are generated locally in the solar wind or in the lower corona. For instance, the work of <cit.> and <cit.>, suggests that switchbacks may be generated locally in the solar wind through processes involving velocity shears or turbulent flows. On the other hand, other studies, such as the work of <cit.> and <cit.>, propose that switchbacks are formed through interchange reconnection in the lower corona <cit.>. Switchbacks and velocity spikes come in bursts or patches whose spatial and time scales are comparable to those of microstreams <cit.>. These patches of disturbances are particularly intense in streamer flows but are also very clear in solar wind flows originating from deep inside coronal holes <cit.>. Statistical analysis of these patches of switchbacks/velocity spikes <cit.> as well as the analysis of solar wind composition <cit.> point towards an origin of these patches in sudden energy releases at the boundary of supergranules. Since photospheric transport processes force an accumulation of magnetic elements (loops and open fields) near the boundaries of granules and supergranules, interchange magnetic reconnection could occur frequently in these regions. Moreover, in the study of <cit.>, the analysis of PSP co-rotation periods revealed temporal signatures in addition of spatial structure associated with switchback patches. Disentangling spatial and temporal scales in the data and identifying the corresponding processes at the solar surface will be key to evaluating the idea that switchbacks have indeed a solar origin. 
Several studies have recently appeared in the literature that make a tentative association of microstreams with plumes and of individual switchbacks with jetlets <cit.>. In this paper, we investigate the idea that interchange reconnection can arise from the emergence or cancellation of magnetic flux and contribute to the formation of coronal plumes. In particular, we wish to study the effect of the rate and amplitude of flux emergence on the formation of coronal plumes, interchange reconnection and the structure of the resulting jets. An association between plasmoïds formed during reconnection and switchbacks was proposed by <cit.> using kinetic simulations <cit.>. Recent advanced MHD simulations have also looked into the evolution of reconnection outflows, which become compressible Alfvén waves in 2.5-D <cit.> and torsional Alfvén waves in 3-D <cit.> escaping the solar corona. In the present paper, we examine whether the magnetic islands produced through the tearing-mode instability in the reconnection layer could be the source of individual velocity spikes and switchbacks. In order to capture the development of the tearing-mode instability in the solar corona at adequate time and spatial scales, we limit our study to 2.5-D MHD but extend the domain out to 20 Rs. This allows us to simulate the lifetime of an entire microstream and the associated release of multiple microjets to reproduce the form of microstreams measured in situ by PSP. The paper is organized as follows. In Section <ref>, we describe the numerical model. In Section <ref>, we show the results for the main simulation setup, along with some discussion of the results. In Section <ref>, we conclude this study and comment on possible future work. § SIMULATION DETAILS §.§ Model Description We investigate the dynamics of magnetic reconnection in a solar corona undergoing flux emergence. Flux emergence is a process by which new magnetic field lines emerge from the solar surface and enter the corona, leading to the formation of current sheets. One type of reconnection process that has received significant attention is the tearing instability, which occurs when small perturbations in a current sheet grow and lead to the break-up of the sheet into magnetic islands, or “plasmoids”. The tearing mode allows fast reconnection to be reached, provided that the Lundquist number S=L v_A /η is high enough (L is the length of the current sheet, v_A the Alfvén velocity, η the magnetic diffusivity), typically 10^4 in 2D configurations <cit.>. A particularly interesting dynamics occurs when a current sheet system is thinning. Linear theory shows that the tearing mode is then triggered as soon as the sheet aspect ratio reaches ∼ S^-1/3 <cit.>. In our case, as the bipolar flux emerges, a current sheet is formed at the contact between the opposite polarities and interchange reconnection occurs. The current layer is then expected to develop magnetic islands or plasmoids. To study this process, we solve the 2.5D compressible resistive MHD equations, using the PLUTO code <cit.>, a finite-volume shock-capturing code. We employ a second-order Runge–Kutta method to calculate the time step and a fourth-order spatial scheme provided by a parabolic reconstruction. We use a Harten-Lax-van Leer discontinuities (HLLD) solver <cit.>. The solenoidal constraint on the magnetic field is ensured through the constrained transport method <cit.>. 
The equations can be written as follows: ∂/∂ tρ+∇·ρv=0 ∂/∂ tρv+∇·(ρvv-BB+I p)=-ρ∇Φ ∂/∂ t(E+ρΦ)+∇·[(E+p+ρΦ) v -B(v·B) +(η·J) ×B]=Q_h-Q_c-Q_r ∂/∂ tB+∇·(vB-Bv)-η∇^2 B=0 where E ≡ρ e+ρ v^2 / 2+B^2 / 2 is the background flow energy, B is the magnetic field, ρ is the mass density, v is the velocity field, p=p_th +B^2 / 2 is the total pressure (thermal and magnetic), I is the identity matrix, J= ∇×B is the electric current , e=p_th/(γ-1)ρ is the specific internal energy density and η is the magnetic diffusivity. Finally, γ = 5/3 is the ratio of specific heats, and the terms Q_* represent the terms of volumetric energy gain and loss, heating, thermal conduction, and radiative losses. The system is solved in spherical coordinates (r, θ), and the gravity potential, Φ=-GM_⊙/r The heating term is defined as: Q_h=F_h / H(R_⊙/r)^2 exp(-r-R_⊙/H) The energy flux from the photosphere, denoted as F_h, has a value of 1.5 × 10^5 erg cm^-2 s^-1 <cit.> and H ∼ 1 R_⊙. The thermal conduction is written as Q_c=∇·(αq_s+(1-α) q_p) where q_s=-κ_0 T^5 / 2∇ T is the Spitzer-Härm thermal conduction with κ_0=9 × 10^-7cgs, and q_p=3 / 2 p_thv_e is the electron collisionless heat flux described in <cit.>. The coefficient α=1 /(1+(r-R_⊙)^4 /(r_coll -R_⊙)^4) creates a smooth transition between the two regimes at a characteristic height of r_coll =5 R_⊙. We have used an optically thin radiation cooling prescription, Q_r=n^2 Λ(T) with n the electron density and T the electron temperature. Λ(T) follows the prescription of <cit.>. In the PLUTO code, all quantities are expressed in dimensionless units derived from physical quantities divided by normalization units, appropriate for solar wind conditions. We consider unit length L_0=1 R_⊙∼ 6.9570 × 10^5 km, unit density ρ_0=1.67 × 10^-12 kg m^-3. The velocities are normalized to the keplerian velocity v_0 ∼ 437 km/s. The characteristic magnetic field and unit time are B_0=√(4 πρ_0 v_0^2)∼ 2 G and t_0=L_0 / U_0=1593  s respectively, we set the magnetic diffusivity η to 10^12cm^2/s . The simulation is integrated on a non-uniform grid with a strong refinement of Δ r=10^-4 in the 0.1R_⊙ region in code units and a coarser grid extending up to 20R_⊙. The range for θ is [π/2-0.145,π/2+0.305] with a stretched grid cell size of Δθ=2 × 10^-4, the total grid size is 1536 × 1536. We consider reflective boundaries for the velocity components across θ_min and θ_max. At r=20R_⊙ > r_A,f the fast magnetosonic point, we use outflow boundary condition. For the inner boundary condition, we apply a time-dependent boundary condition, to control the emergence rate of the two polarities. We initialize the atmosphere with a supersonic Parker wind and a purely radial field of 2G at the surface. We assume coronal conditions and do not include a chromosphere or transition region. We let the system reach a steady state before starting the flux emergence. A bipole of 10G amplitude and ∼ 10^∘ of latitudinal extent is projected on the spherical harmonics base (with l_max = 65), and each coefficient is added to the background field in the boundary and increased linearly during the emergence phase. The spherical harmonics decomposition ensures that ∇· B=0 at all times. We focus on four setups with emergence rates of 1.8G / h, 2.26G / h, 3G / h and 4.52G / h, which are described in Figure <ref>. The duration of emergence for each setup is 5.53, 4.4, 3.3, and 2.2 hours, respectively. 
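For readers who want to reproduce the prescribed source terms, a short Python sketch (with the values quoted above; the sampled radii are illustrative) evaluates the heating profile Q_h(r) and the blending coefficient α(r) between the Spitzer–Härm and collisionless heat fluxes:

```python
import numpy as np

R_SUN_CM = 6.957e10          # solar radius [cm]
F_H = 1.5e5                  # energy flux from the photosphere [erg cm^-2 s^-1]
H = 1.0 * R_SUN_CM           # heating scale height [cm]
R_COLL = 5.0 * R_SUN_CM      # transition height for the heat-flux blending [cm]

def q_heat(r):
    """Volumetric heating Q_h(r) = (F_h/H) (R_sun/r)^2 exp(-(r - R_sun)/H)."""
    return F_H / H * (R_SUN_CM / r)**2 * np.exp(-(r - R_SUN_CM) / H)

def alpha_blend(r):
    """Weight between Spitzer-Harm (alpha ~ 1) and collisionless (alpha ~ 0) heat flux."""
    return 1.0 / (1.0 + (r - R_SUN_CM)**4 / (R_COLL - R_SUN_CM)**4)

for x in (1.0, 2.0, 5.0, 10.0, 20.0):    # illustrative radii out to 20 R_sun
    r = x * R_SUN_CM
    print(f"r = {x:5.1f} R_sun   Q_h = {q_heat(r):9.3e} erg cm^-3 s^-1"
          f"   alpha = {alpha_blend(r):.3f}")
```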
In 2.5D, the Lundquist number S=L v_A/η should be higher than a critical value of approximately 10^4 to trigger the tearing instability <cit.>. Hence, we set an explicit value of η to be comfortably above this threshold (note that this depends on the length of the current sheet, and will vary with time as the flux emergence proceeds). But we must also be careful that the numerical resistivity remains smaller than or equal to the value of η chosen. Based on our experience of the onset of the tearing mode with the PLUTO code <cit.>, we find that a value of η = 10^12 cm^2/s and the above described grid resolution satisfy these requirements with S_η∼ S_num∼ 10^4. We have also performed some simulations with η = 10^11 cm^2/s, and did not notice significant changes. § RESULTS §.§ Study of the current sheet formation and aspect ratio Figure <ref> shows the time evolution of the simulation for a rate of emergence 1.8G/h. We chose three characteristic phases of the emergence and relaxation phase, showing the logarithm of the out-of-the-plane current density and the radial velocity (top and bottom panels). In the early stages of emergence, the current sheet is created immediately and the tearing instability is triggered. The current sheet (CS) is disrupted several times, but continues to lengthen as the emergence continues and reaches its plateau. Close to the peak of the emergence (middle panel), reconnection occurs and plasmoids are ejected on both sides of the CS. Finally, after the emergence has stopped, the CS slowly decays, shortening, thus stopping the reconnection process. This stage can be seen in the right panel of Figure <ref>. CS's length (L) is automatically measured by calculating L=| B |/ |∇× B | then fixing a threshold to locate the current sheet <cit.>. Figure <ref>, shows the evolution of the current sheet length obtained by this method. <cit.> introduced the concept of "Ideal" tearing, in which the growth rate of the instability is independent of the Lundquist number. Assume that the ratio of the thickness of the current sheet a to its length scale L scales as S^-α, S is the Lundquist number and α is the power law index. There is a critical value of α at which the growth rate is constant, γ t_A ∼ S^-1+3 α/2=Const, which is equal to 1/3. If α is greater than 1/3, the growth rate tends to diverge with increasing Lundquist number, while if α is less than 1/3, the growth rate tends to zero. We thus expect the reconnection to occur precisely at α=1/3, when the current sheet forms from large aspect ratios. In the first panel of Figure <ref>, we show the estimated value of α as a function of time. We notice that during the flux emergence phase, α increases and reaches values very close to 1/3, when reconnection begins. This suggests that the tearing mode is effectively ideal, and that the reconnection rate obtained in the simulations should be close to realistic values. Interestingly, as shown in Figure <ref>, the Alfvén time t_A = L/v_A, varies much less than the current sheet length, and remains close to 3 minutes throughout the emergence phase. This is due to the linear dependence of L, that follows the linear increase of v_A during the emergence (v_A is computed in the bipole, away from the current sheet, as in usual tearing mode analysis). The value of t_A is also constant for different emergence rate, which will have consequences on the measured periodicity of the reconnection jets (see section <ref>). 
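The two requirements discussed here, S comfortably above ∼10^4 and the ideal-tearing trigger a/L ∼ S^-1/3, can be checked with a few lines; the current-sheet length and Alfvén speed below are illustrative round numbers, not the values realized in the runs.

```python
R_SUN_CM = 6.957e10
ETA = 1.0e12                 # explicit magnetic diffusivity [cm^2/s], as in the runs

def lundquist(L_cm, v_A_cm_s, eta=ETA):
    """Lundquist number S = L v_A / eta."""
    return L_cm * v_A_cm_s / eta

def ideal_tearing_aspect_ratio(S):
    """Critical inverse aspect ratio a/L ~ S^(-1/3) at which fast tearing sets in."""
    return S**(-1.0 / 3.0)

# Illustrative numbers: a current sheet of ~0.1 R_sun and v_A ~ 500 km/s.
L, v_A = 0.1 * R_SUN_CM, 5.0e7
S = lundquist(L, v_A)
print(f"S = {S:.2e}  (tearing-unstable regime requires S >~ 1e4: {S > 1e4})")
print(f"critical a/L ~ {ideal_tearing_aspect_ratio(S):.3e}")
print(f"Alfven time t_A = L/v_A = {L / v_A / 60:.1f} min")
```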
Once the flux emergence phase is complete, we observe a decrease in both the current sheet length and the value of α. This indicates that the magnetic field is settling and converging to the X-point. The decrease in α indicates that the CS (current sheet) becomes thicker and more diffuse, and the magnetic field lines are less tightly packed. Once the current sheet starts thickening, the tearing reconnection essentially stops. As the emergence phase ceases, a quasi-steady reconnecting phase begins, during which the CS starts to diffuse. To quantify the behavior of the current sheet, we calculated L over time for all our simulations, as illustrated in Figure <ref>. There is a clear linear relationship between L and the rate of flux emergence, suggesting that the rate of emergence is a key factor in the evolution of L. Furthermore, we observed that the decay of the CS followed a linear trend with the same slope for the same magnetic diffusivity. This implies that the rate of decrease in L over time is consistent and depends on the chosen η value. A higher/lower η leads to a slower/faster decay of L, as η affects the efficiency of the steady magnetic reconnection. §.§ Formation and propagation of jets and reconnection plasmoïds In all the setups, we observed recurrent jets and velocity spikes. Plasmoids repeatedly form and are ejected from the current sheet, triggering sequences of perturbations of the plasma feeding the solar wind above the cusp of the forming pseudo-streamer. In Figure <ref>, we present the maximum value of the radial velocity for each flux emergence rate setup. It shows that the flux emergence rate has a direct impact on the speed of the triggered jet: the higher the emergence rate, the larger the amplitude of the jet, with jet amplitudes varying from 50 to 200 km/s. Furthermore, the reconnection process heats the plasma and creates density structures that can be related to EUV observations. §.§.§ Synthetic EUV emission A 2.5D MHD simulation provides electron densities and temperatures (r,θ, ϕ) inside a 2-D plane. We can construct the 3-D cube necessary to compute EUV images by replicating the 2-D plane over a depth L_los along the line of sight, assumed perpendicular to the 2-D simulation plane. The emission (dI_j) from the plasma in each cell (j) of the cube can then be calculated. For the Solar Dynamics Observatory (SDO) EUV bands, the dominant process assumed for the observed emission is excitation by electron-ion collisions followed by spontaneous emission. The expression for the emission is given by: dI_j = A_f · G(T_j, n_j) · n_j^2 · dV Here, A_f represents the spectral response function of the instrument being simulated, G(T_j, n_j) encapsulates the atomic physics involved in the spectral line formation and is dependent on the local electron density (n) and temperature (T). The values of A_f are provided by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) instrument team and distributed in the SolarSoft library, while G(T_j, n_j) is the contribution function calculated using the CHIANTI atomic physics database <cit.>, assuming ionization equilibrium and coronal composition. The values of n_j and T_j are derived from the output of the MHD model. The simulated AIA images are generated in units of DN/s, which is the calibrated data unit. Using this methodology, the synthetic images should capture the emission properties of the corona and allow comparison with the observational data obtained by SDO/AIA. 
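The per-cell emission formula translates directly into a line-of-sight sum. The sketch below is a minimal stand-in: the contribution function is a toy Gaussian in temperature (in place of the CHIANTI tables), the instrument response is set to unity, and the column values are invented for illustration.

```python
import numpy as np

def toy_contribution_function(T, T_peak=1.5e6, width=0.3e6):
    """Toy stand-in for the tabulated G(T, n) of the 193 band (Gaussian in T)."""
    return 1e-24 * np.exp(-0.5 * ((T - T_peak) / width)**2)   # arbitrary units

def synthetic_intensity(n_e, T_e, dV, A_f=1.0):
    """Sum dI_j = A_f * G(T_j, n_j) * n_j^2 * dV over the cells of a LOS column."""
    return np.sum(A_f * toy_contribution_function(T_e) * n_e**2 * dV)

# Illustrative column: 100 cells with a hot, dense plasmoid in the middle.
n_cells = 100
n_e = np.full(n_cells, 1e8)          # cm^-3
T_e = np.full(n_cells, 0.8e6)        # K
n_e[45:55], T_e[45:55] = 5e8, 1.5e6  # plasmoid heated to ~1.5 MK
dV = 1e24                            # cm^3 per cell (assumed)

print(f"synthetic 193-like intensity: {synthetic_intensity(n_e, T_e, dV):.3e} (arbitrary units)")
```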
Figure <ref> shows the EUV emissions observed in 193 and 171. We can clearly see that the plasmoid is discernible in 193. For this reason, the 193 band was chosen for the main analysis, as its response function peaks at a temperature of approximately 1.5 MK, which is reached by the plasmoids. Figure <ref> shows the integrated intensity of the emission in 193 for, respectively, several values of magnetic diffusivity and different flux emergence rates, where we have captured the plasmoids more clearly so that we can focus on the emission triggered by magnetic reconnection. We observe that the rate of flux emergence influences the amplitude of brightening in EUV 193. A fast magnetic flux emergence rate of 4.52 G/h results in an emission amplitude of 3×10^4 DN/s, while a slow flux emergence rate of 1.8 G/h results in an EUV brightness that is ten times smaller. §.§.§ Periodicity of jetlet-associated brightenings versus periodicity of outflows Our simulations reveal that the emerging bipole exhibits rapid brightening in the EUV 193 channel as it interacts with the overlying magnetic field. This brightening is mainly driven by magnetic energy being converted into heat in the current sheet layer, as well as into kinetic energy in the sequence of outflowing jets, via magnetic reconnection. EUV brightenings and jets should be considered as the macroscopic signatures of local and bursty energy releases that develop along the reconnecting layer at the surface of the emerging bipole. Investigating the quasi-periodicity of these energy releases is important because it provides insights into the relationship between energy releases and the rate of magnetic flux emergence, which drives the underlying footpoint exchange mechanism studied here. We performed a wavelet analysis using the wavelet software package of <cit.> to study the periodicity of the oscillating emission intensity and the radial velocity V_r of the jets for the different flux emergence rates. Figure <ref> shows the wavelet spectral analysis of the emission and V_r for a flux emergence rate of 2.26 G h^-1. The signal was detrended by subtracting a linear polynomial fit from the original data; the detrended signal was then normalized by dividing it by its standard deviation, so that it has zero mean and unit variance. After obtaining the detrended and normalized signal, the next step was to perform a wavelet analysis using a chosen mother wavelet and its associated parameters. The wavelet transform was used to decompose the signal into its different frequency components over time. The resulting wavelet coefficients were then used to reconstruct the signal using the inverse wavelet transform. The final result is a detrended signal that is free of any linear trend and can be further analyzed for its frequency and time domain characteristics using the wavelet coefficients to obtain the resulting periodicity. Figure <ref> shows the periodicity of radial velocity spikes and the 193 EUV emission for several flux emergence rates and for several magnetic diffusivity values. First, we notice that the radial velocity oscillation is unchanged with the flux emergence rate. We observe a strong correlation between the periodicity of the jet outflows and the emission intensity oscillations. The periodicity of the radial velocity variations does not change with varying flux emergence rate, suggesting that the velocity spikes are launched at ideal time scales. 
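As a lightweight alternative to the wavelet package used here, the detrend-and-normalize step followed by a simple FFT periodogram already recovers the dominant period on a synthetic test signal; the sketch below is a stand-in for illustration, not the analysis actually performed.

```python
import numpy as np

def dominant_period(signal, dt_minutes):
    """Detrend (remove a linear fit), normalize, and return the dominant period from an
    FFT periodogram -- a simpler stand-in for the wavelet analysis described in the text."""
    t = np.arange(signal.size)
    trend = np.polyval(np.polyfit(t, signal, 1), t)
    s = signal - trend
    s /= s.std()
    power = np.abs(np.fft.rfft(s))**2
    freqs = np.fft.rfftfreq(s.size, d=dt_minutes)
    k = 1 + np.argmax(power[1:])          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic test: a drifting baseline plus a 19-minute oscillation, 1-minute cadence.
rng = np.random.default_rng(0)
t_min = np.arange(0.0, 600.0, 1.0)
v_r = 0.05 * t_min + 100 * np.sin(2 * np.pi * t_min / 19.0) + 10 * rng.standard_normal(t_min.size)
print(f"recovered period ~ {dominant_period(v_r, 1.0):.1f} minutes")
```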
In fact, magnetic reconnection driven by flux emergence acts as a trigger for the formation and release of plasmoids, which are plasma structures. A subset of these plasmoids propagates within the CS, either sunward or anti-sunward. However, the sunward propagation is less dominant than the anti-sunward one. The presence of plasmoids propagating in both directions influences the periodicity of the observed velocity spikes. As a result, the periodicity of the velocity spikes is shorter than the full current sheet disruption and reformation cycle, as the jets correspond only to the outward-moving plasmoids. This constant periodicity over the various simulation parameters can be interpreted as follows: the Alfvén time t_A away from the current sheet is approximately 3 minutes, and as we are in the ideal tearing reconnection regime γ t_A, the normalized growth rate, is close to unity. Hence, once the CS has reached the ideal tearing ratio, the periodicity of jets is directly related to the time for the CS to be disrupted, i.e., a few times t_A, which leads to the observed 19 minutes. Moreover, as shown in Figure <ref>, the Alfvén time stays roughly constant throughout the emergence phase, as the current sheet lengthens and adapts to the increasing magnetic field strength in the bipole. Indeed, in our setup it is strictly equivalent to increase either the emergence speed or the bipole amplitude, and thus t_A remains close to this 3 minute value in all the setups. Note that, as S ∼ 10^4, the regime γ t_A ∼ const may not be fully reached yet. This nonetheless suggests that the periodicity of EUV brightenings and velocity jets may follow a universal rule and depend only on the local value of t_A. These results are moreover consistent with previous observations reported by <cit.>, who also found that the oscillations in the plume exhibit a range of periods similar to those of the jet. §.§ Alfvén waves versus magnetoacoustic waves: Which wave mode dominates the coronal jets? There has been a debate in the community as to whether propagating disturbances (PDs) in plumes observed in the solar corona are plasma outflows or slow-mode waves <cit.>. The work of <cit.> suggests that both of these interpretations may be correct, as reconnection at the footpoints of the plumelet drives flows that can be observed as "jetlets" and generate Alfvénic fronts, but the dense material in the jets travels more slowly and an inhomogeneous wake of shear and compressible turbulence should be observed between the jet and the Alfvénic front. In our simulations, the flux emergence increases L, and when the tearing instability is triggered, plasmoids are then generated that move up and down the X-line. It is important to understand what type of wave this process generates: a transverse Alfvén wave or magnetoacoustic waves? To check this, we start with the linearized equations of ideal magnetohydrodynamics (MHD); three well-known modes can be distinguished: fast and slow magnetoacoustic waves, as well as (transverse) shear Alfvén waves. For the shear Alfvén wave, we have: v_pA=ω/k=v_Acosθ δv/v_A= ±δB/B_0 δρ=0 δ|𝐁| =0 where v_A is the Alfvén speed, θ is the angle between wavevector k and magnetic field B, δv, δB, and δρ are the perturbed plasma velocity, magnetic fields, and plasma density, respectively, and B_0 is the background magnetic magnitude. 
For slow and fast waves, the phase speeds are v_ph±^2 =(ω/k)^2 =1/2(v_S^2+v_A^2) ±1/2[(v_S^2+v_A^2)^2-4 v_S^2 v_A^2 cos^2 θ]^1/2 To simplify our analysis, we focus on the case where v_S << v_A, which is verified when r<R_A (below the Alfvén radius). We then have for fast magnetoacoustic waves <cit.>: δρ/ρ_0 =δ|𝐁|/B_0 v_ph+^2 ≅ v_A^2 (|δρ/ρ_0|< 0.1) and for slow magnetoacoustic waves: |δρ/ρ_0|=|δ𝐯/v_S| v_ph-^2 ≅ v_s^2 cos^2 θ (|δρ/ρ_0|> 0.1) δ|𝐁|≅ 0 We show in Figure <ref> a 2-D map of the density variations relative to the local density in the high corona and the nascent solar wind. These density variations suggest compression waves are triggered and propagate along the plume that has formed above the cusp of the emerging bipoles. These density fluctuations originate in the plasmoids ejected outwards during the magnetic reconnection process. As the size of magnetic islands becomes larger, the medium surrounding the plasmoids is compressed by the outflowing structure, leading to changes in the density. Figure <ref> presents the magnetic magnitude perturbation, density fluctuation, and magnetic field fluctuation; a compelling correlation emerges between the fluctuations in density and the perturbation in the magnetic field. Upon analyzing the data presented in Figure <ref>, a clear relationship between density fluctuations and velocity perturbations emerges. It is evident that |δρ/ρ_0|∼|δ𝐯/v_S|. Given the additional constraint that |δρ/ρ_0|> 0.1 and that we have checked that δ|𝐁|/B_0 ≅ 10^-5≪ 1, the characteristics exhibited by the jets align remarkably well with the behavior expected from slow magneto-acoustic waves. These findings provide compelling evidence supporting the notion that the observed jet is a direct outcome of the interchange reconnection process. It is essential to compare the Alfvénicity of all setups and see if the emergence rate has any effect on the behavior of the generated wave. In particular, high Alfvénicity can lead to more efficient wave propagation, while low Alfvénicity can result in weaker waves that are more easily dissipated <cit.>. The emergence rate can also affect the strength of the waves, as faster emergence rates can lead to stronger wave amplitude. Four heliocentric radial distances (in R_⊙) were chosen within the bipole stalk, identified by their polar coordinates (r,θ)=[(1.5,1.5),(2.1,1.5),(3.4,1.5),(11.9,1.5)], where θ is the polar angle expressed in radians. The time series of the radial magnetic field and radial velocity were then traced at each of these positions, in order to calculate their averages over the θ direction and over time. To do this, we used the tangential average of a variable Z, defined as: Z̅≡1/(θ_max-θ_min)∫_θ_min^θ_max d θ Z, where θ represents the angle in the θ plane, and Z is the variable whose spatial average we are interested in. The time-moving average of a variable is represented by the symbol ⟨⋯⟩. The deviation of a variable from its average value over time and horizontal space is represented by δ Z ≡Z̅ - ⟨Z̅⟩. We define the cross helicity, H_c, as H_c=2 (δ𝐯_𝐀·δ𝐯)/(δ𝐯_𝐀^2+δ𝐯^2). H_c quantifies the degree of correlation in the fluctuating velocity and magnetic field components. This provides information on the nature and direction of the propagating waves. In the case of Alfvén waves, the fluctuating components of the velocity and magnetic fields, represented by δ v and δ v_A, respectively, satisfy δ v=±δ v_A, where δ v_A is the perturbed Alfvén velocity. 
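The diagnostic criteria above can be wrapped into a small classifier, together with the cross helicity H_c; the numerical thresholds standing in for "≅ 0" and the sample fluctuation amplitudes below are assumptions made for illustration, not values taken from the simulations.

```python
import numpy as np

def classify_mode(drho_over_rho, dB_over_B, dv_over_vs):
    """Rough mode diagnostic following the criteria in the text (v_S << v_A assumed);
    the 1e-3 thresholds used to represent 'approximately zero' are arbitrary choices."""
    if abs(drho_over_rho) < 1e-3 and abs(dB_over_B) < 1e-3:
        return "shear Alfven (incompressible)"
    if abs(drho_over_rho) > 0.1 and abs(dB_over_B) < 1e-3 \
            and np.isclose(abs(drho_over_rho), abs(dv_over_vs), rtol=0.5):
        return "slow magnetoacoustic"
    if np.isclose(abs(drho_over_rho), abs(dB_over_B), rtol=0.5) and abs(drho_over_rho) < 0.1:
        return "fast magnetoacoustic"
    return "mixed / undetermined"

def cross_helicity(dv, dv_A):
    """H_c = 2 (dv_A . dv) / (|dv_A|^2 + |dv|^2); H_c ~ -1 for outward-propagating
    Alfven waves when B_0 is radial and positive (sign convention of the text)."""
    return 2.0 * np.dot(dv_A, dv) / (np.dot(dv_A, dv_A) + np.dot(dv, dv))

# Example fluctuation levels of the order reported for the jets (assumed values).
print(classify_mode(drho_over_rho=0.2, dB_over_B=1e-5, dv_over_vs=0.2))
dv, dv_A = np.array([0.1, -0.02, 0.0]), np.array([-0.09, 0.025, 0.0])
print(f"H_c = {cross_helicity(dv, dv_A):+.2f}")
```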
The propagation direction of Alfvén waves is indicated by the sign in the definition. A '+' sign denotes propagation antiparallel to the local mean magnetic field B_0, while a '-' sign denotes propagation parallel to B_0. In the solar wind, particularly in the inner heliosphere, Alfvén waves predominantly propagate outward from the Sun. In the following analysis, we employ the same convention for the sign of the correlation between the velocity and magnetic fields. The '-' sign is used to indicate fluctuations propagating outward into the heliosphere, while the '+' sign denotes inward-propagating fluctuations. If H_c is close to -1, there is an anti-correlation, expressed in terms of the Elsässer variable z^-=δ v- δ v_A, characteristic of upward-propagating Alfvénic waves. Figure <ref> shows the time series of both tangential and radial perturbations of the Alfvén and plasma velocities for several locations in the radial direction. The negative correlation between δ v_A,r and δ v_r that we observed in our simulation indicates that when the velocity fluctuation of Alfvén waves (δ v_A,r) increases, the plasma velocity fluctuation (δ v_r) decreases and vice versa. This suggests that the Alfvén waves are moving outward. Since the background magnetic field is radial and positive, it is expected that the Alfvén waves will primarily propagate in the same direction as the positive radial magnetic field, away from the Sun. The radial perturbation decreases with solar radius, as does the tangential one, owing to the diffusive nature of slow magnetoacoustic waves; at higher altitudes the plasma perturbation is less damped and converges to the incompressible Alfvén wave shown in Figure <ref>. §.§ Stable oscillation Various studies have investigated the transverse motion of coronal jets <cit.>. Our results reveal a maximum transverse motion of about v_θ∼ 20 km/s, which aligns with the studies cited above. Figure <ref> illustrates the whip-like motion following the triggered reconnection: the evolution of the current sheet leads to the propagation of velocity spikes characterized by developing fluctuations along the left side of the pseudo-streamer stalk. A similar behavior can be observed in the density perturbation, as shown in Figure <ref>, where a wisp-like structure is omnipresent, with velocity spikes that are initiated by magnetic reconnection within the CS and move away from the stalk, as previously reported in observations <cit.>. In particular, these whip-like oscillations consistently align on one side, contributing to the formation of a stable transverse oscillation phenomenon in Figure <ref>. The emergence rate affects the jet expansion in the θ direction; this lateral expansion, which we see in our simulation, has been observed in <cit.>, describing expanding jets as "curtain-like spires". Our results thus give two flavors of transverse motion: the oscillatory motion, which comes from ideal tearing reconnection, and the expanding motion, which is influenced by the flux emergence rate. § DISCUSSIONS In this work, we have studied the effect of magnetic flux emergence in 2.5D resistive MHD simulations of the solar corona and wind. Our study shows that the emerging process leads naturally to interchange reconnection with the ambient coronal solution and creates jets, or velocity spikes, that then propagate into the solar wind. We observe two main phases of the reconnection process. 
First, shortly after the start of the emergence, the current sheet is created, then lengthens and thins until it reaches the aspect ratio a/L ∝ S^-1/3. Fast reconnection then proceeds through the so-called 'ideal' tearing instability, creating plasmoids that are ejected either towards the Sun and the inner boundary or towards the fan of the pseudo-streamer created in the corona. The islands propagate at high speed along the CS and appear to hit the stalk of the pseudo-streamer. The plasmoid is eroded and triggers slow magneto-acoustic waves with jets of amplitude up to 200 km/s. Following the completion of the flux emergence phase, we observe a decrease in both the current sheet length and α. This suggests that the magnetic field is settling into a more stable state. The current sheet then diffuses and no more bursty reconnection occurs. The recent study of <cit.> examined interchange reconnection with 3D ideal MHD simulations of pseudo-streamers in the solar corona. Although they reach similar conclusions on the creation of Alfvénic structures in the fan of pseudo-streamers, our work complements and differs in a few important points. First, by precisely controlling the explicit resistivity of the model, we show that the tearing instability is in the ideal regime, which ensures that the reconnection properties should be relatively independent of the Lundquist number and close to the low coronal regime where S ∼ 10^14. Second, while <cit.> triggers reconnection by surface motions, emergence suffices in our case. Finally, varying the emergence rate of the bipole, or equivalently the amplitude of the emerging flux, we have shown that the periodicity of the jets matches the periodicity of the EUV emission of the plasma and that it is roughly independent of the emergence rate. This is due to the fact that the characteristic Alfvén time t_A = L/v_A ∼ 3 minutes remains unchanged for higher emergence rates and magnetic field amplitudes, as the current sheet lengthens proportionally to the Alfvén speed. The time between each jet is thus the time for the current sheet to be disrupted, i.e., a few t_A, or 19 minutes. These findings are consistent with previous observations reported by <cit.>, further supporting the connection between plume oscillations and jet periodicity. Nevertheless, the rate of flux emergence has a significant impact on the observed emission amplitudes in the extreme ultraviolet (EUV) range. Higher flux emergence rates correspond to larger emission amplitudes, as well as higher amplitudes of the velocity spikes. Several recent studies based on the observations of Parker Solar Probe suggest that switchbacks may be caused by jetlets originating from small bipoles located at the base of coronal plumes in coronal holes <cit.>. <cit.> already suggested that the jetlets may also generate microstreams, which are fluctuations in solar wind speed and density observed in polar coronal holes. Although our current results do not directly demonstrate the generation of magnetic reversals through the emergence of bipoles, they do reveal the presence of Alfvénic perturbations that could potentially evolve into magnetic switchbacks. The study of <cit.> shows a somewhat different structure in 3D, with torsional Alfvén waves launched from the pseudo-streamer fans. Yet, no full reversals (or switchbacks) seem able to survive outside of the closed magnetic structures in the simulations. 
Nonetheless, true switchbacks could be formed later on during the propagation in the solar wind, as non-linear developments of seed Alfvén waves, as suggested by <cit.> and <cit.>. This emphasizes the need for further observations and MHD simulations to establish a definitive relationship between magnetic switchbacks and interchange reconnection in the chromosphere and transition region beneath plumes. § ACKNOWLEDGEMENTS The research of BG, VR and APR was funded by the ERC SLOW_SOURCE (DLV-819189). The authors are grateful to Kévin Dalmasse, Benoit Lavraud, Peter Wyper, Marco Velli and Nour E. Raouafi for insightful discussions. The authors also thank A. Mignone and the PLUTO development team, on which the numerical work presented in this paper is based. The 2.5D MHD simulations were performed on the Toulouse CALMIP supercomputer and the Jean-Zay supercomputer (IDRIS), through the GENCI HPC allocation grant A0130410293. This work also benefited from financial support from the Centre National des Études Spatiales (CNES).
http://arxiv.org/abs/2307.00517v1
20230702085133
The novel Tauberian conditions associated with the $(\overline{N},p,q)$ summability of double sequences
[ "Zerrin Önder", "Ekrem Savaş", "İbrahim Çanak" ]
math.GM
[ "math.GM", "40A05, 40E05, 40G99" ]
theoremTheorem[section] *thm-nonTheorem lemma[theorem]Lemma corollary[theorem]Corollary definition[theorem]Definition proposition[theorem]Proposition remark[theorem]Remark observation[theorem]Observation example[theorem]Example *exp-nonExamples notation[theorem]Notation plain genericthm[theorem] namedtheorem[1] abstract 1]Zerrin Önderfunding zerrin.onder11@gmail.com 1]Ekrem Savaş ekrem.savas@usak.edu.tr 2]İbrahim Çanakcor1 ibrahim.canak@ege.edu.tr [funding]This author is supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under 2218 - National Postdoctoral Research Fellowship Program (Grant No. 118C577). [cor1]Corresponding author [1]organization=Usak University, addressline=Department of Mathematics, postcode=64000, city=Usak, country=Turkey [2]organization=Ege University, addressline=Department of Mathematics, postcode=35100, city=Izmir, country=Turkey In this paper, our aim is to make a novel interpretation of relation between (N,p,q) method, being product of relevant one-dimensional summability methods, and P-convergence for double sequences. In accordance with this aim, we derive some Tauberian conditions, controlling O_L- and O-oscillatory behavior of a double sequence in certain senses, from (N,p,q) summability to P- convergence with some restrictions on the weight sequences. As special cases, we indicate that O_L-condition of Landau type with respect to (P_m) and (Q_n) and O-condition of Hardy type with respect to (P_m) and (Q_n) are Tauberian conditions for (N,p,q) summability under some additional conditions. Therefore, these results contain all of the classical Tauberian theorems including slow decrease or slow oscillation conditions in certain senses. Double sequencesconvergence in Pringsheim's sense (N,p,q) summabilityregularly varying sequencesslowly decreasing sequencesslowly oscillating sequencesTauberian conditions and theorems weighted mean summability method [2010] 40A0540E0540G99 § INTRODUCTION In the early 1900s, which the concept of convergence in Pringsheim's sense (or concisely P-convergence) came to light, the evolvement of summability theory for single sequences to multiple ones was in its infancy. After the concept of double sequence was investigated by Hardy <cit.> and Bromwich <cit.> in detail, studies on this new-type sequences had gathered tremendous momentum. When it comes to the weighted mean methods applied to double sequences, it is (to our knowledge) encountered with Baron and Stadtmüller <cit.> as a beginning. In <cit.>, they analyzed relations between (N,p,q) method, being product of related one-dimensional methods, and P-convergence for double sequences and they revealed that necessary conditions for (boundedly) P-convergence of a double sequence which is (boundedly) (N,p,q) summable are O-condition of Hardy type relative to (P_m) and (Q_n) with regularly varying weights (P_m) and (Q_n). Using non-factorable weights instead of factorable weights used in the preceding one as base, Stadtmüller <cit.> both generalized O_L-Tauberian conditions given by Móricz <cit.> for (C, 1, 1) method and demonstrated that these conditions could be weakened. Following <cit.>, Chen and Hsu <cit.> established some Tauberian theorems for double sequences dealing with implication from (N,p,q) summability to P-convergence under Landau-type conditions, Schmidt-type slow decrease conditions and more general conditions involving the concept of deferred means. 
Reducting assumptions asserted by Stadtmüller <cit.>, Móricz and Stadtmüller <cit.> examined some conditions needed for (boundedly) (N,p,q) summable double sequences to be (boundedly) P-convergent by using the classes Λ_u and Λ_ℓ constructed based on non-factorable weights. Lastly, Belen <cit.> introduced the concepts of double weighted generator sequences in certain senses, which represent difference between double sequences and their (N,p,q) means, and pointed out that certain conditions formed via these sequences such as P_m-1Δ_10V_mn^11^(0)(Δ_11(u))=O_L(p_m) and Q_n-1Δ_01V_mn^11^(0)(Δ_11(u))=O_L(q_n) are Tauberian conditions for (N,p,q) method with some additional conditions imposed on the weights (P_m) and (Q_n). Together with the mentioned studies so far, the matter that urges us to make this work is the idea of furthering the results obtained by Boos <cit.> for single sequences by extending to (N,p,q) summable double sequences. In <cit.>, Boos formulated these results as follows: Let (p_n) be a sequence which has the property P_n/P_n+1→ 1 as n→∞. If a sequence (u_n) of real numbers is (N,p) summable to ℓ and slowly decreasing relative to (P_n), then (u_n) is convergent to ℓ. Let (p_n) be a sequence which has the property P_n/P_n+1→ 1 as n→∞. If a sequence (u_n) of complex numbers is (N,p) summable to ℓ and slowly oscillating relative to (P_n), then (u_n) is convergent to ℓ. One of purposes of this paper is to extend Theorems <ref> and <ref> given for (N,p) summable sequences of real and complex numbers to (N,p,q) summable double sequences of real and complex numbers. The other is to indicate that our results obtained in this paper include all of the classical Tauberian theorems for double sequences which P-convergence follows from Cesàro and logarithmic summability under slow decrease (or oscillation) conditions relative to Schmidt and slow decrease (or oscillation) conditions relative to logarithmic summability in certain senses, respectively. Herein, the main issue to be discussed is in what ways the conditions imposed on the weights (p_m), (q_n) or its partial sums (P_m), (Q_n) should change while the ones imposed on the sequence (u_mn) become more inclusive. To reach an answer about this, we need the class SVA_reg(α) and its characterization. In the present paper, we are interested in relation between (N,p,q) method, being product of relevant one-dimensional summability methods, and P-convergence for double sequences. In accordance with this aim, we derive some Tauberian conditions, controlling O_L- and O-oscillatory behavior of a double sequence in certain senses, from (N,p,q) summability to P- convergence with some restrictions on the weight sequences. § PRELIMINARIES In this section, we preface with basic definitions and notations in regards to double sequences and their weighted means. Subsequent to these, we introduce the concepts of slow decrease relative to (P_m) and (Q_n), and slow oscillation relative to (P_m) and (Q_n) for double sequences of real and complex numbers and exhibit how a relation exists between newly-described concepts. We put an end to this section by familiarizing the class SVA, its characterization and two of its subclasses. A double sequence u=(u_mn) is a function u from ℕ×ℕ into the set 𝕂 (𝕂 is ℝ, the set of real numbers or ℂ, the set of complex numbers). The real or complex number u_mn denotes the value of the function u at a point (m,n) ∈ℕ×ℕ and is called the (m,n)-term of the double sequence. 
The set of all double sequences of real and complex numbers is denoted by w^2(ℝ) and w^2(ℂ), respectively. A double sequence (u_mn) is said to be P-convergent to ℓ provided that for all ϵ >0 there exists a n_0=n_0(ϵ)∈ℕ such that |u_mn-ℓ|<ϵ whenever m, n ≥ n_0 (see <cit.>). The number ℓ is called the P-limit of u and we denote it by P-lim _m, n →∞ u_mn=ℓ, where both m and n tend to ∞ independently of each other. The set of all P-convergent double sequences of real and complex numbers is denoted by c^2(ℝ) and c^2(ℂ), respectively. A double sequence (u_mn) is said to be bounded (or one-sided bounded) provided that there exists a constant M>0 such that |u_mn|≤ M (or u_mn≥ -M) for all m, n∈ℕ. The set of all bounded double sequences of real and complex numbers is denoted by ℓ_∞^2(ℝ) and ℓ_∞^2(ℂ), respectively. Note that (u_mn) may converge without (u_mn) being a bounded function of m and n. To put it more explicitly, P-convergence of (u_mn) may not imply boundedness of its term in contrast to the case in single sequences. For example, the sequence (u_mn) defined by u_mn= 7^n if m=1; n=0,1,2,… , 7^m+2 if n=3; m=0,1,2,… , 2 otherwise is P-convergent, but it is unbounded. Some notations that will be used in places throughout this paper are given below. Let (u_mn) be a double sequence. (i) The symbol u_mn=O(1) means that |u_mn|≤ H for some constant H>0 and each m, n≥ n_0. (ii) The symbol u_mn=O_L(1) means that u_mn≥ M for some constant M>0 and each m, n≥ n_0. (iii) The symbol u_mn=o(1) means that u_mn→ 0 as m, n→∞. Let u=(u_mn) be a double sequence of real or complex numbers and let p=(p_mn) be a double sequence of positive integers such that P_mn:=∑_i=0^m∑_j=0^np_ij→∞ as max{m,n}→∞. The weighted means of (u_mn) with respect to the weights (p_mn) are defined by σ_mn:=1/P_mn∑_i=0^m∑_j=0^np_iju_ij for all (m,n)∈ℕ×ℕ and P_mn>0. The following theorem proved by Kojima and Robinson <cit.> states necessary and sufficient conditions for regularity of transformation σ_mn in the most general sense. The necessary and sufficient conditions that every P-convergent double sequence imply P-convergence of its weighted means to same number under boundedness condition of double sequence are P_mi/P_mn→ 0 and P_jn/P_mn→ 0 as m, n→∞ for any constant i, j∈ℕ. In this paper, we deal only with a special class of weights which can be factorized in the form of p_mn=p_mq_n where the single sequences (p_m) and (q_n) are positive weights such that P_m:=∑_i=0^m p_i→∞ and Q_n:=∑_j=0^n q_j→∞ as m, n→∞. In the present case, the weighted means (σ_mn) defined in (<ref>) transform (u_mn) into the form of σ_mn:=1/P_mQ_n∑_i=0^m∑_j=0^np_iq_ju_ij for all (m,n)∈ℕ×ℕ and P_mQ_n>0. A double sequence (u_mn) is called summable by the weighted mean method determined by the sequences (p_m) and (q_n) or shortly (N,p,q) summable to ℓ provided that (σ_mn) is P-convergent to ℓ. As a result of Theorem <ref>, it can be seen that necessary and sufficient condition for regularity of the (N,p,q) method is condition (<ref>). To put it another way, every P-convergent and bounded double sequence is also (N,p,q) summable to same number under condition (<ref>). Nevertheless, the opposite of this proposition is not true in general. The question of whether some (nontrivial) condition on the terms u_mn under which its (N,p,q) summability implies its P-convergence exist comes to mind at this point. The condition T{u_mn} making such a situation possible is called a Tauberian condition. 
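Before turning to Tauberian conditions, a minimal numerical sketch of the (N,p,q) transform for factorized weights may help fix ideas; the sample double sequence below is chosen to be bounded and P-convergent, so its weighted means converge to the same limit, in line with the regularity discussion above.

```python
import numpy as np

def weighted_means(u, p, q):
    """sigma_{mn} = (1/(P_m Q_n)) * sum_{i<=m} sum_{j<=n} p_i q_j u_{ij},
    for factorized weights p_{mn} = p_m q_n."""
    w = np.outer(p, q) * u                       # p_i q_j u_ij
    S = np.cumsum(np.cumsum(w, axis=0), axis=1)  # partial double sums
    P, Q = np.cumsum(p), np.cumsum(q)
    return S / np.outer(P, Q)

# Example: u_{mn} -> 1 in Pringsheim's sense; Cesaro-type weights p_m = q_n = 1.
M = N = 400
m = np.arange(M)[:, None]
n = np.arange(N)[None, :]
u = 1.0 + 1.0 / (1.0 + m + n)
sigma = weighted_means(u, np.ones(M), np.ones(N))
print(f"u_{{M-1,N-1}} = {u[-1, -1]:.4f},  sigma_{{M-1,N-1}} = {sigma[-1, -1]:.4f}  (both -> 1)")
```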
The resulting theorem stating that P-convergence follows from its (N,p,q) summability and T{u_mn} is called a Tauberian Theorem for the (N,p,q) method. At present, we define the concepts of slow decrease and slow oscillation relative to (P_m) and (Q_n), for double sequences of real and complex numbers, respectively, besides we mention a relation between them. A double sequence (u_mn) of real numbers is said to be slowly decreasing relative to both (P_m) and (Q_n) provided that lim_λ→ 1^+ κ→ 1^+lim inf_m,n→∞min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_mn)≥0; that is, for each ϵ >0 there exist n_0=n_0(ϵ)∈ℕ, λ =λ(ϵ)>1 and κ =κ(ϵ)>1 such that u_ij-u_mn≥ -ϵ whenever n_0≤ m≤ i, n_0≤ n≤ j and 1≤P_i/P_m≤λ, 1≤Q_j/Q_n≤κ. Condition (<ref>) is equivalent to lim_λ→ 1^- κ→ 1^-lim inf_m,n→∞min_λ P_m<P_i≤ P_m κ Q_n<Q_j≤ Q_n(u_mn-u_ij)≥0. <ref>' A double sequence (u_mn) of complex numbers is said to be slowly oscillating relative to both (P_m) and (Q_n) provided that lim_λ→ 1^+ κ→ 1^+lim sup_m,n→∞max_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n|u_ij-u_mn|=0; that is, for each ϵ >0 there exist n_0=n_0(ϵ)∈ℕ, λ =λ(ϵ)>1 and κ =κ(ϵ)>1 such that |u_ij-u_mn|≤ϵ whenever n_0≤ m≤ i, n_0≤ n≤ j and 1≤P_i/P_m≤λ, 1≤Q_j/Q_n≤κ. Condition (<ref>) is equivalent to lim_λ→ 1^- κ→ 1^-lim sup_m,n→∞max_λ P_m<P_i≤ P_m κ Q_n<Q_j≤ Q_n|u_mn-u_ij|=0.<ref>' A double sequence (u_mn) of real numbers is said to be slowly decreasing relative to (P_m) provided that lim_λ→ 1^+lim inf_m,n→∞min_P_m≤ P_i≤λ P_m(u_in-u_mn)≥0, or equivalently, lim_λ→ 1^-lim inf_m,n→∞min_λ P_m<P_i≤ P_m(u_mn-u_in)≥0,<ref>' besides it is said to be slowly decreasing relative to (P_m) in the strong sense if (<ref>) is satisfied with min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_mj) instead of min_P_m≤ P_i≤λ P_m(u_in-u_mn). A double sequence (u_mn) of complex numbers is said to be slowly oscillating relative to (P_m) provided that lim_λ→ 1^+lim sup_m,n→∞max_P_m≤ P_i≤λ P_m|u_in-u_mn|=0, or equivalently, lim_λ→ 1^-lim sup_m,n→∞max_λ P_m<P_i≤ P_m|u_mn-u_in|=0,<ref>' besides it is said to be slowly oscillating relative to (P_m) in the strong sense if (<ref>) is satisfied with max_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n|u_ij-u_mj| instead of max_P_m≤ P_i≤λ P_m|u_in-u_mn|. Similarly, the concepts of slow decrease and slow oscillation relative to (Q_n) (in the strong sense) for a double sequence (u_mn) of real and complex numbers can be analogously defined, respectively. Remark that if (u_mn) is slowly decreasing relative to (P_m) in the strong sense and slowly decreasing relative to (Q_n), then (u_mn) is slowly decreasing relative to both (P_m) and (Q_n). Indeed, for all large enough m and n, that is, m,n≥ n_0, λ>1, and κ>1, we find min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_mn) = min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_mj+u_mj-u_mn) ≥ min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_mj)+ min_ Q_n≤ j≤κ Q_n(u_mj-u_mn). Taking lim inf and limit of both sides of (<ref>) as m,n→∞ and λ, κ→ 1^+ respectively, we attain that the terms on the right-hand side of (<ref>) are greater than 0. Therefore, we reach that (u_mn) is slowly decreasing relative to both (P_m) and (Q_n). In harmony with that, it can be said that if (u_mn) is slowly decreasing relative to (P_m) and slowly decreasing relative to (Q_n) in the strong sense, then it is slowly decreasing relative to both (P_m) and (Q_n). Similarly, if (u_mn) is slowly oscillating relative to (P_m) and slowly oscillating relative to (Q_n) in the strong sense, then (u_mn) is slowly oscillating relative to both (P_m) and (Q_n). 
In harmony with that, it can be said that if (u_mn) is slowly oscillating relative to (Q_n) and slowly oscillating relative to (P_m) in the strong sense, then it is slowly oscillating relative to both (P_m) and (Q_n). In the remainder of this section, we mention the classes including all positive sequences (p_m) whose partial sum sequence (P_m) is (i) regularly varying sequence of positive index, (ii) rapidly varying sequence of index ∞ (see <cit.> for more details). Let p=(p_m) be a sequence that satisfies (p_m)=(P_m-P_m-1), where P_-1=0 and P_m≠ 0 for all m∈ℕ. (i) A sequence (P_m) of positive numbers is said to be regularly varying if for all λ>0 lim_m→∞P_λ_m/P_m=φ(λ) exists, where 0<φ(λ)<∞ (cf. <cit.>). In spite of the fact that this definition has been used by many authors as a starting point for studies including regularly varying sequences, these sequences possess quite useful properties, the most important of which is probably the following characterization theorem. Characterization Theorem (<cit.>) The following statements are equivalent: (a) A sequence (P_m) of positive numbers is a regularly varying sequence. (b) There exists a real number α>0 such that φ(λ)=λ^α for all λ>0. (c) The sequence (P_m) has the form P_m=(m+1)^αL(m) for m≥ 0 with constant α≥ 0 and slowly varying function L(.) on (0, ∞), i.e. the function L(.) is positive, measurable, and satisfies lim_t→∞L(λ t)/L(t)=1 for all λ>0. To emphasize such α, a sequence (P_m) is called a regularly varying sequence of positive index α, as well. Note that a regularly varying sequence of index α=0 corresponds to a slowly varying sequence. The set of all sequences of positive numbers (p_m) satisfying (c) is denoted by SVA_reg(α). Here, it is useful to give the following implication proved by Bojanic and Seneta <cit.>. <cit.> If a sequence P=(P_m) of positive numbers is regularly varying, then P_m-1/P_m→ 1 as m→∞. (ii) A sequence (P_m) of positive numbers is said to be rapidly varying of index ∞ if P_λ_m/P_m→{[ 0 if 0<λ<1,; 1 if λ=1,; ∞ if λ>1 ]. as m→∞. The set of all sequences of positive numbers (p_m) satisfying (<ref>) is denoted by SVA_rap. In addition, it may be written conventionally as λ^∞ because the right hand side of (<ref>) is the limit of λ^α as α→∞. § AUXILIARY RESULTS In this section, we state an auxiliary result to be benefitted in the proofs of main results. The following lemma indicates two representations of difference between general terms of (u_mn) and (σ_mn) and it can be proved when it is make convenient modification in Lemma 1.2 which was presented by Fekete <cit.>. Let u=(u_mn) be a double sequence. (i) For sufficiently large μ> m and η>n, we have u_mn-σ_mn = P_μQ_η/(P_μ-P_m)(Q_η-Q_n)(σ_μη-σ_μ n-σ_mη+σ_mn) + P_μ/P_μ-P_m(σ_μ n-σ_mn) +Q_η/Q_η-Q_n(σ_mη-σ_mn) - 1/(P_μ-P_m)(Q_η-Q_n)∑_i=m+1^μ∑_j=n+1^ηp_iq_j(u_ij-u_mn). (ii) For sufficiently large μ< m and η<n, we have u_mn-σ_mn = P_μQ_η/(P_m-P_μ)(Q_n-Q_η)(σ_mn-σ_μ n-σ_mη+σ_μη) + P_μ/P_m-P_μ(σ_mn-σ_μ n) +Q_η/Q_n-Q_η(σ_mn-σ_mη) + 1/(P_m-P_μ)(Q_n-Q_η)∑_i=μ+1^m∑_j=η+1^np_iq_j(u_mn-u_ij). 
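Representation (i) of the lemma can be verified numerically for arbitrary positive weights and an arbitrary double sequence; the sketch below does so for random data (the indices m, n, μ, η are arbitrary choices made for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
M_MAX, N_MAX = 40, 35
p = rng.uniform(0.5, 2.0, M_MAX)          # positive weights p_i
q = rng.uniform(0.5, 2.0, N_MAX)          # positive weights q_j
u = rng.standard_normal((M_MAX, N_MAX))   # arbitrary double sequence u_ij

P, Q = np.cumsum(p), np.cumsum(q)
S = np.cumsum(np.cumsum(np.outer(p, q) * u, axis=0), axis=1)
sigma = S / np.outer(P, Q)                # weighted means sigma_{mn}

def lemma_rhs(m, n, mu, eta):
    """Right-hand side of representation (i) for mu > m and eta > n."""
    dP, dQ = P[mu] - P[m], Q[eta] - Q[n]
    t1 = P[mu] * Q[eta] / (dP * dQ) * (sigma[mu, eta] - sigma[mu, n] - sigma[m, eta] + sigma[m, n])
    t2 = P[mu] / dP * (sigma[mu, n] - sigma[m, n])
    t3 = Q[eta] / dQ * (sigma[m, eta] - sigma[m, n])
    block = np.outer(p[m + 1:mu + 1], q[n + 1:eta + 1]) * (u[m + 1:mu + 1, n + 1:eta + 1] - u[m, n])
    t4 = block.sum() / (dP * dQ)
    return t1 + t2 + t3 - t4

m, n, mu, eta = 10, 12, 25, 30
print(abs((u[m, n] - sigma[m, n]) - lemma_rhs(m, n, mu, eta)))   # ~ machine precision
```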
§ MAIN RESULTS FOR THE (N, P, Q) SUMMABLE DOUBLE SEQUENCES OF REAL NUMBERS This section is constructed by considering the following headings for (N,p,q) summable double sequences of real numbers: (a) Determining certain subsets 𝒯{u_mn} of double sequence space and certain subclasses 𝒞{p_m, q_n} of single sequence space having the property: “Under conditions 𝒞{p_m, q_n}, a double sequence (u_mn)∈𝒯{u_mn} being (N,p,q) summable to l∈ℝ is also P-convergent to same value.” (b) Demonstrating with examples how the subsets 𝒯{u_mn} and the subclasses 𝒞{p_m, q_n} change for special means occurring depends on choosing of weight sequences (p_m) and (q_n). (c) Proving the property given in (a) for the related subsets and subclasses. Here, we formulate our main results for double sequences of real numbers as follows: Let (p_m), (q_n)∈ SVA_reg(α) and a double sequence (u_mn) be (N,p,q) summable to a number ℓ. If (u_mn) is slowly decreasing relative to (P_m), slowly decreasing relative to (Q_n), and slowly decreasing relative to (P_m) or (Q_n) in the strong sense, then (u_mn) is P-convergent to ℓ. Let (p_m), (q_n)∈ SVA_reg(α) and a double sequence (u_mn) be (N,p,q) summable to a number ℓ. If (u_mn) satisfies conditions P_m/p_mΔ_10u_mn=O_L(1) and Q_n/q_nΔ_01u_mn=O_L(1), then (u_mn) is P-convergent to ℓ. In conjunction with the weighted means, there are many special means occurring depends on choosing of weight sequences (p_m) and (q_m). Included by the weighted means and also commonly used by researchers in literature, some means are listed with their corresponding subsets 𝒯{u_mn} and subclasses 𝒞{p_m, q_n} for double sequences of real numbers as follows. (i) In case p_m=q_n=1, it leads to the arithmetic means (or called the Cesàro means of order (1,1)) of a double sequence where P_mQ_n=(m+1)(n+1) for all m, n∈ℕ. Under the circumstances, the conditions in Theorem <ref> correspond to slow decrease conditions in senses (1,0), (0,1) and in the strong sense (1,0) or (0,1), besides conditions (<ref>) correspond to conditions mΔ_10u_mn=O_L(1) and nΔ_01u_mn=O_L(1), however slow decrease condition in sense (1,1) and condition mnΔ_11u_mn =O_L(1) discussed by Móricz <cit.> therein are superfluous. As a result of our main Theorem <ref> and Theorem <ref>, the mentioned conditions are sufficient Tauberian conditions for the (C,1,1) summability of double sequences of real numbers. (ii) In case p_mq_n=1/(m+1)(n+1), it leads to the harmonic means (or called the logarithmic means) of a double sequence where P_mQ_n∼log m log n for all m, n∈ℕ. Under the circumstances, the conditions in Theorem <ref> correspond to conditions of slow decrease with respect to summability (L,1) in senses (1,0), (0,1) in the strong sense (1,0) or (0,1),i.e., lim_λ→ 1^+lim inf_m,n→∞min_m≤ i≤ m^λ(u_in-u_mn)≥ 0, lim_κ→ 1^+lim inf_m,n→∞min_n≤ j≤ n^κ(u_mj-u_mn)≥ 0 and shall we say lim_λ, κ→ 1^+lim inf_m,n→∞min_m≤ i≤ m^λ n≤ j≤ n^κ(u_ij-u_in)≥ 0, which was discussed by Kwee <cit.> and Móricz <cit.> for the logarithmic summability of single sequences in different ways. In addition, conditions (<ref>) correspond to conditions mlog (m+1)Δ_10u_mn=O_L(1) and nlog (n+1)Δ_01u_mn=O_L(1). As a result of our main Theorem <ref> and Theorem <ref>, the mentioned conditions are sufficient Tauberian conditions for the logarithmic summability of double sequences of real numbers. Now, we present the proofs of our main results for double sequences of real numbers. Proof of Theorem <ref>. 
Assume that (u_mn) being (N,p,q) summable to ℓ is slowly decreasing relative to (P_m) and (Q_n) and slowly decreasing relative to (Q_n) in the strong sense. In order to prove that (u_mn) is P-convergent to the same number, we show that the difference u_mn-σ_mn is P-convergent to 0. By means of Lemma <ref>, we have for μ> m and η>n u_mn-σ_mn = P_μQ_η/(P_μ-P_m)(Q_η-Q_n)(σ_μη-σ_μ n-σ_mη+σ_mn) + P_μ/P_μ-P_m(σ_μ n-σ_mn)+Q_η/Q_η-Q_n(σ_mη-σ_mn) - 1/(P_μ-P_m)(Q_η-Q_n)∑_i=m+1^μ∑_j=n+1^ηp_iq_j(u_ij-u_mn) ≤ P_μQ_η/(P_μ-P_m)(Q_η-Q_n)(σ_μη-σ_μ n-σ_mη+σ_mn) + P_μ/P_μ-P_m(σ_μ n-σ_mn)+Q_η/Q_η-Q_n(σ_mη-σ_mn) - min_m≤ i≤μ n≤ j≤η(u_ij-u_in)-min_m≤ i≤μ(u_in-u_mn). Putting μ={P_i≥(1+δ/2) P_m }=min{i> m:P_i≥(1+δ/2) P_m} and η={Q_j≥(1+γ/2) Q_n}=min{j> n: Q_j≥(1+γ/2) Q_n} with δ, γ>0, we get μ≥ m, η≥ n and P_μ-1<(1+δ/2) P_m and Q_η-1<(1+γ/2) Q_n. Moreover, on account of (p_m), (q_n)∈ SVA_reg(α), we have P_m/P_m-1≤ 1+δ/4 and Q_n/Q_n-1≤ 1+γ/4 for sufficiently large m, n. By means of Lemma <ref>, it can be easily seen that p_m/P_m→ 0 and q_n/Q_n→ 0 as m, n→∞, as well. As a result of these findings, we obtain by Lemma <ref> that P_μ=P_μ/P_μ-1P_μ-1≤(1+δ/4)P_m(1+δ/2)≤ P_m(1+δ)=λ P_m, in other words, P_μ/P_m=P_μ-1/P_m+p_μ/P_m≤λ+p_μ/P_μP_μ/P_m=λ(1+o(1)) and P_μ/P_m≥(1+δ/2)=λ+1/2 and Q_η=Q_η/Q_η-1Q_η-1≤(1+γ/4)Q_n(1+γ/2)≤ Q_n(1+γ)=κ Q_n, in other words, Q_η/Q_n=Q_η-1/Q_n+q_η/Q_n≤κ+q_η/Q_ηQ_η/Q_n=κ(1+o(1)) and Q_η/Q_n≥(1+γ/2)=κ+1/2 for λ, κ>1. From the assumption, we have σ_mn→ℓ and so, σ_ij-σ_mn→ 0 for i=m or μ and j=n or η as m, n→∞. Since the sequences (P_m) and (Q_n) are strictly increasing sequences, we attain from (<ref>) that u_mn-σ_mn ≤ P_μQ_η/(P_μ-P_m)(Q_η-Q_n)(σ_μη-σ_μ n-σ_mη+σ_mn) + P_μ/P_μ-P_m(σ_μ n-σ_mn)+Q_η/Q_η-Q_n(σ_mη-σ_mn) - min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_in)-min_P_m≤ P_i≤λ P_m(u_in-u_mn) ≤ 4λκ+2λ(κ-1)+2(λ-1)κ/(λ-1)(κ-1)(1+o(1)) o(1) - min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_in)-min_P_m≤ P_i≤λ P_m(u_in-u_mn) where P_μ/P_μ-P_m=P_μ/P_m/P_μ/P_m-1≤2λ/(λ-1)(1+o(1)) and Q_η/Q_η-Q_n=Q_η/Q_n/Q_η/Q_n-1≤2κ/(κ-1)(1+o(1)). If we get lim sup of both sides of inequality (<ref>) as m, n→∞, then we reach for any λ, κ>1 lim sup_m, n→∞ (u_mn-σ_mn)≤ -lim inf_m, n→∞min_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n(u_ij-u_in) -lim inf_m, n→∞min_P_m≤ P_i≤λ P_m(u_in-u_mn). If we get limit of both sides of the last inequality as λ, κ→ 1^+, then we find lim sup_m, n→∞ (u_mn-σ_mn)≤ 0 due to the fact that (u_mn) is slowly decreasing relative to (P_m) and (Q_n) and slowly decreasing relative to (Q_n) in the strong sense. Following a similar procedure to above for μ̃< m and η̃<n, we have by means of Lemma <ref> u_mn-σ_mn = P_μ̃Q_η̃/(P_m-P_μ̃)(Q_n-Q_η̃)(σ_mn-σ_μ̃ n-σ_mη̃+σ_μ̃η̃) + P_μ̃/P_m-P_μ̃(σ_mn-σ_μ̃ n)+Q_η̃/Q_n-Q_η̃(σ_mn-σ_mη̃) + 1/(P_m-P_μ̃)(Q_n-Q_η̃)∑_i=μ̃+1^m∑_j=η̃+1^np_iq_j(u_mn-u_ij) ≥ P_μ̃Q_η̃/(P_m-P_μ̃)(Q_n-Q_η̃)(σ_mn-σ_μ̃ n-σ_mη̃+σ_μ̃η̃) + P_μ̃/P_m-P_μ̃(σ_mn-σ_μ̃ n)+Q_η̃/Q_n-Q_η̃(σ_mn-σ_mη̃) + min_μ̃≤ i≤ m(u_mn-u_in)+min_μ̃≤ i≤ m η̃≤ j≤ n(u_in-u_ij). Putting μ̃={P_m≥(1+δ/2)P_i }=max{i<m: P_m≥(1+δ/2)P_i } and η̃={Q_n≥(1+γ/2)Q_j}=max{j<n : Q_n≥(1+γ/2)Q_j} with δ, γ>0, we get μ̃≤ m, η̃≤ n and (1+δ/2)P_μ̃+1>P_m and (1+γ/2) Q_η̃+1>Q_n. Moreover, on account of (p_m), (q_n)∈ SVA_reg(α), we have P_μ̃+1/P_μ̃≤ 1+δ/4 and Q_η̃+1/Q_η̃≤ 1+γ/4 for sufficiently large μ̃, η̃. 
As a result of these findings, we obtain by Lemma <ref> that P_μ̃=P_μ̃/P_μ̃+1P_μ̃+1≥1/(1+δ/4)1/(1+δ/2)P_m≥1/(1+δ/2)^2P_m=λ̃P_m and P_μ̃/P_m≤1/(1+δ/2)=√(λ̃) and Q_η̃=Q_η̃/Q_η̃+1Q_η̃+1≥1/(1+γ/4)1/(1+γ/2)Q_n≥1/(1+γ/2)^2Q_n=κ̃Q_n and Q_η̃/Q_n≤1/(1+γ/2)=√(κ̃) for 0<λ̃, κ̃<1. From the assumption, we have σ_mn→ℓ and so, σ_mn-σ_ij→ 0 for i=m or μ̃ and j=n or η̃ as μ̃, η̃→∞. Since the sequences (P_m) and (Q_n) are strictly increasing sequences, we attain from (<ref>) that u_mn-σ_mn ≥ P_μ̃Q_η̃/(P_m-P_μ̃)(Q_n-Q_η̃)(σ_mn-σ_μ̃ n-σ_mη̃+σ_μ̃η̃) + P_μ̃/P_m-P_μ̃(σ_mn-σ_μ̃ n)+Q_η̃/Q_n-Q_η̃(σ_mn-σ_mη̃) + min_λ̃ P_m<P_i≤ P_m(u_mn-u_in)+min_λ̃ P_m≤ P_i≤ P_m κ̃ Q_n≤ Q_j≤ Q_n(u_in-u_ij) ≥ λ̃+κ̃-λ̃κ̃/(1-λ̃)(1-κ̃)o(1)+min_λ̃ P_m<P_i≤ P_m(u_mn-u_in) + min_λ̃ P_m≤ P_i≤ P_m κ̃ Q_n≤ Q_j≤ Q_n(u_in-u_ij) where P_μ̃/P_m-P_μ̃=P_μ̃/P_m/1-P_μ̃/P_m≥λ̃/1-λ̃ and Q_η̃/Q_n-Q_η̃=Q_η̃/Q_n/1-Q_η̃/Q_n≥κ̃/1-κ̃, for 1/λ=λ̃, 1/κ=κ̃, and 0<λ̃, κ̃<1. If we get lim inf of both sides of inequality (<ref>) as μ̃, η̃→∞, then we reach for any 0<λ̃, κ̃<1 lim inf_m, n→∞ (u_mn-σ_mn)≥lim inf_m, n→∞min_λ̃ P_m<P_i≤ P_m(u_mn-u_in) +lim inf_m, n→∞min_λ̃ P_m≤ P_i≤ P_m κ̃ Q_n≤ Q_j≤ Q_n(u_in-u_ij). If we get limit of both sides of last inequality as λ̃, κ̃→ 1^-, then we find lim inf_m, n→∞ (u_mn-σ_mn)≥ 0 due to the fact that (u_mn) is slowly decreasing relative to (P_m) and (Q_n) and slowly decreasing relative to (Q_n) in the strong sense. If we combine inequalities (<ref>) with (<ref>), we conclude lim_m, n→∞ (u_mn-σ_mn)= 0, which means that (u_mn) is P-convergent to ℓ. Proof of Theorem <ref>. Assume that (u_mn) being (N,p,q) summable to ℓ satisfies conditions (<ref>). If we indicate that conditions (<ref>) imply slow decrease condition relative to (P_m) and (Q_n) and slow decrease condition relative to (Q_n) in the strong sense, then we could this proof with the help of Theorem <ref>. Putting μ={P_i≥(1+δ/2) P_m } and η={Q_j≥(1+γ/2) Q_n} with δ, γ>0, we can observe inequalities (<ref>) and (<ref>). Then, we have for n_0≤ m≤ i≤μ and n_0≤ n u_in-u_mn=∑_k=m+1^iΔ_10u_kn≥ -M_1∑_k=m+1^ip_k/P_k≥ -M_1(P_μ/P_m-1) ≥ -M_1(λ-1+λ o(1)) for any constant M_1>0 and λ>1. If we get lim inf and limit of both sides of last inequality as m, n→∞ and λ→ 1^+ respectively, then we reach lim_λ→ 1^+lim inf_m,n→∞min_P_m≤ P_i≤λ P_m(u_in-u_mn)≥0, which means that (u_mn) is slowly decreasing relative to (P_m). Similarly, we obtain n_0≤ n≤ j≤η and n_0≤ m u_mj-u_mn=∑_r=n+1^jΔ_01u_mr≥ -M_2∑_r=n+1^jq_r/Q_r≥ -M_2(Q_η/Q_n-1) ≥ -M_2(κ-1+κ o(1)) for any constant M_2>0 and κ>1. If we get lim inf and limit of both sides of last inequality as m, n→∞ and κ→ 1^+ respectively, then we reach lim_κ→ 1^+lim inf_m,n→∞min_Q_n≤ Q_j≤κ Q_n(u_mj-u_mn)≥0, which means that (u_mn) is slowly decreasing relative to (Q_n). Therefore, we conclude with the help of Theorem <ref> that (u_mn) is P-convergent to ℓ. § MAIN RESULTS FOR THE (N, P, Q) SUMMABLE DOUBLE SEQUENCES OF COMPLEX NUMBERS In parallel with the previous section, this section is also constructed by considering the following headings for (N,p,q) summable double sequences of complex numbers: (a) Determining certain subsets 𝒯{u_mn} of double sequence space and certain subclasses 𝒞{p_m, q_n} of single sequence space having the property: “Under conditions 𝒞{p_m, q_n}, a double sequence (u_mn)∈𝒯{u_mn} being (N,p,q) summable to l∈ℝ is also P-convergent to the same value.” (b) Demonstrating with examples how the subsets 𝒯{u_mn} and the subclasses 𝒞{p_m, q_n} change for special means occurring depends on choosing of weights sequence (p_m) and (q_n). 
(c) Proving the property given in (a) for the related subsets and subclasses. Here, we formulate our main results for double sequences of complex numbers as follows: Let (p_m), (q_n)∈ SVA_reg(α) and a double sequence (u_mn) be (N,p,q) summable to a number ℓ. If (u_mn) is slowly oscillating relative to (P_m), slowly oscillating relative to (Q_n), and slowly oscillating relative to (P_m) or (Q_n) in the strong sense, then (u_mn) is P-convergent to ℓ. Let (p_m), (q_n)∈ SVA_reg(α) and a double sequence (u_mn) be (N,p,q) summable to a number ℓ. If (u_mn) satisfies conditions P_m/p_mΔ_10u_mn=O(1) and Q_n/q_nΔ_01u_mn=O(1), then (u_mn) is P-convergent to ℓ. Some means occurring depends on choosing of weight sequences (p_m) and (q_m) are listed with their corresponding subsets 𝒯{u_mn} and subclasses 𝒞{p_m, q_n} for double sequences of complex numbers as follows. (i) In case p_m=q_n=1, it leads to the (C,1,1) means of a double sequence where P_mQ_n=(m+1)(n+1) for all m, n∈ℕ. Under the circumstances, conditions in Theorem <ref> correspond to slow oscillation conditions in senses (1,0), (0,1) and in the strong sense (1,0) or (0,1), besides conditions (<ref>) correspond to conditions mΔ_10u_mn=O(1) and nΔ_01u_mn=O(1), however slow oscillation condition in sense (1,1) and condition mnΔ_11 u_mn=O(1) discussed by Móricz <cit.> therein are superfluous. As a result of our main Theorem <ref> and Theorem <ref>, the mentioned conditions are sufficient Tauberian conditions for the (C,1,1) summability of double sequences of complex numbers. (ii) In case p_mq_n=1/(m+1)(n+1), it leads to the logarithmic means of a double sequence where P_mQ_n∼log mlog n for all m, n∈ℕ. Under the circumstances, conditions in Theorem <ref> correspond to conditions of slow oscillation with respect to summability (L,1) in sense (1,0), (0,1) and the strong sense (1,0) or (0,1), i.e., lim_λ→ 1^+lim sup_m,n→∞max_m≤ i≤ m^λ|u_in-u_mn|=0, lim_κ→ 1^+lim sup_m,n→∞max_n≤ j≤ n^κ|u_mj-u_mn|=0, and shall we say lim_λ, κ→ 1^+lim sup_m,n→∞max_m≤ i≤ m^λ n≤ j≤ n^κ|u_ij-u_in|= 0, which was discussed by Kwee <cit.> and Móricz <cit.> for the logarithmic summability of single sequences in different ways. In addition, conditions (<ref>) correspond to conditions mlog (m+1)Δ_10u_mn=O(1) and nlog (n+1)Δ_01u_mn=O(1). As a result of our main Theorem <ref> and Theorem <ref>, the mentioned conditions are sufficient Tauberian conditions for the logarithmic summability of double sequences of complex numbers. Now, we present the proofs of our main results for double sequences of complex numbers. Proof of Theorem <ref>. Assume that (u_mn) being (N,p,q) summable to ℓ is slowly oscillating relative to (P_m) and (Q_n) and slowly oscillating relative to (Q_n) in the strong sense. In order to prove that (u_mn) is P-convergent to same number, we indicate that the difference |u_mn-σ_mn| is P-convergent to 0. By means of Lemma <ref>, we have for μ> m and η>n |u_mn-σ_mn| ≤ P_μQ_η/(P_μ-P_m)(Q_η-Q_n)|σ_μη-σ_μ n-σ_mη+σ_mn| + P_μ/P_μ-P_m|σ_μ n-σ_mn|+Q_η/Q_η-Q_n|σ_mη-σ_mn| + 1/(P_μ-P_m)(Q_η-Q_n)∑_i=m+1^μ∑_j=n+1^ηp_iq_j|u_ij-u_mn| ≤ P_μQ_η/(P_μ-P_m)(Q_η-Q_n)|σ_μη-σ_μ n-σ_mη+σ_mn| + P_μ/P_μ-P_m|σ_μ n-σ_mn|+Q_η/Q_η-Q_n|σ_mη-σ_mn| + max_m≤ i≤μ n≤ j≤η(u_ij-u_in)+max_m≤ i≤μ(u_in-u_mn). Putting μ={P_i≥(1+δ/2) P_m } and η={Q_j≥(1+γ/2)Q_n} with δ, γ>0, we get μ≥ m, η≥ n and P_μ-1<(1+δ/2) P_m and Q_η-1<(1+γ/2) Q_n. Moreover, on account of (p_m), (q_n)∈ SVA_reg(α), we have P_m/P_m-1≤ 1+δ/4 and Q_n/Q_n-1≤ 1+γ/4 for sufficiently large m, n. 
By means of Lemma <ref>, it can be easily seen that p_m/P_m→ 0 and q_n/Q_n→ 0 as m, n→∞, as well. As a result of these findings, we obtain by Lemma <ref> that the inequalities (<ref>)-(<ref>) hold true for λ, κ>1. From the assumption, we have σ_mn→ℓ and so, σ_ij-σ_mn→ 0 for i=m or μ and j=n or η as m, n→∞. Since the sequences (P_m) and (Q_n) are strictly increasing sequences, we attain from (<ref>) that |u_mn-σ_mn| ≤ P_μQ_η/(P_μ-P_m)(Q_η-Q_n)|σ_μη-σ_μ n-σ_mη+σ_mn| + P_μ/P_μ-P_m|σ_μ n-σ_mn|+Q_η/Q_η-Q_n|σ_mη-σ_mn| + max_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n|u_ij-u_in|+max_P_m≤ P_i≤λ P_m|u_in-u_mn| ≤ 4λκ+2λ(κ-1)+2(λ-1)κ/(λ-1)(κ-1)(1+o(1)) o(1) + max_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n|u_ij-u_in|+max_P_m≤ P_i≤λ P_m|u_in-u_mn| where P_μ/P_μ-P_m=P_μ/P_m/P_μ/P_m-1≤2λ/(λ-1)(1+o(1)) and Q_η/Q_η-Q_n=Q_η/Q_n/Q_η/Q_n-1≤2κ/(κ-1)(1+o(1)). If we get lim sup of both sides of inequality (<ref>) as m, n→∞, then we reach for any λ, κ>1 lim sup_m, n→∞ |u_mn-σ_mn|≤lim sup_m, n→∞max_P_m≤ P_i≤λ P_m Q_n≤ Q_j≤κ Q_n|u_ij-u_in| +lim sup_m, n→∞max_P_m≤ P_i≤λ P_m|u_in-u_mn|. If we get limit of both sides of last inequality as λ, κ→ 1^+, then we find lim sup_m, n→∞ |u_mn-σ_mn|≤ 0 due to the fact that (u_mn) is slowly oscillating relative to (P_m) and (Q_n) and slowly oscillating relative to (Q_n) in the strong sense. Therefore, we conclude that (u_mn) is P-convergent to ℓ. Proof of Theorem <ref>. Assume that (u_mn) being (N,p,q) summable to ℓ satisfies conditions (<ref>). If we indicate that conditions (<ref>) imply slow oscillation condition relative to (P_m) and (Q_n) and slow oscillation condition relative to (Q_n) in the strong sense, then we could this proof with the help of Theorem <ref>. Putting μ={P_i≥(1+δ/2) P_m } and η={Q_j≥(1+γ/2) Q_n} with δ, γ>0, we can observe inequalities (<ref>) and (<ref>). Then, we have for n_0≤ m≤ i≤μ and n_0≤ n |u_in-u_mn|≤∑_k=m+1^i|Δ_10u_kn|≤ M_1∑_k=m+1^ip_k/P_k≤ M_1(P_μ/P_m-1) ≤ M_1(λ-1+λ o(1)) for any constant M_1>0 and λ>1. If we get lim sup and limit of both sides of last inequality as m, n→∞ and λ→ 1^+ respectively, then we reach lim_λ→ 1^+lim sup_m,n→∞max_P_m≤ P_i≤λ P_m|u_in-u_mn|=0, which means that (u_mn) is slowly oscillating relative to (P_m). Similarly, we obtain n_0≤ n≤ j≤η and n_0≤ m |u_mj-u_mn|≤∑_r=n+1^j|Δ_01u_mr|≤ M_2∑_r=n+1^jq_r/Q_r≤ M_2(Q_η/Q_n-1) ≤ M_2(κ-1+κ o(1)) for any constant M_2>0 and κ>1. If we get lim sup and limit of both sides of last inequality as m, n→∞ and κ→ 1^+ respectively, then we reach lim_κ→ 1^+lim sup_m,n→∞max_Q_n≤ Q_j≤κ Q_n|u_mj-u_mn|=0, which means that (u_mn) is slowly oscillating relative to (Q_n). Therefore, we conclude with the help of Theorem <ref> that (u_mn) is P-convergent to ℓ. § CONCLUSION In this paper, we extended some theorems given for (N,p) summable sequences to (N,p,q) summable double sequences. We determined that conditions needed for P-limσ_mn(u)=ℓ to be P-lim u_mn=ℓ are slow decrease (or oscillation) relative to (P_m) and (Q_n) and slow decrease (or oscillation) relative to (P_m) or (Q_n) in the strong sense under some additional condition imposed on (p_m), (q_n). In the sequel, we presented a O_L-type (or O-type) Tauberian condition for (N,p,q) summable sequences. Through this paper, we gave an answer to question (4) which was left as an open problem by Stadtmüller and Baron in <cit.>. 
In addition to these, we indicate that our results include all of the classical Tauberian theorems for double sequences which P-convergence follows from Cesàro and logarithmic summability under slow decrease (or oscillation) conditions relative to Schmidt and slow decrease (or oscillation) conditions relative to logarithmic summability in certain senses. § ACKNOWLEDGMENT The authors would like to thank Ulrich Stadtmüller for sincerely answering our questions in reference to the problems which we encountered while conducting this study. § DECLARATIONS Conflict of interest The authors have no competing interest to declare that are relevant to the content of this article. Funding The first author in this research is supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under 2218 - National Postdoctoral Research Fellowship Program (Grant Agreement No. 118C577). The other authors did not receive support from any organization for the submitted work. Availability of data and materials Not applicable. Code availability Not applicable. Ethics approval Not applicable. 9 fekete Á. Fekete, Tauberian conditions for double sequences that are statistically summable by weighted means, Sarajevo J. Math. 1(14)(2)(2005) 197–210. pringsheim A. Pringsheim, Zur Theorie der zweifach unendlichen Zahlenfolgen, Math. Ann. 53(3) (1900) 289–321. belen C. Belen, Some Tauberian theorems for weighted means of bounded double sequences, An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.) 63(1) (2017) 115–122. kwee B. Kwee, A Tauberian theorem for the logarithmic method of summation, Proc. Cambridge Philos. Soc. 63 (1966) 401–405. chenhsu C-P. Chen, J-M. Hsu, Tauberian theorems for weighted means of double sequences, Anal. Math. 26(4) (2000) 243–262. moricz1994 F. Móricz, Tauberian theorems for Cesàro summable double sequences, Studia Math. 110(1) (1994) 83–96. moricz2013 F. Móricz, Necessary and sufficient Tauberian conditions for the logarithmic summability of functions and sequences, Studia Math. 219(2) (2013), 109–121. moriczstadtmuller F. Móricz, U. Stadtmüller, Summability of double sequences by weighted mean methods and Tauberian conditions for convergence in Pringsheim's sense, Int. J. Math. Math. Sci. 65-68 (2004) 3499–3511. hardy G. H. Hardy, On the convergence of certain multiple series, Proc. London Math. Soc. (2) 1 (1904) 124–128. hamilton H. J. Hamilton, Transformations of multiple sequences, Duke Math. J. 2(1) (1936) 29–60. boos J. Boos, Classical and modern methods in summability, Oxford Mathematical Monographs, Oxford University Press, Oxford, 2000. karamata J. Karamata, Sur un mode de croissance régulière. Théorèmes fondamentaux, Bull. Soc. Math. France 61 (1933) 55–62. regularvariation N. H. Bingham, C. M. Goldie, J. L. Teugels, Regular variation, no. 27, Cambridge University Press, 1989. bojanicseneta R. Bojanic, E. Seneta, A unified theory of regularly varying sequences, Math. Z. 134 (1973) 91–106. baronstadtmuller S. Baron, U. Stadtmüller, Tauberian theorems for power series methods applied to double sequences, J. Math. Anal. Appl. 211(2) (1997) 574–589. bromwich T. J. I'A. Bromwich, An Introduction to the Theory of Infinite Series, MacMillan & Co. Ltd., London, 1908. stadtmuller2 U. Stadtmüller, Tauberian theorems for weighted means of double sequences, Anal. Math. 25(1) (1999) 57–68.
http://arxiv.org/abs/2307.01948v1
20230704225545
A Catalogue of Radio Supernova Remnants and Candidate Supernova Remnants in the EMU/POSSUM Galactic Pilot Field
[ "Brianna D. Ball", "Roland Kothes", "Erik Rosolowsky", "Jennifer West", "Werner Becker", "Miroslav D. Filipović", "B. M. Gaensler", "Andrew M. Hopkins", "Bärbel Koribalski", "Tom Landecker", "Denis Leahy", "Joshua Marvil", "Xiaohui Sun", "Filomena Bufano", "Ettore Carretti", "Adriano Ingallinera", "Cameron L. Van Eck", "Tony Willis" ]
astro-ph.GA
[ "astro-ph.GA" ]
We use data from the pilot observations of the EMU/POSSUM surveys to study the "missing supernova remnant (SNR) problem", the discrepancy between the number of Galactic SNRs that have been observed and the number that are estimated to exist. The Evolutionary Map of the Universe (EMU) and the Polarization Sky Survey of the Universe's Magnetism (POSSUM) are radio sky surveys that are conducted using the Australian Square Kilometre Array Pathfinder (ASKAP). We report on the properties of 7 known SNRs in the joint Galactic pilot field, with an approximate longitude and latitude of 323^∘≤ l ≤ 330^∘ and -4^∘≤ b ≤ 2^∘ respectively, and identify 21 SNR candidates. Of these, 4 have been previously identified as SNR candidates, 3 were previously listed as a single SNR, 13 have not been previously studied, and 1 has been studied in the infrared. These are the first discoveries of Galactic SNR candidates with EMU/POSSUM and, if confirmed, they will increase the SNR density in this field by a factor of 4. By comparing our SNR candidates to the known Galactic SNR population, we demonstrate that many of these sources were likely missed in previous surveys due to their small angular size and/or low surface brightness. We suspect that there are SNRs in this field that remain undetected due to limitations set by the local background and confusion with other radio sources. The results of this paper demonstrate the potential of the full EMU/POSSUM surveys to uncover more of the missing Galactic SNR population. ISM: supernova remnants – radio continuum: general – catalogues – Galaxy: general § INTRODUCTION Supernovae and supernova remnants (SNRs) are the most significant sources of chemical enrichment in the interstellar medium (ISM) of our Galaxy. More than half of the material in the Milky Way has been processed by supernovae and their remnants <cit.>. Thus, our knowledge of the Galaxy and its evolution is necessarily informed by our understanding of the Galactic SNR population. In this paper we seek to investigate the so-called “missing supernova remnant problem,” which refers to the discrepancy between the number of SNRs that are believed to exist in our Galaxy and the number that have been discovered <cit.>. The exact size of the discrepancy is unknown, as accurately quantifying this problem is challenging due to variations in SNR density and radio visibility across the Galactic plane. Based on observations of extra-galactic supernovae, we know that in galaxies like ours a supernova should occur every 30 to 50 years <cit.>. We can combine this rate with the expected SNR radio lifetime to obtain an estimate of the number of SNRs that should be detectable at radio wavelengths. The radio-visible lifetime of a supernova remnant is difficult to estimate, however, as it likely varies based on local conditions and depends on the frequency of the observations. By studying associations between supernova remnants and pulsars, <cit.> estimated the mean radio SNR lifetime to be about 60,000 years. 
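As a back-of-the-envelope illustration (our own arithmetic, not part of the paper), the expected number of radio-detectable remnants is simply the supernova rate just quoted multiplied by an assumed radio-visible lifetime, whether the ~60 kyr mean above or the Sedov-Taylor range introduced below:

```python
# Expected number of radio-detectable Galactic SNRs ~ (SN rate) x (radio-visible lifetime).
# The rate and lifetime ranges are those quoted in the text; pairing the extremes is illustrative.
sn_interval_yr = (30.0, 50.0)          # one Galactic supernova every 30-50 years
st_lifetime_yr = (20_000.0, 80_000.0)  # Sedov-Taylor radio-visible lifetime range (see below)

low = st_lifetime_yr[0] / sn_interval_yr[1]    # shortest lifetime, lowest rate
high = st_lifetime_yr[1] / sn_interval_yr[0]   # longest lifetime, highest rate
print(f"S-T lifetime range: ~{low:.0f} to ~{high:.0f} detectable SNRs")   # ~400 to ~2700

# With the ~60 kyr mean radio lifetime estimated from pulsar associations:
print(f"60 kyr mean       : ~{60_000/50:.0f} to ~{60_000/30:.0f}")        # ~1200 to ~2000
```

The discussion below adopts >1000 as a more conservative lower limit, following model predictions rather than the raw product, so the quoted range of 1000 to 2700 brackets these simple estimates.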
More recent work has shown that the majority of observed remnants in the Milky Way and Local Group galaxies are believed to be in the Sedov-Taylor (S-T) phase of evolution <cit.>. Thus, in many cases the S-T lifetime may serve as a useful proxy for the radio visible lifetime with a characteristic timescale of 20 - 80 kyrs dependent on the local ISM density <cit.>. However, adopting these age limits may result in estimates which are too conservative. Around 25% of known Galactic SNRs with age estimates are believed to be older than 20 kyrs <cit.>. Additionally, there have been discoveries of Galactic SNRs, observable at radio wavelengths, that are well beyond the S-T phase <cit.>. In comparing the <cit.> model predictions to observations, <cit.> found the model to be insufficient in reproducing observed radio emission. According to <cit.>, predictions for the total number of SNRs in the Galaxy should generally be >1000, so we adopt this as a lower limit. We form a conservative upper limit based on the supernova rate and S-T lifetime and estimate that at any given time, 1000 to 2700 radio supernova remnants should be detectable in our Galaxy. So far we have only discovered somewhere in the range of 300 to 400 <cit.>. We aim to detect some of these missing SNRs and, by studying their properties, gain further insight into the nature of this discrepancy. The majority of supernova remnants (approximately 95%) discovered in our Galaxy have been detected in the radio <cit.>. Thus, radio observations play an important role in the search for Galactic SNRs. Because of the limitations in working with radio data due to the relatively poor angular resolution and sensitivity when compared to observations at shorter wavelengths, sources with a small angular size and/or low surface brightness are more likely to be missed. It is therefore reasonable to expect that these types of sources may comprise a significant portion of the missing SNR population. This is especially true in regions of the Galaxy where radio emission is dominated by thermal sources, such as HII regions, and distinguishing SNRs becomes more difficult. X-ray observations of young Galactic SNRs are believed to be fairly complete with an implied Galactic SNR birth rate of ∼1/35 years, consistent with the supernova explosion rate <cit.>. Thus, we mostly expect to find old, faint SNRs of varying angular sizes, dependent on the distance to the source and the local environment. Confusion with other extended radio sources, particularly HII regions, presents a significant challenge to confidently identifying SNR candidates. To address this, we adapt a commonly used methodology involving the comparison of radio and mid-infrared (MIR) fluxes. This technique has been used by many other Galactic SNR surveys, as well as in follow up studies of SNR candidates <cit.>. While SNRs and HII regions can have similar radio morphologies, HII regions produce strong MIR emission from warm dust and polycyclic aromatic hydrocarbons (PAHs). Conversely, SNRs have been found to produce little to no MIR emission <cit.>. The absence of an MIR counterpart can therefore be used as evidence that a potential SNR candidates is not an HII region. The Evolutionary Map of the Universe (EMU) and the Polarization Sky Survey of the Universe's Magnetism (POSSUM) are radio surveys that will be observed together with the Australian Square Kilometre Array Pathfinder (ASKAP). 
Because of the improved resolution and sensitivity when compared to previous southern sky radio surveys, such as the MGPS-2 <cit.>, the EMU/POSSUM surveys should be expected to uncover some of these small and faint sources. Additionally, ASKAP's large field of view and good uv-coverage should allow for the detection of old SNRs that are large with low surface brightness. Here we utilize data from the pilot observations of these surveys to search for supernova remnants within a small field of the Galactic plane. In this paper, we aim to (1) validate the quality of the EMU/POSSUM Galactic pilot field data by studying the properties of known supernova remnants in the field, (2) identify new supernova remnant candidates and uncover some of the missing SNR population, and (3) develop analysis techniques that can be used to study supernova remnants and search for new candidates with the full EMU/POSSUM sky survey data as they become available. Descriptions of the data used in this paper can be found in Section <ref>. In Section <ref> we present the data and describe how SNR candidates were identified. We also discuss the known SNRs in the field and their properties. In Section <ref> we present our SNR candidates and in Section <ref> we provide some analysis and comparison to the known Galactic SNR population. The conclusions are summarized in Section <ref>. § METHODS AND OBSERVATIONS §.§ The EMU and POSSUM Observations with ASKAP We use data from the ASKAP telescope <cit.>, an interferometer with 36 12-meter dishes equipped with phased-array feeds (PAFs). These data were obtained during the commissioning phase of the telescope, specifically from the second pilot observations of the commensal EMU <cit.> and POSSUM surveys <cit.>. These observations use a full 10-hour track having full Stokes with 288 MHz bandwidth, centred at 933 MHz. Imaging is performed using ASKAPsoft and the standard commensal EMU/POSSUM imaging parameter set <cit.>, which produces both a multi-frequency synthesis (MFS) band-averaged Stokes I image for the EMU survey, and full Stokes I, Q, U, and V frequency cubes with 1 MHz channels for POSSUM. The cubes have been convolved to a common resolution of 18” across all PAF beams and all frequency channels. The observations for this particular pilot II survey field were observed on 06 November 2021 (Scheduling block 33284). Primary beam correction in all Stokes parameters are performed using beam models derived from standard observatory holography observations from 20 June 2021 (Scheduling block 28162). The long period of time between the holography and the observations resulted in a poor leakage correction, and therefore this field was re-observed on 07 September 2022 (SB 43773). This field was corrected using holography observations from 28 July 2022 (SB 43057). This improved correction mitigates leakage from Stokes I into Stokes Q and U at around the 1% level or less over most of the field. The top image in Figure <ref> shows the Stokes I image from the original pilot II observation and the bottom image shows the same field in polarized intensity (PI). The polarized intensity was taken from the peak of the Faraday depth (FD) function for each pixel. For the known supernova remnants and our candidates, we did not use the FD cube from the POSSUM pipeline, but calculated them ourselves. We used the Q and U data cubes from scheduling block 43773. Instead of a Fourier transform, we de-rotated the Q and U data in each frequency channel for each rotation measure (RM). 
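A minimal sketch of this brute-force de-rotation (our own illustration in Python; the uniform channel weighting, the synthetic single-component source, and all variable names are assumptions rather than the POSSUM pipeline implementation) is:

```python
import numpy as np

C = 299_792_458.0                                    # speed of light [m/s]

def faraday_depth_spectrum(freqs_hz, q, u, phis):
    """De-rotate Q + iU in every frequency channel for each trial RM phi and average:
    F(phi) = (1/N) * sum_k (Q_k + i U_k) * exp(-2i * phi * lambda_k^2).
    The peak of |F(phi)| gives the polarized intensity and its rotation measure."""
    lam2 = (C / np.asarray(freqs_hz)) ** 2            # lambda^2 per channel [m^2]
    p = np.asarray(q) + 1j * np.asarray(u)
    phase = np.exp(-2j * np.outer(phis, lam2))        # shape (n_phi, n_chan)
    return phase @ p / p.size

# Synthetic test: one polarized component with RM = -1017 rad/m^2 (similar to the
# pulsar measurement reported below), over the 288 MHz band centred at 933 MHz.
freqs = np.arange(933e6 - 144e6, 933e6 + 144e6, 1e6)  # 1 MHz channels
true_rm, p0 = -1017.0, 1.0
lam2 = (C / freqs) ** 2
q_obs = p0 * np.cos(2 * true_rm * lam2)
u_obs = p0 * np.sin(2 * true_rm * lam2)

phis = np.arange(-2000.0, 2000.0 + 1.0, 1.0)          # trial RM grid [rad/m^2]
fd = faraday_depth_spectrum(freqs, q_obs, u_obs, phis)
peak = np.argmax(np.abs(fd))
print("recovered RM:", phis[peak], "rad/m^2, |F| at peak:", abs(fd[peak]))
```

At the trial RM equal to the true RM all channels add coherently, so the peak of |F(φ)| recovers both the rotation measure and the polarized intensity; this peak is the quantity used for the polarized-intensity image and the FD functions described here.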
We probed an RM range from -2000 to +2000 rad m^-2 with a step size of 1 rad m^-2. A sample FD function is shown in Figure <ref>. This was taken towards the pulsar PSR J1551-5310 inside our SNR candidate G328.0+0.7. The RM for this pulsar is catalogued by <cit.> to be -1023.3 ± 6.3 rad m^-2. In our data we find RM = -1017 ± 5 rad m^-2. §.§ Ancillary Data In addition to the radio data from ASKAP, we utilize images of the same field of the Galactic plane from two other sky surveys. Comparing radio and MIR fluxes is a commonly used technique for identifying supernova remnants. Thus, we make use of 12 μm infrared data from WISE (Widefield Infrared Survey Explorer) <cit.> with an angular resolution of 6.5" as part of the candidate identification process, further outlined in Section <ref>. We use 12 μm data because it traces emission from hot dust and PAHs, both of which are expected to be abundant in HII regions <cit.> and largely destroyed in SNRs <cit.>. Pixel values in WISE data are measured in digital number (DN) units, which are designed for relative photometric measurements <cit.> and are sufficient for our purposes. We use low frequency (198 MHz) data from GLEAM (The GaLactic and Extragalactic All-sky MWA survey) <cit.> to calculate spectral indices for some remnants, the details of which are provided in Section <ref>. The angular resolution of the GLEAM data (∼169"×149") is relatively poor compared to the resolution of ASKAP so while we are able to calculate spectral indices for six of the known remnants, we can obtain indices with these data for only two of our candidates. §.§ Flux Integration To calculate total intensity flux densities, we use a combined map that was created by averaging data from the two ASKAP second pilot observations. This was done to minimize the effect of background fluctuations. Because of the complexity of the background, different methods for calculating flux densities and performing background subtraction were explored. Integrating over radial profiles using Karma software <cit.> did not allow us to properly account for variations in the background or surrounding bright sources. Attempting to use a circular aperture to define the source with multiple circular apertures defining the background, as done by <cit.>, presented similar challenges and lacked consistency. Ultimately, flux integration was performed using the Polygon_Flux software, which was developed by <cit.> for the GLEAM survey to deal with extended sources that have complicated backgrounds. The software calculates the flux density within a chosen region, subtracts user-selected point sources, and performs background subtraction, allowing the user to select surrounding regions that should not be included as part of the background. For each source, the calculations were performed multiple times in order to obtain a more accurate flux value and an error estimate. Three different definitions of the background were used to estimate the systematic uncertainty due to flux aperture definition. The calculations were run using backgrounds defined as 4 to 10, 10 to 16, and 16 to 22 pixels from the source (with a pixel size of 2"). Additionally, the source and background selection process was performed at least twice for each source to account for uncertainties resulting from the definition of the source perimeter. Thus, the flux density calculations were run at least six times per source. 
The flux densities in Table <ref> were taken as the median values of these calculations and the errors were determined by the range between the extrema. Instrumental uncertainties in the fluxes were found to be relatively insignificant compared to the systematic estimates and were not included. § RESULTS §.§ The EMU/POSSUM Galactic Pilot Field Figure <ref> shows the EMU/POSSUM Galactic pilot II data as observed by ASKAP and the same field in the mid-infrared as observed by WISE <cit.>. The field looks across the Galactic plane, with an approximate longitude and latitude of 323^∘≤ l ≤ 330^∘ and -4^∘≤ b ≤ 2^∘ respectively, along a tangent to the Norma arm and across several other spiral arms. This gives us a long line of sight through the inner Galaxy, up to distances of about 18 kpc. The annotations shown in Figure <ref> indicate the locations of the known SNRs (green) and HII regions (blue) within this field as well as the locations of our 21 SNR candidates (white). The known SNRs are taken from the <cit.> radio SNR catalogue and the HII regions come from the WISE Catalogue of Galactic HII regions <cit.>. There are 8 known SNRs in this field, including one that we believe should be reclassified as multiple sources (discussed further in Section <ref>). One of the known SNRs lies at the edge of the field and is only partially imaged in the combined map as shown. This field was selected in part because it can be broken into two regions that are visually and meaningfully distinct in both the radio and MIR. The upper left part of the field looks along a tangent to the Norma arm and thus we see a high density of HII regions and thermal emission. The lower right part of the field is noticeably fainter with less background emission and a lower density of HII regions. This allows us to test our ability to detect SNR candidates in each of these regions. Since we are primarily expecting to find low surface brightness sources, it is probable that there are candidates in the upper left region of the Galactic plane that we are unable to detect due to the high concentration of thermal emission. The locations of the candidates in Figure <ref> support this as they are clearly concentrated in areas with fewer HII regions and less background emission. §.§ SNR Identification and Verification A supernova remnant is formed in the aftermath of a stellar explosion as the ejected material expands into the ISM bounded by a supersonic shock wave that sweeps up interstellar material and magnetic fields as it travels. At the shock front, electrons are accelerated to relativistic speeds and interact with the magnetic field to produce highly linearly polarized synchrotron emission that is best observed in the radio <cit.>. A supernova remnant can typically be identified by the distinctive shell-like structure that is formed through this interaction between the supernova shock and the ISM. The structural evolution of the remnant will depend on factors like the characteristics of the explosion, the density of the surrounding ISM, and the ambient magnetic field <cit.>. While these factors may result in asymmetries, we generally expect to see well-defined rounded edges, produced by the shock, with fainter emission coming from the remnant's centre. Here we identify supernova remnant candidates primarily by looking for radio-emitting shell-like structures that lack clear mid-infrared counterparts. After identifying candidates, we attempt to find further evidence that they are supernova remnants. 
First, the presence of a young pulsar indicates that a supernova explosion has recently occurred so spatial coincidence of a candidate with this type of star can significantly increase our confidence in its classification. Second, radio emission from SNRs is primarily non-thermal synchrotron emission that can be differentiated from thermal emission using polarization and spectral indices. Non-thermal synchrotron emission is associated with a steep negative spectral index and linear polarization while thermal optically thin free-free emission is associated with a flat, unpolarized spectrum. As we are observing at relatively low radio frequencies, we can also expect to find compact HII regions with optically thick free-free emission and steep positive spectral indices. §.§.§ Radio and MIR Emission Comparing radio and MIR fluxes is a commonly used technique for distinguishing non-thermal SNR emission from thermally emitting sources like HII regions <cit.>. HII regions produce MIR emission primarily through stochastic heating of small dust grains and the vibrational and bending modes of polycyclic aromatic hydrocarbons (PAHs) <cit.>. Supernovae are known to produce significant amounts of dust and SNRs can also produce MIR emission through these thermal processes. However, this emission is relatively weak. This is in part because most dust (>90%) in SNRs is found in the dense, cool gas phase, rather than the X-ray emitting plasmas, resulting in a spectral energy distribution that peaks at longer infrared wavelengths than studied here <cit.>. Additionally, while the supernova shock can heat dust grains, it can also result in the destruction of a significant amount of dust and large molecules like PAHs <cit.>. According to <cit.>, only 2 - 20% of the initial dust mass survives the passage of the reverse shock, depending on the density of the surrounding ISM. Therefore, dust emission from SNRs is typically expected to be weak, if it is detectable at all, and importantly it has been shown that SNRs have significantly lower MIR to radio flux ratios when compared to HII regions <cit.>. Thus, while some SNRs do emit in the MIR, sources that lack MIR emission are unlikely to be HII regions. We began our search for SNR candidates by comparing the radio data from ASKAP to MIR data from WISE. The images used can be found in Figure <ref>. Because of the small size of the field, we were able to perform a detailed search by eye, specifically looking for shell-like structures in the radio that do not have MIR counterparts. This was done separately by two of the authors before comparing results. To simplify the search, we eliminated sources that had already been classified as HII regions. The WISE Catalogue of Galactic HII Regions <cit.> was used to identify all known HII regions in the field. The catalogue was made using 12 μm and 22 μm data from WISE and includes over 8000 Galactic HII regions and HII region candidates. We use Version 2.2 of the catalogue, downloaded from <http://astro.phys.wvu.edu/wise/>, and include all listed entries. We did not find any sources in our field that we believe to be HII regions that were not already part of the WISE HII region catalogue. Specifically, we did not find any new extended sources that were clearly visible in both radio and MIR. Thus, the radio to MIR comparison was not used to rule out any potential candidates but instead served primarily as evidence that our candidates are not HII regions. 
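The screening against the catalogue was done by eye; an automated positional cross-match of the same kind could look like the following sketch (ours; the file name and the column names "GLong", "GLat", and "Radius", as well as the example candidate position and radius, are assumptions to be adjusted to the actual catalogue table):

```python
from astropy.coordinates import SkyCoord
from astropy.table import Table
import astropy.units as u

# Illustrative cross-match of a candidate position against the WISE HII-region catalogue (v2.2).
hii = Table.read("wise_hii_V2.2.csv", format="ascii.csv")
hii_coords = SkyCoord(l=hii["GLong"] * u.deg, b=hii["GLat"] * u.deg, frame="galactic")

# Candidate position and radius (values illustrative only).
cand = SkyCoord(l=324.3 * u.deg, b=0.2 * u.deg, frame="galactic")
cand_radius = 5.0 * u.arcmin

sep = cand.separation(hii_coords)
overlapping = sep < (cand_radius + hii["Radius"] * u.arcsec)   # assumed radius column/unit
print(f"{overlapping.sum()} catalogued HII regions overlap this candidate")
```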
§.§.§ Spectral Indices Spectral indices can help to differentiate thermal and non-thermal emission. We assume the relation S(ν) ∝ν^α where S represents the flux density, ν represents the frequency, and α represents the spectral index. The synchrotron spectral index is determined by the power-law energy distribution of relativistic particles which are accelerated through multiple crossings at the shock front in a process known as diffusive shock acceleration (DSA) <cit.>. For a shell-type remnant, linear DSA predicts a spectral index of -0.5 and observations show that most catalogued Galactic SNRs have a spectral index within the range α = -0.5 ± 0.2 <cit.>. However, there is significant uncertainty in many of the measured values and outliers do exist. Thermally-emitting HII regions are expected to have flatter spectral indices for optically thin free-free emission, generally around -0.1, or steep inverse spectra, around +2, for optically thick. Pulsar wind nebulae (PWNe), or centre-filled supernova remnants, tend to have flatter spectra as well, usually within the range -0.3≤α≤0.0, though in rare cases they can be as steep as -0.7 <cit.>. PWNe accelerate relativistic particles through a different mechanism, the interaction of the pulsar wind with the supernova ejecta, which typically results in a flatter particle energy distribution. Thus, PWNe can be more difficult to distinguish from thermal sources when they do not have a visible shell component. This is not the only inherent bias against detecting these types of sources as they are also generally smaller and have a less visually distinct morphology. In fact, only 3% of catalogued Galactic SNRs are shell-less PWNe <cit.>. We calculate spectral indices using data from GLEAM <cit.> for sources that are large enough and bright enough to be detected in the highest frequency band of the GLEAM survey. These appear to be sources that are at least 5' in diameter with a flux density of around 1 Jy in the GLEAM band. This includes all of the known supernova remnants in the field but only two of our SNR candidates. The GLEAM flux densities were calculated using the method outlined in Section <ref>. These spectral indices can be found in Table <ref> with errors based on the uncertainties of the flux densities. Attempts were made to calculate in-band spectral indices with the ASKAP pilot II data but the uncertainties were too significant to produce meaningful results. Follow up observations at a second frequency would be valuable in allowing for the calculation of spectral indices for sources that are not visible in the GLEAM survey. §.§.§ Polarization Detection of linearly polarized radio emission is strong evidence that an extended Galactic radio nebula is a supernova remnant. SNRs emit highly linearly polarized synchrotron emission. The degree of polarization can be intrinsically more than 70%. For the SNR G181.1+9.5, in both the Effelsberg 5 GHz observations and 1.4 GHz observations with the synthesis telescope at the Dominion Radio Astrophysical Observatory (DRAO ST), <cit.> find polarization of about 70%. G181.1+9.5 is a highly evolved SNR with a highly compressed and very regular magnetic field in its shell. Conversely, young SNRs typically have much lower intrinsic degrees of polarization as they display significant turbulence in their expanding shells. The lowest polarization observed at a high radio frequency in an SNR is possibly the 2% observed in SNR G11.2-1.1 at a frequency of 32 GHz <cit.>. 
EMU and POSSUM observe at radio frequencies between 800 and 1087 MHz. At these low frequencies, Faraday rotation strongly affects polarized signals. There is foreground Faraday rotation in the magneto-ionic medium between us and the nebula and there may be internal effects inside the SNR's shell. In the SNR's shell we find a mix of synchrotron emitting and Faraday rotating plasmas, which means that internally the synchrotron emission may be affected by different amounts of Faraday rotation depending on where the emission comes from within the shell. Integrating this emission along the line of sight through the emission region may lead to significant depolarization. These effects become significantly worse at low frequencies as the amount of depolarization is inversely proportional to the frequency squared. Faraday rotation in the foreground ISM may also lead to depolarization, especially if the foreground path traverses turbulent ionized areas such as HII regions, or even spiral arms. Because we are observing at a relatively low frequency, failure to detect polarization should not be considered evidence that a candidate is not an SNR, especially considering that many of our candidates are small or faint sources that may be located at far distances across the Galactic plane. The probability of detecting polarization from our SNR candidates is higher for high latitude sources, as their foreground likely does not contain any turbulent ionized regions. SNRs located close to the plane of the Galaxy may suffer from foreground depolarization caused by overlapping HII regions in the foreground or parts of a spiral arm between the SNR and us as spiral arms contain enhanced electron densities and magnetic fields. Distinguishing between real and instrumental polarization is an additional challenge. As shown by the polarized intensity data in Figure <ref>, we see evidence of instrumental effects at the positions of known HII regions. Particularly for bright sources, we believe that polarization with a smooth structure that closely resembles what is seen in total power is potentially the result of Stokes I leakage. Polarization that has a speckled appearance is more likely to be real as this indicates a changing rotation measure, or intrinsic polarization angle, on small scales. For further evidence of real polarization, we look for structures that are contained entirely within the SNR and that appear similar in the polarized intensity and rotation measure maps. Real polarization from the SNR should be distinct from what is seen in the surrounding background in intensity and rotation measure. We detect polarized signal significantly above the noise from all known SNRs in our field, but we are not convinced that all of it is real. Further details for each source are provided in Section <ref>. Similarly to <cit.>, we are mostly unsuccessful in detecting polarization from Galactic SNR candidates. We are only able to detect what we believe to be real polarization from one of our candidates, G328.0+0.7. The details of this can be found in the candidate description in Section <ref>. §.§ Characteristics of Known Supernova Remnants As discussed in Section <ref>, supernova remnants can be identified by looking for extended radio sources that lack a mid-infrared counterpart. The top images in Figure <ref> show a known HII region that could potentially be misidentified as an SNR based solely on its radio morphology. 
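To make the frequency dependence noted above concrete, a small worked example with illustrative numbers (ours): a rotation-measure variation of, say, δRM = 15 rad m^-2 across a beam or along the line of sight rotates the polarization angle by δχ = δRM λ^2. At the 933 MHz band centre, λ = c/ν ≈ 0.321 m, so δχ ≈ 15 × (0.321 m)^2 ≈ 1.5 rad (≈ 89^∘), enough to largely cancel the polarized signal, whereas at 32 GHz the same variation gives δχ ≈ 15 × (9.4×10^-3 m)^2 ≈ 1.3×10^-3 rad, which is negligible.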
The presence of a clear counterpart in the MIR helps to correctly identify this source as an HII region. The bottom images in Figure <ref> show a known SNR, G327.1-1.1, and its clear lack of MIR emission. A small HII region can be seen along the right edge of the SNR images in both the radio and the MIR, further illustrating this distinction. There are eight known supernova remnants in the EMU/POSSUM Galactic pilot II field that appear in the <cit.> catalogue. We believe G323.7-1.0 should be reclassified as three separate sources so it is discussed under the candidates section. Our observations of the other seven known SNRs are discussed here. The 933 MHz ASKAP images of the SNRs can be found in Figure <ref>. More detailed images of the known SNRs in both radio and MIR, which include coordinate axes and colour bars, can be found in Appendix <ref>. §.§.§ G323.5+0.1 This source is a shell-type supernova remnant that lies at the edge of our field. It is only fully imaged in the ASKAP data at low frequencies so it has not been studied here in-depth and thus does not appear in Table <ref>. Figure <ref> was made using a 48 MHz wide channel centred at 823.5 MHz since the source is not fully visible in the higher frequency channels. The source overlaps with a bright HII region that can be seen in MIR. §.§.§ G326.3-1.8, MSH 15-56 This SNR is a composite source, a bright pulsar wind nebula with a well-defined radio shell. It is the largest supernova remnant in the field with a size of 38'. Bright filamentary emission can be seen coming from the shell. The PWN is offset to the west of the remnant's centre and is elongated in the north-south direction (Figure <ref>). The source is estimated to be at a distance of 3.5-5.8 kpc <cit.>. There is clear polarization, mostly concentrated around and extending from the PWN (Figure <ref>). This can be seen in both the polarized intensity and rotation measure maps. There is also some polarization coming from the east side of the shell. We believe all of this polarization is real since it is not smooth and aligns with what is expected from total power without mirroring it exactly, which could indicate leakage. It also has a high negative RM and there is almost no polarized emission coming from the background in this part of the field it could be confused with. §.§.§ G327.1-1.1 This SNR is another composite source with a bright pulsar wind nebula and a relatively faint shell that is likely missing some short spacings in the ASKAP data. This is evidenced by the negative bowls of emission, seen in Figure <ref>. There is a bright HII region located to the west of the SNR. The remnant is believed to be located at a distance of 4.5-9 kpc <cit.>, indicating it is likely located within or near the Norma arm. There is obvious polarization coming from the PWN but no clear polarization coming from the shell (Figure <ref>). We believe this polarization to be real because it is contained entirely within the PWN and is not smooth. §.§.§ G327.2-0.1 This source is believed to be a shell-type remnant associated with a young magnetar, J1550-5418, located near its centre <cit.>. The magnetar has a characteristic age of 1.4 kyrs and a rotation measure of -1860± 20 rad/m^2 <cit.>. Distance estimates for the shell are between 4-5 kpc <cit.> while the magnetar has been estimated to be at a distance of 9 kpc <cit.>. The source lies on a larger filament of emission that is likely unrelated to the SNR (Figure <ref>). 
While we were not able to detect any clear real polarization coming from the shell due to confusion with the background, there does seem to be real polarization coming from the centre (Figure <ref>). Specifically, there are two peaks in the rotation measure for the central point source. We believe the first peak, around -1820 rad/m^2, comes from the pulsar and we speculate that the second peak, around 15-30 rad/m^2, may come from a previously undetected pulsar wind nebula. §.§.§ G327.4+0.4, Kes 27 This shell-type SNR exhibits multiple shell structures and many internal filaments. There is a small overlapping HII region that can be seen in both radio and MIR (Figure <ref>). HI absorption suggests the SNR is located at a distance of 4.3-5.4 kpc <cit.> while optical extinction suggests a distance of 2.8 kpc <cit.>. We detect polarization coming from several parts of the remnant (Figure <ref>). We believe the polarization seen along the northeast edge of the remnant to be real as it is distinct from the total power structure, and thus cannot be leakage, and has a high positive rotation measure that is distinct from the RM found in the rest of the image. The polarization in the southeast may be real as well but this is not certain as the smoother appearance and flatter RM may indicate that it is instrumental. There is also some polarization near the centre of the remnant that has a high negative RM and may be real but this is also unclear. §.§.§ G327.4+1.0 This source is an asymmetrical shell that is brightest along the northwestern edge. There is also some faint central emission with filaments that curve in the same direction as the shell (Figure <ref>). There is potential polarization coming from the centre of the remnant but it is not strong enough to be definitively distinct from the background features seen in the south (Figure <ref>). However, the positive RM seems to be mostly confined to the source with some extending to the north of the remnant. This extension is somewhat mirrored in total power, possibly indicating that some of this polarization is real but this is not conclusive. §.§.§ G328.4+0.2, MSH 15-57 This SNR is believed to be the largest and most radio luminous pulsar wind nebula in our galaxy <cit.>. It has no visible shell but there is a central bar structure that runs in the southeast to northwest direction (Figure <ref>). It is believed to be located at a distance of over 16.7 kpc <cit.> placing it along the outer edge of the Galaxy. The source appears to be polarized but because it is so bright we believe this is likely the result of leakage. In RM we see two components, a high negative component and a low positive component. The high negative component extends over the remnant and has a similar structure to what is seen in PI. The low positive component seems to be concentrated to the west side of the PWN (Figure <ref>). §.§ Spectral Indices of Known Remnants For each of the known SNRs, we calculate a spectral index using the 933 MHz flux density from ASKAP and the 198 MHz flux density value from GLEAM. These indices can be found in Table <ref>. Figure <ref> shows the ASKAP and GLEAM flux densities calculated in this paper plotted with flux densities taken from <cit.> and references therein. The spectral indices seen on the plots are calculated using the slopes of the fit lines, which are made using the literature values as well as the ASKAP and GLEAM values. 
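As a concrete sketch of both calculations (ours; the flux densities below are placeholders, not values from the tables in this paper or the literature), the two-point index follows directly from S(ν) ∝ν^α and the fitted index is the slope of a straight line in log-log space:

```python
import numpy as np

def two_point_alpha(s1, nu1, s2, nu2, ds1=0.0, ds2=0.0):
    """Spectral index alpha for S(nu) ~ nu^alpha from two flux densities,
    with simple propagation of the flux-density uncertainties."""
    alpha = np.log(s1 / s2) / np.log(nu1 / nu2)
    dalpha = np.hypot(ds1 / s1, ds2 / s2) / abs(np.log(nu1 / nu2))
    return alpha, dalpha

def fitted_alpha(freqs, fluxes):
    """Slope of a straight-line fit in log-log space, i.e. the index obtained
    when literature flux densities are included in the fit."""
    slope, _ = np.polyfit(np.log10(freqs), np.log10(fluxes), 1)
    return slope

# Placeholder numbers: a source with S = 4.0 +/- 0.4 Jy at 198 MHz (GLEAM)
# and 2.0 +/- 0.2 Jy at 933 MHz (ASKAP).
alpha, dalpha = two_point_alpha(2.0, 933e6, 4.0, 198e6, ds1=0.2, ds2=0.4)
print(f"two-point index: {alpha:.2f} +/- {dalpha:.2f}")        # about -0.45 +/- 0.09

# Adding made-up literature points at 1.4 and 5 GHz to the fit:
print("fitted index:", round(fitted_alpha(
    [198e6, 933e6, 1.4e9, 5e9], [4.0, 2.0, 1.65, 0.93]), 2))
```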
We use this second method of calculating spectral indices to evaluate whether or not the indices found using only the ASKAP and GLEAM data are reliable. G326.3-1.8 is a composite source that has been shown to have an intermediate index (∼-0.3) with a flatter component coming from the PWN and a steeper component coming from the shell <cit.>. The values we obtain are in moderate agreement with each other and are consistent with the expected value for this source. G327.1-1.1 is also a composite source that is believed to have an intermediate index <cit.>. The flux value we obtain from the GLEAM data appears to be too low, as demonstrated by the plot, resulting in a spectral index that is likely too flat. The value obtained from the fit is consistent with what is expected and is likely more reliable. G327.2-0.1 is believed to be a shell-type remnant though it is associated with a known magnetar and could be a composite source. The values we obtain are in moderate agreement though they have relatively large errors. Both are consistent with what is expected for non-thermal emission from a shell-type remnant. G327.4+0.4 is also a shell-type remnant. The values we obtain are not in agreement though both are consistent with the expected index of a shell-type remnant. The plot appears to indicate that the GLEAM flux value may be too low. G327.4+1.0 is a shell-type remnant for which there was previously only one flux measurement <cit.> so the spectral index derived from the linear fit may be less reliable. The values we obtain are consistent with each other and although they are flat for a shell-type remnant, they are not unreasonable for this type of source. Given that many of the GLEAM fluxes seem to be lower than expected, and because our fit is based on only three data points, the spectral index of this source may be steeper than the values we have calculated here. G328.4+0.2 is a shell-less PWN that has been shown to have a flat spectral index <cit.>. The values we obtain are consistent with each other and with what is expected for a typical pulsar wind nebula. While the spectral indices we calculate using these two methods are generally in agreement with each other and with the expected values for the corresponding SNR types, many of these values have relatively large errors. The values obtained from the linear fit should be considered to be more reliable than the values provided in Table <ref> as the deviations seem to mostly be the result of lower than expected values for the GLEAM fluxes, particularly for the fainter remnants. Thus, the spectral indices we calculate for our candidates using this data should be viewed with a level of skepticism. High resolution data at a second frequency will likely be required to produce reliable spectral indices for the SNR candidates. § SNR CANDIDATES The locations of our SNR candidates are indicated in Figure <ref>. Notably, the candidates are highly concentrated in the lower right corner of the image. We believe we were able to detect more candidates in this part of the field because of the relatively low density of HII regions compared to the upper left half of the image. This may seem to contradict what is expected as the true SNR density is likely to be higher within spiral arms, not away from them. Thus, we believe it is highly probable that there are faint sources within the Norma arm region of the field that we were unable to detect due to the high concentration of thermal emission and HII regions. 
Data collected for known and candidate supernova remnants is shown in Table <ref>. Right ascension and declination are determined by fitting an ellipse to the source using CARTA software <cit.> and taking the central coordinates of the ellipse. The sizes of the sources are determined by taking the major and minor axes of these ellipses. §.§ Characteristics of SNR Candidates Here we list 21 supernova remnant candidates, three times the number of known SNRs in this field. We believe some of these sources to be strong SNR candidates while others are weaker and will require further observations to determine if they are indeed SNRs. We define the strength of our candidates based on whether or not we are able to find evidence of other expected SNR properties, such as polarization or a steep negative spectral index. Only two of the candidates were visible in the GLEAM data, allowing us to calculate spectral indices, and only one shows clear evidence of real polarization. The 933 MHz images of the candidates can be found in Figure <ref>. More detailed images of the SNR candidates in both radio and MIR, which include coordinate axes and colour bars, can be found in Appendix <ref>. *G323.2-1.0 This source has the morphology of a pulsar wind nebula with a very faint shell. As shown in Figure <ref>, there is an infrared source that overlaps with the PWN but the IR source has a different morphology and has been identified as a star by <cit.>. The shell is roughly circular and the PWN has a similar shape to the PWN of G326.3-1.8, but elongated along the east-west axis. This source was listed as an SNR candidate by <cit.> but was not confirmed and no image was provided. *G323.6-1.1, G323.6-0.8, G323.9-1.1 This object is currently classified as a single source in Green's catalogue, G323.7-1.0 <cit.>. The ASKAP observations have revealed faint filamentary structures that were previously not visible, leading us to believe that it is actually three separate overlapping sources. As shown in Figure <ref>, there are two brighter, elliptically shaped sources, one located to the northwest (G323.6-0.8) and the other to the southeast (G323.9-1.1). The third source (G323.6-1.1) appears to lie behind the other two sources and only a faint southwestern edge can be seen. Because this source is very faint and overlaps with the other sources, it was not possible to obtain a flux estimate. For the two brighter sources, only rough upper limits could be obtained for the flux densities. It is unclear if all three sources are supernova remnants but none of them has an obvious MIR counterpart. *G323.7+0.0 The source shown in Figure <ref> is a small shell-like structure that is roughly circular and brightest along the southern edge. There is a small amount of overlap with a known HII region in the southeast and a larger HII region can be seen to the northeast. There is some faint MIR emission to the east that could be related to the radio emission. Since the emission is faint and the relation is not entirely clear, we include the source in our list of candidates. *G324.1-0.2 This candidate does not have a clear shell-structure but it has a roughly circular shape with well-defined rounded edges, characteristic of a shock front (Figure <ref>). There are several overlapping point sources and a bright HII region can be seen in the northwest. There is no clear MIR counterpart. *G324.1+0.0 This source was originally identified as an SNR candidate by <cit.> but an image was not included. 
<cit.> have also listed it as a candidate and provided an image but there was insufficient evidence for it to be included in the <cit.> catalogue. The source, seen in Figure <ref>, is an elliptical shell, elongated in the east-west direction, with the brightest emission coming from the north and a fainter shell visible in the south. Multiple HII regions can be seen in the image and there is some overlap between the SNR candidate and HII regions in the northwest and northeast. This candidate is visible in the GLEAM data and a spectral index of -0.3 ± 0.2 was determined. The index indicates the emission may be nonthermal but the uncertainty is too large to be conclusive. Because this source has been studied previously and has a very clear shell-like morphology, we believe it should be classified as an SNR. *G324.3+0.2 The source shown in Figure <ref> has a shell structure that is almost perfectly circular, with some brightening towards the southeast. Based on the distinct morphology and clear lack of an MIR counterpart, we believe this candidate should be classified as an SNR. A large, bright HII region can be seen to the west of the source. *G324.4-0.4 In Figure <ref> we see an elliptical shell with brightening along the southwest edge. There are many overlapping point sources and a couple of known HII regions located to the northeast. There is overlapping emission in the MIR but nothing that clearly mirrors the shell structure seen in the radio. *G324.4-0.2 The source shown in Figure <ref> is a very small, faint shell-like structure with an overlapping point source. It is located just north of G324.4-0.4 and can also be seen in Figure <ref>. There is no obvious MIR counterpart. *G324.7+0.0 This candidate is a partial shell that arcs in the southern direction with no clear northern counterpart though some faint emission can be seen extending north from the southern shell (Figure <ref>). There is a bright overlapping point source in the east. Several bright point sources can be seen in the MIR but the shell has no counterpart. *G324.8-0.1 As shown in Figure <ref>, this source consists of faint filaments that form a large ellipse. The source overlaps with G324.7+0.0 and with a large HII region in the northeast. Because of this overlap, only a rough upper limit could be obtained for the flux. The rounded filaments do not appear to have an MIR counterpart. *G325.0-0.5 This candidate, shown in Figure <ref>, is composed of filaments that form a roughly elliptical structure. Many overlapping sources can be seen, including G325.0-0.3 and several HII regions. A long filament, which can be seen in MIR and is thus likely thermal, runs from the northeast to the southwest. The filaments that form the edges of the ellipse are not visible in MIR. Because of the overlapping emission, only an upper limit could be obtained for the flux. *G325.0-0.3 This candidate was originally identified by <cit.> (with no image) and later studied by <cit.> who provided an image but the source is not included in the <cit.> catalogue. In Figure <ref> a clear shell structure can be seen that is brightest to the east. A filament of emission can be seen to the east of the candidate but it is likely thermal and unrelated. The candidate has no MIR counterpart. Based on our observations and the previous studies, we believe this source should be classified as an SNR. *G325.0+0.2 This source consists of roughly circular emission that is brightened to the west where the edge of the source is the most well-defined (Figure <ref>). 
A bright overlapping point source can be seen to the east. Emission can be seen in the MIR but it does not have the same rounded structure as the radio source. *G325.8-2.1 This object is composed of very faint filaments of emission found to the southwest of the bright known SNR G326.3-1.8. As shown in Figure <ref> the source overlaps with negative bowls of emission, likely caused by missing short spacings. The filaments can be seen most clearly in the southeast but some faint structures are also visible in the northwest. This source is only visible because it is located at a far distance from the Galactic plane and there is little thermal emission. The source was too faint to obtain an estimate of the flux density. *G325.8+0.3 This candidate is composed of rounded filamentary structures that can be seen to the north and southeast in Figure <ref>. The source overlaps with a large HII region and many point sources. Missing short interferometer spacings generate a negative bowl around these strong emitters. Because of this, and because the filaments are so faint, we were unable to obtain a flux estimate for the source. *G327.1+0.9 This candidate is a small, roughly circular source that appears to have a brightened shell-like edge to the east (Figure <ref>). This is our smallest source with a size of 2' and we believe that this marks the approximate lower size limit of sources in which we would be able to detect a shell-like structure, given the resolution of our data. This source appears in <cit.> in their table of 24 μm nebulae compiled using data from the Multiband Imaging Photometer for Spitzer (MIPS). There is a circular MIR nebula at 24 μm (diameter of 1.6') with a total flux of 1.4 Jy <cit.>, with a compact source in the infrared visible at the centre. Since there is no counterpart at the WISE 12 μm image (see Figure <ref>, F_12<0.05 Jy), the source meets our criteria for a candidate. However, <cit.> identify this source as a possible wind driven bubble around a Wolf-Rayet (WR) star. The dusty nebulae around WR stars can be bright at 22 μm and relatively faint at ∼12 μm, with spectroscopy studies suggesting that most emission toward WR nebulae at ∼12 μm comes from material along the line of sight <cit.>. Since the winds from Wolf-Rayet stars can be detected in the 1.4 GHz radio continuum <cit.>, this candidate could be a dusty WR nebula. However, G327.1+0.9 is an extended shell-like radio source and the radio emission from Wolf-Rayet winds is typically centrally peaked and does not extend much further than 1000 stellar radii <cit.>. In this case we propose that there are three possible explanations for G327.1+0.9. The first would be a late stage HII region produced by the central Wolf-Rayet star. The second would be a planetary nebula <cit.>. Finally, it could be the SNR of a supernova that exploded in a binary system. Follow-up polarimetric observations at high radio frequencies with ATCA should help to solve this mystery, as only the SNR would show linearly polarized radio emission. *G328.0+0.7 The source shown in Figure <ref> is composed of roughly circular emission with a well-defined edge that is brightest in the northern part of the shell. There is a point source located near the centre of the candidate that coincides with the location of the known young (∼ 37 kyrs) pulsar J1551-5310 <cit.>. Several HII regions can be seen in the south. 
There is possibly polarization associated with the northwest edge of the shell but it cannot be definitely differentiated from the background emission, as shown in Figure <ref>. We do see clear polarization coming from the central pulsar with a rotation measure of -1017±5 rad/m^2, consistent with the catalogued value of -1023.3 ± 6.3 rad/m^2 <cit.>. Because of the possible association with a young pulsar and detectable polarization, we believe this source should be classified as a supernova remnant. *G328.6+0.0 This source was observed by <cit.> and is listed by <cit.> as an SNR candidate but it does not appear in the actual catalogue. Here we see elongated filaments that form a roughly elliptical shape, which could be two separate sources (Figure <ref>). There is a well-defined shell-like edge in the southwest and less defined shell-like filaments in the northeast. There are several overlapping HII regions and an unidentified bright point source located near the geometric centre of the southwest arc that may be polarized. The very bright source to the northwest of the image is G328.4+0.2. This source is visible in the GLEAM data and a steep negative spectral index of -0.75 ± 0.06 was determined, indicating the emission is likely nonthermal. We believe this source should also be classified as a supernova remnant. *G330.2-1.6 This source, shown in Figure <ref>, is very faint but it is located at a far distance from the Galactic plane making it detectable. It has a clear circular shell-like structure with little to no emission coming from the centre and no obvious MIR counterpart. The filamentary emission seen in the southeast appears to be unrelated and may instead be a radio galaxy, with a bright core, two external bright spots, and internal lobes, indicating jet precession. § DISCUSSION §.§ Estimating the SNR Density of the Galactic Disk Based on our estimate of the size of the Galactic SNR population, we can roughly estimate the theoretical SNR surface density of the Galactic disk. Since most SNRs should be found within the star-forming disk, we assume a Galactic radius of 12.5 kpc, which encompasses 95% of the stellar mass of the Milky Way disk <cit.>. We estimate that the Milky Way should have at least 1000-2700 radio-bright SNRs. This corresponds to an average Galactic SNR surface density between 2.0-5.5 SNRs/kpc^2. We can compare this range of values to the density of known Galactic SNRs. The University of Manitoba's SNRcat <cit.> currently lists 383 SNRs and SNR candidates. Of these, 369 have been detected in the radio. Thus, the Galaxy has an average known SNR surface density of only 0.75 radio-bright SNRs/kpc^2. To gain further insight into the missing SNR population, we can analyze how the known SNRs are distributed within the Galaxy by quadrant. Quadrants I and IV look towards the Galactic centre and encompass a larger, but denser, part of the Galactic plane. Quadrants II and III look away from the Galactic centre with shorter lines of sight and less emission. The known SNR surface density distribution can be broken down by quadrant as follows: * Quadrant I (0^∘≤ l<90^∘): 0.74 SNRs/kpc^2 * Quadrant II (90^∘≤ l<180^∘): 1.23 SNRs/kpc^2 * Quadrant III (180^∘≤ l<270^∘): 0.63 SNRs/kpc^2 * Quadrant IV (270^∘≤ l<360^∘): 0.72 SNRs/kpc^2 We see that Quadrant II has a noticeably higher density of known SNRs than the other quadrants. It is likely higher than Quadrants I and IV because SNRs in Quadrant II are closer (on average) and there is less thermal emission in this direction. 
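The surface densities quoted above follow directly from the assumed disk geometry. A minimal check, using only the numbers stated in the text, is:

```python
import numpy as np

R_DISK_KPC = 12.5                              # radius enclosing ~95% of the disk stellar mass
disk_area = np.pi * R_DISK_KPC ** 2            # ~491 kpc^2

# Expected average density for the estimated population of radio-bright SNRs.
for n_expected in (1000, 2700):
    print(f"{n_expected} SNRs -> {n_expected / disk_area:.1f} SNRs/kpc^2")

# Density of the known radio-detected population (369 of the 383 catalogued sources).
print(f"known radio SNRs -> {369 / disk_area:.2f} SNRs/kpc^2")
```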
Quadrant II is also relatively well-surveyed which may explain why it has a higher density of known SNRs than Quadrant III. The EMU/POSSUM Galactic pilot II field (323^∘≤ l<330^∘) has 7 known SNRs giving it an SNR surface density of 0.34 SNRs/kpc^2, around a factor of 2 lower than the Quadrant IV average. To achieve the theoretical average Galactic SNR density of 2.0-5.5 SNRs/kpc^2 we would need to find 34-106 new SNRs in this field. If we include our 21 SNR candidates, the density of known SNRs is brought up to 1.36 SNRs/kpc^2, which is comparable to the SNR density we see in Quadrant II. §.§ The Known Galactic SNR Population Figure <ref> shows how the angular sizes and flux densities of our SNR candidates compare to the known Galactic SNR population, not including the candidates for which we were unable to determine a reliable flux density. This plot was made using data from <cit.>, omitting sources where no flux estimate was given. It should be noted that 1 GHz flux densities are provided in Green's catalogue and our 933 MHz values were not adjusted to this frequency since we do not have spectral indices for most of the candidates. However, given the small frequency difference relative to the scale of the plot, the changes in the positions of the points are insignificant, even for sources with steep spectra. The shaded regions of the plot indicate what we estimate to be the limits of what can be detected with our data. The minimum size was chosen to be 2' as this is the size of our smallest source and we believe it would be difficult to detect shell-like structures for smaller sources given the 18" resolution. The minimum surface brightness was based on an estimate of the minimal thermal noise (taken to be σ = 40 μJy/beam), though variations across the image likely set different limits for different regions. The grey dotted lines in the plot represent the σ and 3σ surface brightness limits. Because we are studying extended sources, rather than point sources, we are able to detect sources below the theoretical thermal noise limit. Our results support that there is likely still a large population of undetected supernova remnants within our Galaxy. Figure <ref> demonstrates that many of our candidates are smaller and fainter than most known SNRs. This may be evidence that some of the missing SNR population was missing because it was previously undetectable. Improvements in the angular resolution and sensitivity of radio telescopes should then allow for the detection of some of these SNRs in future sky surveys. It may be that improvements in angular resolution are more important as the technological surface brightness lower limit is approaching the limit set by the background emission <cit.>. However, future improvements in angular resolution may also have limited impact in Galactic SNR observations. Given that we have shown we are capable of detecting shell-like structures in sources as small as 2', even at the farthest Galactic distances within this field (∼18 kpc) we should be able to detect sources with linear sizes as small as 10 pc. Based on observations of other Local Group galaxies <cit.> and given that X-ray observations of young (small) Galactic SNRs are believed to be fairly complete <cit.>, it is unlikely that there are many undetected sources below this size limit. Thus, any remaining missing SNRs in this field are most likely obscured by superimposed radio-bright sources, particularly HII regions. 
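The field densities above depend on the area of the Galactic disk subtended by the 323-330 degree pilot field as seen from the Sun. The sketch below reproduces the quoted values to within rounding, assuming a Sun-to-Galactic-centre distance of about 8 kpc (a value not given in the text) together with the 12.5 kpc disk radius used earlier; it also checks the minimum detectable linear size implied by a 2' source at the far edge of the field.

```python
import numpy as np

R_SUN, R_DISK = 8.0, 12.5                         # kpc; R_SUN is an assumed value
l = np.radians(np.linspace(323.0, 330.0, 201))    # Galactic longitude range of the field
phi = 2.0 * np.pi - l                             # angle from the Sun-centre direction
# Distance from the Sun to the far edge of the stellar disk along each line of sight.
d_far = R_SUN * np.cos(phi) + np.sqrt(R_DISK**2 - (R_SUN * np.sin(phi))**2)

y = 0.5 * d_far**2                                # integrand for the wedge area
area = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(l))
print(f"field area ~ {area:.1f} kpc^2")
print(f"7 known SNRs   -> {7 / area:.2f} SNRs/kpc^2")
print(f"+21 candidates -> {28 / area:.2f} SNRs/kpc^2")
print(f"needed for 2.0-5.5 SNRs/kpc^2: {2.0 * area - 7:.0f}-{5.5 * area - 7:.0f} new SNRs")

# A 2 arcminute shell at ~18 kpc corresponds to roughly 10 pc.
print(f"2 arcmin at 18 kpc ~ {18e3 * np.radians(2.0 / 60.0):.0f} pc")
```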
The impact of background emission on SNR detection can be further demonstrated by the fact that we were not able to detect new sources in the brighter part of the field that we believe to be associated with the Norma arm. In fact, almost all of our candidates are in the lower right corner of the field, as shown in Figure <ref>. Theoretically, there should be a higher density of supernova remnants within and near spiral arms. If we compare the bright upper left half of the image to the lower right half, including our candidates the former has an SNR density of ∼1 SNR/kpc^2 while the latter has a density of ∼2 SNRs/kpc^2, which is the theoretical lower limit we derived in Section <ref>. While the actual SNR density should be higher in the upper left half, we find the observed SNR density to be higher in the lower right half. This indicates there is likely still a population of SNRs within the bright part of the field that we were unable to detect. Further, this demonstrates the impact of background emission in setting detectability limits and supports that technological improvements in sensitivity may no longer be as effective for detecting new radio SNRs in these types of regions. § CONCLUSIONS In this paper we used pilot data from ASKAP to study the known supernova remnant population in a small field of the Galactic plane. We also found 21 SNR candidates, three times the number of known SNRs in this field. Of the candidates, 13 have not been previously studied, 4 have been studied as SNR candidates, 3 classed as a single SNR, and 1 studied as an MIR nebula. For most candidates, observations at a second, ideally higher, frequency are required to confirm the sources as SNRs as these observations would likely provide more information about polarization and spectral indices. The results of this paper demonstrate the potential for the full EMU/POSSUM surveys, taking place over the next few years, to fill in some of the missing Galactic supernova remnant population through the detection of small and/or faint sources. We were able to detect sources that seem to have been missed in previous surveys due to their small angular size and/or low surface brightness. Comparing the properties of our candidates to the known Galactic SNR population further supports this. Future work using ASKAP data to expand upon the size of the surveyed Galactic field will likely detect more of these types of sources, allowing for a better characterization of the Galactic SNR population. In this field we have uncovered approximately 1 new SNR/kpc^2. The full EMU/POSSUM surveys will cover roughly 60% of the Galactic plane meaning they could hypothetically uncover around 300 new SNRs. However, it is important to note the impact of background emission and the challenges we faced in detecting low surface brightness SNRs in complex regions. Because a significant portion of the surveyed area will be in the direction of the Galactic centre, it is unlikely that we will be as successful in these denser, brighter fields. The missing supernova remnant problem and the size of the Galactic SNR population remain difficult problems to exactly quantify. While technological advancements have continued to lead to new SNR detections, there is likely a population of SNRs that will never be observed due to detection limits set by the local background. Estimates of the size of the Galactic SNR discrepancy should then exclude these types of sources as they may be considered undetectable. 
In future work we hope to further explore this problem, utilizing the full EMU/POSSUM surveys to study variations in SNR detectability across the Galactic plane. § ACKNOWLEDGEMENTS We are grateful to an anonymous referee whose comments improved the quality of the paper. This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. CSIRO’s ASKAP radio telescope is part of the Australia Telescope National Facility (https://ror.org/05qajvd42). Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Research Centre. Establishment of ASKAP, Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Research Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. The POSSUM project (https://possum-survey.org) has been made possible through funding from the Australian Research Council, the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs Program, and the Canada Foundation for Innovation. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. RK acknowledges the support of NSERC, funding reference number RGPIN-2020-04853. ER acknowledges the support of NSERC, funding reference number RGPIN-2022-03499. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. B.M.G. acknowledges the support of NSERC through grant RGPIN-2022-03163, and of the Canada Research Chairs program. DL acknowledges the support of NSERC. § DATA AVAILABILITY ASKAP data are available on the CSIRO ASKAP Science Data Archive (CASDA). (https://research.csiro.au/casda/) mnras § KNOWN SUPERNOVA REMNANTS In Figures <ref> to <ref> we present images of the known SNRs in the EMU/POSSUM Galactic pilot II field. Here we show the 933 MHz radio images from ASKAP, the same region in the MIR at 12 μm using data from WISE, and a composite image with radio emission shown in red and MIR in blue. § SUPERNOVA REMNANT CANDIDATES In Figures <ref> to <ref> we present images of the SNR candidates in the EMU/POSSUM Galactic pilot II field using the same format described in Appendix <ref>. These are sources that do not appear in the <cit.> radio SNR catalogue.
http://arxiv.org/abs/2307.03242v1
20230706181058
LFA-tuned matrix-free multigrid method for the elastic Helmholtz equation
[ "Rachel Yovel", "Eran Treister" ]
math.NA
[ "math.NA", "cs.NA", "68", "G.1.8" ]
We present an efficient matrix-free geometric multigrid method for the elastic Helmholtz equation, and a suitable discretization. Many discretization methods have been considered in the literature for the Helmholtz equations, as well as many solvers and preconditioners, some of which are adapted for the elastic version of the equation. However, there is very little work that considers the discretization and the solver together. In this work, we aim to bridge this gap. By choosing an appropriate stencil for re-discretization of the equation on the coarse grid, we develop a multigrid method that can be easily implemented in a matrix-free fashion, relying on stencils rather than sparse matrices. This is crucial for efficient implementation on modern hardware. Using two-grid local Fourier analysis, we validate the compatibility of our discretization with our solver, and tune the stencil weights so that the convergence rate of the multigrid cycle is optimal. The result is a scalable multigrid preconditioner that can tackle large real-world 3D scenarios. Elastic wave modeling, elastic Helmholtz equation, shifted Laplacian multigrid, finite differences, high-order discretizations, full waveform inversion, parallel computations. 65N55, 74B99, 35J05, 65N06, 65F10 § INTRODUCTION The Helmholtz equation, also known as the time-harmonic wave equation, is widely used for modeling propagation of waves in either acoustic or elastic media. Its applications include acoustics <cit.>, electromagnetic radiation modeling <cit.>, seismic modeling <cit.> and fluid dynamics <cit.>. The most common application is full waveform inversion (FWI) <cit.>, an inverse problem of estimating the wave velocity distribution within a given domain based on observations from the boundaries. FWI is a central tool in gas and oil exploration and human brain tomography. In some of these applications the acoustic Helmholtz equation is not sufficient, and the wave propagation must be modeled by the elastic Helmholtz equation <cit.>. Discretizing the Helmholtz equation yields a large and indefinite linear system. Both of these properties make the system difficult to solve even by sophisticated iterative methods such as multigrid <cit.>. Moreover, modeling waves at large wavenumbers requires very fine grids. As a rule of thumb for standard finite differences discretizations, about 10 grid points per wavelength are used <cit.>, leading to systems that can involve hundreds of millions of unknowns. For the elastic Helmholtz equation, the linear system is even larger, for two reasons: first, it is a system of PDEs, and second, shear waves typically have a higher wavenumber compared to pressure waves, so finer meshes are required to model them <cit.>. When solving the equation numerically, we must take into account both the discretization and the solver. Despite the relationship between the two, in many works they are considered separately. Generally speaking, when the discretization is more sophisticated, the solver has to perform more work.
A simple example of this phenomenon is the increased fill-in that occurs in direct solvers when using high-order discretizations based on wide (non-compact) stencils. In this work we solve the elastic Helmholtz equation by choosing the discretization in accordance with the requirements of the solution method. There is a variety of discretizations for the Helmholtz equations. The acoustic equation has the standard second-order central-difference stencil, and also compact stencils of fourth- and sixth-order <cit.>. These compact stencils are efficient when applying direct solvers and parallel iterative solvers on multicore processors like graphical processing units (GPUs). In the works <cit.>, combining discretizations on rotated grids to lower the numerical dispersion is discussed. It yields 9- or 27-point compact stencils with un-lumped mass terms which end up similar to <cit.>. In all these high-order discretizations, typically a smaller number of grid-points per wavelength suffices. For the elastic version, a compact nodal discretization is suggested <cit.>, as well as a staggered discretization <cit.>, which is more stable in the nearly incompressible case. There is also research on lowering numerical dispersion <cit.>. However, for the staggered case the high-order discretizations are obtained by wide non-compact stencils. For example, fourth-order finite difference first derivatives in a single dimension are used for the gradient operators in <cit.>. In fact, <cit.> shows that the dispersion is minimized with slightly different coefficients than the classical fourth-order terms in <cit.>, hence the order of discretization is not necessarily the most important feature. Regardless, to the best of our knowledge, the available high-order schemes for the elastic equation using staggered grids are non-compact, as the extension of schemes like <cit.> to the elastic staggered case is not straightforward <cit.>. Concerning solvers, there are many approaches to the solution of the acoustic problem, mainly by domain decomposition <cit.> and shifted Laplacian multigrid <cit.>, but also other methods <cit.>. Fewer solvers are available for the elastic Helmholtz equation <cit.>. In this work we focus on a shifted Laplacian multigrid based solver. In our previous work <cit.> we showed that the mixed formulation of the elastic equation, in addition to box-smoothing, enables efficient solution of the elastic Helmholtz equation using shifted Laplacian multigrid. Also, in <cit.> we used a standard second-order discretization accompanied by Galerkin coarsening. This coarsening strategy is natural to implement using sparse matrix computations, which have low arithmetic intensity and require excessive memory. Coarsening by re-discretization makes the solver more suitable for GPU computation and saves memory, since the stencil is explicitly given. Unfortunately, as we show and analyze in Section <ref>, the multigrid cycle presented in <cit.> shows poor convergence when using the standard second-order stencil for re-discretization. In this work we develop a finite difference discretization for the elastic Helmholtz equation on a staggered grid using compact stencils (i.e., using 9-point or 27-point operators in 2D and 3D, respectively). We show that our discretization is both adequate in terms of accuracy and suitable for an efficient stencil-based geometric multigrid solver.
We use the mixed formulation of the elastic Helmholtz equation, and inspired by previous works <cit.>, suggest a compact discretization scheme. Using local Fourier analysis (LFA), we tune the weights of the stencil such that the multigrid solver converges most efficiently, and we demonstrate that for the same weights, the numerical dispersion is low. We show that our discretization enables the geometric multigrid solver to solve the elastic Helmholtz equation even using as few as 8 grid-points per wavelength, while keeping at least the same efficiency as the standard stencil gives for 10 grid-points per wavelength. The paper is organized as follows: in Section <ref> some background about multigrid methods and discretizations for the elastic Helmholtz equation is presented. In Section <ref> we derive our method, including the discretization scheme and the solution method. In Section <ref> we carry out a two-grid LFA for the system and provide theoretical results, from which the weights of the stencil are determined. We demonstrate the performance of our method in Section <ref>, and briefly conclude in Section <ref>. § MATHEMATICAL BACKGROUND In this section we give a brief mathematical background. In Subsection <ref> we introduce the acoustic and elastic Helmholtz equations, and derive the mixed formulation of the elastic equation. In Subsection <ref> we give some general multigrid preliminaries and introduce the shifted Laplacian multigrid preconditioner, and finally in Subsection <ref> we introduce the MAC discretization scheme and appropriate multigrid components. §.§ The Helmholtz equations The Helmholtz equation models wave propagation in the frequency domain, and it is in fact the Fourier transform of the wave equation. The acoustic version of the Helmholtz equation models acoustics and electromagnetic waves, whereas the elastic version models waves in solid media, such as the earth's subsurface. Let p = p(x⃗), x⃗∈Ω be the Fourier transform of the wave's pressure field. The acoustic Helmholtz equation is given by: ρ(x⃗) ∇·(ρ^-1(x⃗) ∇ p) + κ^2(x⃗) ω^2(1-γ) p = q(x⃗), where ω = 2π f is the angular frequency, κ > 0 is the “slowness” of the wave in the medium (the inverse of the wave velocity), γ represents a physical attenuation parameter and ρ > 0 is the density of the medium. The right-hand side q represents the sources in the system. To solve this equation numerically, we discretize it by a finite differences scheme in a finite domain and equip it with absorbing boundary conditions (ABC) <cit.> or perfectly matched layers (PML) <cit.> that mimic the propagation of a wave in an open domain. Other boundary conditions can also be considered. The elastic version of the Helmholtz equation has several formulations. Here we focus on the equation in an isotropic medium. Let u⃗ = u⃗(x⃗) be the displacement vector. The elastic Helmholtz equation is given by each of the two following formulations: ∇(λ(x⃗) ∇·u⃗) + ∇·(μ(x⃗)(∇u⃗ + ∇u⃗^T)) + ρ(x⃗) ω^2(1-γ) u⃗ = q⃗_s(x⃗) or equivalently[This equivalence holds for homogeneous media. In the heterogeneous case, one may use the latter as a preconditioner for the former formulation.] ∇((λ(x⃗) + μ(x⃗)) ∇·u⃗) + ∇·(μ(x⃗) ∇u⃗) + ρ(x⃗) ω^2(1-γ) u⃗ = q⃗_s(x⃗), where μ and λ are the Lamé parameters, which define the stress-strain relationship in the medium. As before, ρ is the density, ω is the frequency and γ is the attenuation. These parameters determine the pressure and shear wave velocities by V_p = √((λ+2μ)/ρ), and V_s = √(μ/ρ), respectively <cit.>.
The term ∇u⃗ + ∇u⃗^T is the symmetric strain tensor (factored by two). The term ∇·(μ∇u⃗) is the weighted diffusion operator applied on each of the components of the vector u⃗ separately. The Poisson ratio, defined by σ = λ/(2(λ+μ)), gives a good measure for the deformation properties of the material, where most materials have 0<σ<0.5. The nearly incompressible case, where σ→ 0.5 or equivalently λ≫μ, is the most difficult case for this equation as the grad-div term becomes dominant. The mixed formulation <cit.> is achieved by introducing a new pressure variable p = -(λ+μ)∇·u⃗. The second formulation in (<ref>), together with the introduced pressure variable, gives the elastic Helmholtz equation the form: [ ∇·μ∇ + ρω^2(1-γ) -∇; ∇· 1/(λ+μ) Id ][ u⃗; p ] = [ q⃗_s; 0 ]. §.§ Shifted Laplacian multigrid Multigrid methods <cit.> are a family of iterative solvers for linear systems of the form A_h x_h = b_h, where A_h is a discretized version of a given operator on a fine grid. The idea behind multigrid methods is taking advantage of the smoothing property of standard iterative methods, such as damped Jacobi and Gauss-Seidel, to reduce the high-frequency error components, and adding a complementary coarse grid correction process to take care of the low-frequency components. That is, we estimate and correct the error for some iterate x^(k) by solving — exactly — a coarser analogue of the problem. To define a coarse problem, one must translate the errors from the fine grid to the coarse grid and back, using intergrid operators called restriction and prolongation, respectively. Let P be the prolongation and R be the restriction. Let A_2h be the coarse grid operator, which approximates A_h on the coarse grid. Then, the coarse grid correction is given by solving the equation A_2h e_2h = r_2h = P^T(b_h - A_h x^(k)) and then interpolating the solution back to the fine grid: e_h = P e_2h. Algorithm <ref> summarizes this process. In matrix form, the two-grid operator is given by: TG = S^ν_2 K S^ν_1 where S is the smoother's error propagation matrix, ν_1 and ν_2 are the numbers of pre- and post-relaxations and K is the coarse grid correction: K = I - P A_2h^-1 R A_h. By applying Algorithm <ref> recursively with one recursive call, we obtain the multigrid V-cycle, and with two recursive calls we obtain a W-cycle. However, as we use more levels, the ratio of frequency to coarsest grid size grows, and standard cycles tend to diverge. A nice variant is obtained by applying the two recursive calls in the W-cycle as preconditioners for two steps of a simple Krylov method <cit.>. The additional cost includes two extra residual computations and a small number of vector operations on the coarse levels (e.g., inner products), which is not a lot considering the other costs. This variant is called a K-cycle. We note that the coarse grid operator A_2h can be defined either by the Galerkin coarse approximation, as a matrix product A_2h = R A_h P, or by discretization coarse approximation, namely, re-discretizing A (usually using the same stencil) on a coarser mesh. Galerkin coarse approximation is known to be better in terms of the convergence of the multigrid cycle, especially when nearly all the error components are in the range of the prolongation. However, discretization coarse approximation has computational advantages, since it is stencil-based, and can be easily implemented in a matrix-free manner. In this paper we use discretization coarse approximation, since we aim to develop a matrix-free solver.
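As a concrete illustration of Algorithm <ref> and of the two coarse-operator choices just described, the sketch below runs a two-grid cycle on a 1D Poisson model problem with damped Jacobi smoothing, linear interpolation and full weighting. It is a generic toy example, not the elastic Helmholtz solver developed in this paper, and the grid sizes and damping value are arbitrary.

```python
import numpy as np

def lap1d(n):
    """Standard 3-point Laplacian (Dirichlet) on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def interpolation(nc, nf):
    """Linear interpolation from nc coarse to nf = 2*nc + 1 fine interior points."""
    P = np.zeros((nf, nc))
    for j in range(nc):
        i = 2 * j + 1                     # fine index of the j-th coarse point
        P[i, j] = 1.0
        P[i - 1, j] += 0.5
        P[i + 1, j] += 0.5
    return P

def two_grid(A, A2, P, R, b, x, nu1=1, nu2=1, w=2.0/3.0):
    D = np.diag(A)
    for _ in range(nu1):                  # pre-smoothing (damped Jacobi)
        x = x + w * (b - A @ x) / D
    e2 = np.linalg.solve(A2, R @ (b - A @ x))   # coarse grid correction
    x = x + P @ e2
    for _ in range(nu2):                  # post-smoothing
        x = x + w * (b - A @ x) / D
    return x

nc, nf = 31, 63
A, P = lap1d(nf), interpolation(nc, nf)
R = 0.5 * P.T                             # full weighting
A2_galerkin = R @ A @ P                   # Galerkin coarse operator
A2_rediscretized = lap1d(nc)              # coarse operator by re-discretization

b = np.ones(nf)
x = np.zeros(nf)
for _ in range(10):
    x = two_grid(A, A2_rediscretized, P, R, b, x)
print("residual norm after 10 cycles:", np.linalg.norm(b - A @ x))
```

For this particular model problem the two coarse operators happen to coincide; in general they do not, and balancing the convenience of re-discretization against the robustness of Galerkin coarsening is exactly the trade-off discussed above.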
*Shifted Laplacian Standard multigrid methods are not effective for the solution of the acoustic Helmholtz equation (<ref>). The shifted Laplacian multigrid preconditioner suggested in <cit.> for the acoustic Helmholtz equation is based on the solution of an attenuated version of (<ref>) using multigrid. Let H be a matrix defined by a discretization of the Helmholtz operator. We define H_s = H - αω^2 M_s to be the shifted Helmholtz operator, where M_s is some mass matrix, and α>0 is a shifting parameter. Adding a complex shift to the Helmholtz equation — the wave equation in the frequency domain — is equivalent to adding a parabolic term, that is reflected in attenuation, to the wave equation in the time domain. The added shift α is usually much larger than the physical attenuation γ. The shifting is implemented by adding α to the physical attenuation γ. For the shifted version, geometric multigrid methods are efficient, and one can use H_s in (<ref>) as a preconditioner for a discretized Helmholtz linear system (<ref>) inside a suitable Krylov method such as (flexible) GMRES <cit.>. §.§ The MAC scheme for discretization Similarly to (<ref>), the elastic Helmholtz equation (<ref>) is usually discretized using finite-differences on a regular grid. As in any system of equations, one can consider nodal discretization, in which all the variables are located in the nodes, or staggered discretization, in which every variable has a different location. The staggering is used to enhance numerical stability. For many saddle-point systems, nodal grid discretization is known to cause checkerboard instability <cit.>. The MAC scheme is a common approach for a staggered grid based finite differences discretization. As depicted in Figure <ref>, the displacement components are located on the faces, and the pressure is located in the center of the cells. Originally, this scheme was developed for fluid flow problems <cit.>. It is also common for the linear elasticity equation, e.g., <cit.>, and was used for the elastic Helmholtz equation in <cit.>. To complete the discretization process, the MAC scheme should be accompanied by stencils for each component of the equation. Choosing these stencils is the core of this work, as explained later. When using a multigrid approach to solve a problem discretized by the MAC scheme, we should define special intergrid operators as well as smoothers <cit.>. Here, we elaborate only on the choices that are relevant to our work, and we do so in two dimensions. The extension to three dimensions is straightforward. For the pressure p, we take a low-order restriction and a higher-order bilinear interpolation: R_p= 1/4[ 1 1; * ; 1 1 ] and P_p = 1/16 ] 1 3 3 1; 3 9 9 3; * ; 3 9 9 3; 1 3 3 1 [ where [ ] represents an operator defined by a stencil, and ] [ represents the transpose of an operator defined by a stencil. The asterisk represents the center of the stencil. Note that the pressure is cell-centered and hence the center of the stencil (where the coarse pressure is located) is not one of the sampling points. Each of the displacement components, u_1 and u_2, is located on edges in one direction and is cell-centered in the other direction (see Fig. <ref>). We choose full-weighting for the first direction and, similarly to the pressure, low-order restriction and bilinear interpolation for the other direction.
For the vertical component u_2, it reads R_u_2 = 1/8[ 1 1; 2 * 2; 1 1 ] and P_u_2 = 1/8 ] 1 3 3 1; 2 6 * 6 2; 1 3 3 1 [, and the corresponding similar operators are used for u_1 as well, denoted by P_u_1 and R_u_1, respectively. Finally, the restriction and prolongation operators for the whole system are defined as P=blockDiag(P_u_1,P_u_2,P_p) and R=blockDiag(R_u_1,R_u_2,R_p), where blockDiag forms a large block diagonal operator given the individual operators for its blocks. As a smoother, we use the Vanka box-smoother <cit.>, which was originally used for fluid flow, and later adapted as a smoother for the linear elasticity equation <cit.>. In this relaxation method, a whole cell is relaxed simultaneously, see Figure <ref>. Namely, in each relaxation step we invert the 5× 5 submatrix of the fine grid operator (7× 7 in 3D) that involves the DOFs of the same cell, or an approximation of this submatrix. We refer to the version where we approximate the displacement block of the submatrix by its diagonal only as economic Vanka. This version is used for the LFA predictions and comparative results in Subsections <ref> and <ref>. For the results in Subsections <ref> and <ref> we use the full Vanka smoother, where the whole submatrix is inverted. In both full and economic Vanka, a damping parameter is used to improve the smoothing. § METHOD In this section we derive our method. In <ref>, we first describe a discretization method for the elastic Helmholtz equation, followed by a sketch of our multigrid cycle in <ref>. §.§ Discretization We observe that the main block in the mixed formulation (<ref>) is a block-diagonal operator with acoustic Helmholtz operators on its diagonal. Keeping in mind this observation, we discretize the acoustic blocks using an existing high-order stencil. Then, to obtain a discretization of (<ref>), we seek appropriate gradient and divergence discretizations. Our inspiration is the following stencil for the acoustic Helmholtz equation, suggested in <cit.> as a compact fourth-order discretization for (<ref>) with constant coefficients: H^HO = 1/h^2[ -1/6 -2/3 -1/6; -2/3 10/3 -2/3; -1/6 -2/3 -1/6 ] - κ^2 ω^2 (1-γ) [ 1/12 ; 1/12 2/3 1/12; 1/12 ]. This stencil was further validated in <cit.>, in the framework of shifted Laplacian multigrid for the acoustic Helmholtz equation. We observe that the stencil (<ref>) can be seen as H^β = -Δ_h^β -κ^2 M^β where Δ^β is a convex combination of the standard and skew Laplacian: -Δ_h^β = β1/h^2[ -1 ; -1 4 -1; -1 ] + (1-β) 1/2h^2[ -1 -1; 4 ; -1 -1 ] and M^β is a convex combination of the identity and spread mass matrices: M^β= ω^2 (1-γ) ( β[ 1 ] +(1-β)1/4[ 1 ; 1 1; 1 ]), with β=2/3 in both combinations. Our idea is to develop a parametrized discretization for the elastic Helmholtz equation in a similar manner, and tune the parameter β such that the multigrid cycle will converge well. When discretizing any of the formulations (<ref>), (<ref>) or (<ref>) of the elastic Helmholtz equation, discretizations of the gradient and divergence that yield the Laplacian stencil in (<ref>) by Δ = ∇·∇ are required[In fact, it is required for the acoustic Helmholtz equation (<ref>) as well, when considering non-constant coefficients.]. As depicted in Figure <ref>, a compact 9-point stencil for the Laplacian can be achieved by a spread divergence based on 6-point stencils for the first derivatives, and a gradient comprised of standard 2-point stencils for the first derivatives (or vice versa).
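Before the first-derivative stencils are written out explicitly below, it is easy to verify numerically that the two convex combinations above, evaluated at β=2/3, reproduce the Laplacian and mass parts of the fourth-order stencil (<ref>). A minimal check (with h=1):

```python
import numpy as np

beta, h = 2.0 / 3.0, 1.0

standard = np.array([[ 0, -1,  0],
                     [-1,  4, -1],
                     [ 0, -1,  0]]) / h**2
skew     = np.array([[-1,  0, -1],
                     [ 0,  4,  0],
                     [-1,  0, -1]]) / (2 * h**2)
neg_lap_beta = beta * standard + (1 - beta) * skew

mass_beta = beta * np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) \
          + (1 - beta) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / 4

# Laplacian and mass parts of the fourth-order compact acoustic stencil.
lap_ho  = np.array([[-1/6, -2/3, -1/6],
                    [-2/3, 10/3, -2/3],
                    [-1/6, -2/3, -1/6]]) / h**2
mass_ho = np.array([[0, 1/12, 0], [1/12, 2/3, 1/12], [0, 1/12, 0]])

print(np.allclose(neg_lap_beta, lap_ho), np.allclose(mass_beta, mass_ho))  # True True
```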
Particularly, we discretize the horizontal first derivative as (∂_x_1)_h/2 = 1/h[ -1 * 1 ] and the β-spread version as: (∂_x_1)^β_h/2 = 1/h( β[ -1 * 1 ] + (1-β)·1/4[ -1 1; -2 * 2; -1 1 ]), with similar (rotated) stencils for the vertical first derivatives (∂_x_2)_h/2 and (∂_x_2)_h/2^β. The standard and β-spread gradients are given by: ∇_h = [ (∂_x_1)_h/2; (∂_x_2)_h/2 ], ∇_h^β = [ (∂_x_1)^β_h/2; (∂_x_2)^β_h/2 ]. The resulting Laplacian is ∇_h^T ∇_h^β = -Δ_h^β. For β=2/3, it is the Laplacian term in (<ref>). Furthermore, the standard and β-spread divergences are given by (∇·)_h = [ (∂_x_1)_h/2 (∂_x_2)_h/2 ] , (∇·)_h^β = [ (∂_x_1)^β_h/2 (∂_x_2)^β_h/2 ]. Finally, we get the following discretized form of (<ref>): ℋ^β[ u⃗; p ] = [ ∇_h^T A_e(μ) ∇_h^β - M^β A_f(ρ) (∇·)_h^T; (∇·)_h^β diag(1/(λ+μ)) ][ u⃗; p ] = [ q⃗_s; 0 ], where A_e(μ) is a diagonal matrix that averages the cell-centered μ on the edge centers, A_f(ρ) averages the cell-centered ρ on the face centers and diag(·) creates a diagonal matrix with cell-centered values on its diagonal. The shifted version ℋ^β_s is defined by adding α to the physical attenuation γ. For the acoustic Helmholtz equation, the choice β=2/3 in (<ref>) and (<ref>) is optimal, in terms of the order of the discretization, as shown in <cit.>. In Section <ref> we show, using LFA, that β=2/3 is optimal for the discretization (<ref>) of the elastic Helmholtz equation in mixed formulation as well, in terms of two-grid convergence. §.§ The multigrid cycle Equipped with the discretization given in Section <ref>, we now complete the description of our method by outlining the multigrid components that we use. As a relaxation method, we use the cell-wise Vanka smoother described in subsection <ref> and apply one pre- and one post-smoothing. For the ease of the LFA derivations in the next section, as well as for the related comparative results, we use economic Vanka in a lexicographic order. For the rest of the numerical results, we use the full Vanka smoother in red-black order to allow parallelism. As intergrid operators, we use R and P from Section <ref>. Note that the restriction R is not a transpose of the prolongation up to a factor. This choice is made as a compromise between high-order intergrid operators, which enable a good approximation of the fine error on the coarse grid, and lower-order intergrid operators, which use smaller stencils. Finally, we define the coarse grid operator by re-discretizing the elastic Helmholtz operator using the discretization (<ref>), only on a coarser grid. We use W-cycles on the shifted version of (<ref>) as a right preconditioner for the original equation inside the restarted FGMRES method. We use two, three or four levels in the multigrid hierarchy, and as we use more levels, a higher shift is required. *Coarsest grid solution The choice of the coarsest grid solver is not trivial. It can be obtained using an LU decomposition, which is our choice for 2D problems. However, this option is not practical in 3D due to high memory consumption. Another option that can be used is the domain decomposition approach in <cit.>. This approach significantly helps memory-wise, but still includes the local LU decompositions, which are applied sequentially using forward and backward substitution and hence hinder parallelism on many-core devices like GPUs. Instead, in this work we use a parallel hybrid Kaczmarz preconditioner with GMRES, which does not require any special setup (both time and memory-wise), and can be applied in parallel.
That is, we divide the domain into several sub-regions, and apply a few Kaczmarz relaxations (for each sub-region in parallel) as a preconditioner to GMRES, similarly to the approach in <cit.>. To apply it, we use the algebraic Schur complement, eliminating the p variable (by inverting the diagonal p-block), and essentially revert to the original formulation (<ref>). This is done to reduce the number of DOFs and eliminate the need for cell-wise operations like Vanka relaxations. Note that in the case of the full Vanka relaxation, we invert the full 5× 5 or 7× 7 submatrix of each cell, for 2D or 3D, respectively. However, by the structure of (<ref>), the main block that corresponds to the displacement variables is block-diagonal (containing two or three 2×2 sub-matrices). Thus, the submatrix that corresponds to each cell can be inverted with fewer operations and less memory if this structure is exploited, similarly to the way the diagonal approximation is exploited in the economic Vanka variant. Utilizing the block structure, in 3D we require 25 floating-point numbers to store the inverted submatrix instead of 49 in the standard way. For comparison, we require 19 for the economic Vanka smoother. § LOCAL FOURIER ANALYSIS Local Fourier analysis (LFA) is a predictive tool for the convergence of multigrid cycles <cit.>. The simplest form of LFA is smoothing analysis: determining the smoothing properties of the relaxation method. Under the assumption that the coarse grid correction is ideal (a projection on the high frequencies), smoothing analysis suffices to predict the convergence rate of the two-grid cycle as a whole. However, sometimes the problem lies in the coarse grid correction itself, which is the case here: as shown in Subsection <ref>, despite the good smoothing, the standard stencil with β=1 shows poor convergence in practice. Hence, two-grid analysis gives a better prediction here. §.§ LFA preliminaries The definition of a smoothing factor for a system of equations (see, e.g., <cit.>, Chapter 8) is given below. Let S be the error propagation matrix of the smoother, and let S̃(θ) be its matrix of symbols, where θ = [ θ_1 θ_2 ]^T∈[-π/2,3π/2]^2. Then μ_loc≔sup_θ∈ T^highρ(S̃(θ)) where T^high=[-π/2,3π/2]^2 ∖[-π/2,π/2]^2. For the Vanka smoother particularly, and for overlapping smoothers generally, the calculation of the error propagation matrix requires special approaches <cit.>. For the smoothing analysis, we did computations similar to our previous work <cit.>, only with more non-zero elements in every stencil. The definition of the two-grid factor is given below: Let TG(θ) be the matrix of symbols of the two-grid operator (<ref>). Then ρ_loc≔sup_θ∈ T^lowρ(TG(θ)) where T^low=[-π/2,π/2]^2. Assuming a perfect coarse grid correction, the amplification factor of the smoother on high frequencies resembles the cycle as a whole. Namely, for an ideal coarse grid correction, μ_loc^ν = ρ_loc, where ν=ν_1+ν_2 is the total number of relaxation steps. Smoothing analysis works under the assumption that the Fourier modes are eigenfunctions of the smoother. This approximately holds locally when neglecting the effect of boundary conditions. However, the Fourier modes are not eigenfunctions of the two-grid operator (not even locally) since for any low frequency θ∈ T^low, there are three high frequencies that alias to θ.
This gives rise to the following definition <cit.>: The 4-dimensional space of harmonics for θ is E(θ) = span{θ, θ', θ”, θ”' } where θ'=[ θ_1; θ_2 ] + [ π; π ] , θ” = [ θ_1; θ_2 ] + [ π; 0 ] and θ”' = [ θ_1; θ_2 ] + [ 0; π ] . The symbol matrix of the two-grid cycle is: TG(θ) = S̃(θ)^ν_2 (I-P̃(θ) Ã_2h^-1(θ) R̃(θ) Ã_h(θ)) S̃(θ)^ν_1 where the dimensions of each symbol matrix are determined by the space of harmonics. For the acoustic Helmholtz equation, a detailed description is given in <cit.>. As mentioned there, once we calculate the symbol of the smoother as a function of θ, we need to evaluate it in these four basis elements of the space of harmonics, and place it on a diagonal a 4× 4 matrix of symbols. In a similar manner, the symbol matrix of the fine grid operator is calculated. The restriction and prolongation symbol matrices are 1× 4 and 4 × 1 respectively, and the symbol of the coarse operator is a scalar. For our system of equations, the dimensions of the symbol matrices must be larger in correspondence to the number of variables. In 2D, we repeat this process for the variables u_1,u_2,p and get a 12× 12 matrix for the symbol of the two-grid operator. The fine operator's matrix of symbols, as well as the smoother's, is 12× 12, the matrix of symbols for the restriction and prolongation are 3× 12 and 12 × 3 respectively, and the symbol of the coarse grid is a 3× 3 matrix. §.§ Derivation of the two-grid symbol In this subsection we derive two-grid LFA for the discretization (<ref>) in 2D. As LFA is local, we can only predict convergence for the case of constant coefficients, for which (<ref>) takes the form: [ -Δ^β_h - M^β (∂_x_1)_h/2; -Δ^β_h - M^β (∂_x_2)_h/2; (∂_x_1)_h/2^β (∂_x_2)_h/2^β 1/+I ]. In order to calculate the symbol TG(θ) of the two-grid operator (<ref>), we calculate the symbol matrix of the smoother and the coarse grid correction. The Vanka smoother is an overlapping smoother, as some of the DOFs are corrected twice in each swap. The calculation of the symbol for overlapping smoothers is very lengthy, see <cit.>. In our previous work <cit.>, we gave the analysis for the case of standard stencils (5-point stencil for the acoustic Helmholtz block and 2-point stencils for the first derivatives). The generalization for a 9-point stencil for the acoustic block and 6-point stencil for the gradient's first derivatives is straightforward, and we omit the gory details. We denote by S_h(θ) the calculated scalar symbol of the Vanka smoother, and construct the smoothing matrices as following: S = Diag(S(θ), S(θ'), S(θ”), S(θ”')). For the coarse grid correction, we give a more detailed analysis of the symbol. To construct the symbol matrices of the restriction and the prolongation, we first compute the scalar symbol of each of its components. The scalar symbol of each of R_u_1,R_u_2,R_p,P_u_1,P_u_2 and P_p (whose stencils are given in (<ref>) and (<ref>)) is calculated directly from the stencil. For instance, R_u_1 (θ) = e^(θ_2/2-θ_1) + e^( θ_2/2+θ_1 ) + e^( -θ_2/2+θ_1 ) + e^( -θ_2/2-θ_1 ) + 2e^θ_2/2 2e^- θ_2/2 = 2cos(θ_1 +θ_2 /2) + 2cos(θ_1 - θ_2 / 2) + 4cos(θ_2 / 2). Applying it on each of the four harmonics gives, for the restriction, the vector R_u_1 = [ R_u_1(θ) R_u_1(θ') R_u_1 (θ”) R_u_1 (θ”') ]. For the prolongation, we construct a similar vector with transposed dimensions: P_u_1 = [ P_u_1(θ) P_u_1(θ') P_u_1 (θ”) P_u_1 (θ”') ]^T. 
In the same way, R_u_2, R_p, P_u_2, P_p are calculated, and finally, the symbol matrices of the intergrid operators are given by R = blockDiag(R_u_1, R_u_2, R_u_p) and P = blockDiag(P_u_1, P_u_2, P_u_p). The symbol matrix of the coarse grid operator, ℋ_H, is a 3× 3 matrix comprised of the symbols of each block in (<ref>). The symbol matrix of the fine grid operator ℋ_h, is a 12× 12 matrix, comprised of 4× 4 diagonal blocks: each of the scalar symbols of the blocks in (<ref>) is evaluated on each of the four harmonics from Definition <ref>, to form the diagonal blocks. Finally, the symbol matrix of the two-grid cycle is calculated by (<ref>). §.§ Tuning the stencil by LFA In this subsection we tune the discretization suggested in (<ref>), namely, determine the optimal β in (<ref>) using two-grid LFA. We do it by numerically searching for a value that minimizes ρ_loc from Definition <ref>. Our default parameters are density ρ=1 and Lamé coefficients λ=500, μ=1[The Lamé parameters λ and μ has units of pressure. However, their dimensionless ratio uniquely determines the Poisson ratio σ, so for simplicity, we omit their units throughout the numerical results.]. For this experiment we take a grid size of h=1/512, frequency ω=320 that corresponds to 10 grid-point per wavelength, and shift α=0.1. To choose the damping parameter w, we use smoothing analysis, since we observe that the optimal damping for the two-grid cycle has a strong dependence on the choice of the default parameters. We estimate the smoothing factor μ_loc from definition <ref> by sampling the Vanka smoother symbol for θ∈[-π/2,3π/2] with jumps of 0.01 in each component, and then taking the maximum over θ∈ T^high. Figure <ref> shows that the smoothing factor μ_loc is approximately minimized when w=0.75, for β=1, and when w=0.65, for β=2/3 (in fact, it is approximately minimized for all values of β when w=0.7). The attenuation has almost no effect on the smoothing factor <cit.>, and the smoothing factors are μ_loc^opt=0.55 for β=2/3 and μ_loc^opt=0.59 for β=1. Consequently, smoothing analysis is not delicate enough to distinguish between different choices of β, and two-grid analysis is necessary to tune our discretization. Figure <ref> shows the tuning of the stencil. We estimate the two-grid factor ρ_loc with 1 pre- and one post smoothing, by sampling the two-grid symbol over T^low in jumps of 0.01 and then taking the maximum. With the same default parameters and damping of w=0.7, we consider values of 0.5<β<1 with jumps of 0.001. We observe numerically that the minimal two-grid factor ρ_loc = 0.38 is achieved when β_opt=0.667≈ 2/3. Next, we investigate the influence of the shift and the number of grid points per wavelength on the convergence. Figure <ref> shows the two-grid factor as a function of the shift α, for the above-mentioned default parameters. We observe that for a frequency that correspond to 10 grid-points per wavelength, when β=2/3, an attenuation of α_min=0.03 suffices for convergence, whereas for β=1, a larger shift of α=0.12 is required. In practice, when the shift is smaller, the preconditioner is closer to the original system, which may improve the GMRES performance. Figure <ref> shows ρ_loc as a function of ω, the angular frequency[The jumps in the graph occur since for some values of ω, the operator has a zero eigenvalue.]. When β=2/3 and α=0.1, the two-grid cycle still converges for about ω_max=500, which corresponds to about 6.5 grid-points per wavelength. 
For comparison, using the same shift, the standard discretization requires 11 grid points per wavelength to converge. In the next section we compare the discretization (<ref>) with β=1 (namely, the standard 5-point stencil) to the one with β=2/3. § NUMERICAL RESULTS In this section we show results for 2D and 3D problems. Some of our examples are based on geophysical models, in which the domain is long (about 20km) compared to its depth (about 5km). We take the right-hand side to be a point source located at the middle of the top row (or, in 3D, the top surface) of the domain. In all the examples (except, of course, the LFA), we use an absorbing boundary layer of 20 cells, and assume that the unpreconditioned equation (<ref>) has a small physical attenuation of γ = 0.01. In Subsection <ref> we use a two-level V(1,1)-cycle, with a Vanka smoother in lexicographic order, to compare the actual convergence rate of the cycle with the LFA predictions. In Subsections <ref> and <ref>, we use the GMRES(5) Krylov solver <cit.>, preconditioned by W(1,1)-cycles solving the shifted version with Vanka cell-wise relaxation applied in red-black order. We seek a solution with a relative residual accuracy of 10^-6, starting from a zero initial guess. We write our code in the Julia language <cit.>, and include it as a part of the jInv.jl package <cit.>. This package enables the use of our code as a forward solver for three-dimensional elastic full waveform inversion in the frequency domain. We run the tests on a workstation with two Intel Xeon Gold 5117 2GHz CPUs (14 cores per socket) and 256 GB RAM, running the CentOS 7 Linux distribution. Demonstration of Numerical Dispersion First, we demonstrate the effect of our stencil on the numerical dispersion. For the demonstration, we solve the 2D homogeneous elastic Helmholtz equation in mixed formulation by a direct inversion of the matrix. We solve the equation on the dimensionless domain [0,2]×[0,1] with frequency ω=50π, density ρ = 16 and Lamé parameters λ=16 and μ=1. We use an absorbing boundary layer of 20 grid points. Figure <ref> depicts a vertical section of the real part of the pressure component of the solution, starting from the middle of the top row, where the point source is located. We compare the solution for 24 grid points per wavelength, which we refer to as the “real” solution, obtained on a grid of size 1024×512, with the solutions for 12 and 6 grid points per wavelength, referred to as the fine and coarse grid solutions, obtained on grids of sizes 512×256 and 256×128, respectively. Figure <ref> demonstrates that when discretizing the equation with the standard discretization, the fine grid operator represents a significantly dispersed wave compared to the real solution, and the coarse grid solution is not only dispersed relative to the real solution, but also fails to resemble the fine grid solution. Figure <ref> shows that the numerical dispersion is much lower when using our stencil. Although our choice of β is optimal in terms of two-grid analysis, determining its optimality in terms of numerical dispersion is beyond the scope of this work. §.§ LFA predictions vs. multigrid performance In this subsection we show the expected multigrid performance of our discretization in the 2D homogeneous case (<ref>) compared to the actual multigrid performance.
We use the smoothing factor μ_loc from Definition <ref> to predict the convergence rate of the multigrid cycle assuming an ideal coarse grid correction. More explicitly, for a cycle with a total number of ν pre- and post-smoothing steps, we use μ_loc^ν as a measure for the best possible convergence rate that we can hope for in our problem. In our two-grid LFA, we use the two-grid factor ρ_loc=ρ_loc(ν), from Definition <ref>, to predict the convergence of the two-grid cycle. Finally, we compare the results to the convergence factor in practice, defined as c_f^(k) = (r_k/r_0)^(1/k), where r_0 is the residual in the error-residual equation of the two-grid operator (<ref>) after a warm-up of 5 iterations, and r_k is the residual after k more iterations. We take k to be the smallest number of iterations such that r_k<10^-9. In Table <ref> we compare the values of ρ_loc and c_f for different values of the frequency ω and shift α. We set the rest of the parameters to the default values chosen in Section <ref>. To avoid a significant influence of the boundaries, we take a larger grid, with h=1/1024. As a reference value for the best-case scenario, we use μ_loc^2, which corresponds to a two-grid cycle with one pre- and one post-smoothing step, assuming an ideal coarse grid correction. §.§ 2D experiments In this subsection we demonstrate our approach, providing numerical results for several 2D models. In all experiments, we solve (<ref>) with a physical attenuation of γ = 0.01 using FGMRES and use a W(1,1)-cycle for the shifted version as a preconditioner, with varying choices of the shift α, depending on the setup (e.g., number of levels). We compare our discretization, namely (<ref>) with β=2/3, with the standard discretization with β=1, in terms of the iteration count needed to reach the convergence criterion. We denote the number of grid points per shear wavelength by G_s. In all the experiments we use G_s=10 for the standard discretization, and compare it with G_s=10 and G_s=8 using our discretization with β=2/3. For G_s=10, we choose damping parameters of w=0.55,0.35,0.25 for 2-, 3-, and 4-level cycles, respectively. For G_s=8, we take w=0.5,0.35,0.25. These values were chosen based on the best performance for our red-black relaxation. In the first experiment we use a homogeneous media model: we solve (<ref>) with constant coefficients λ = 20, μ=ρ=1. We examine different grid sizes. In the case of two-grid cycles, we observe that our method converges significantly better with β=2/3 than with β=1, taking 10 grid points per wavelength, even when using a lower shift. Moreover, the performance of our stencil with only 8 grid points per wavelength is comparable to the performance of the standard stencil with 10 grid points per wavelength. The results are summarized in Table <ref>. In the second experiment, we apply a similar comparison on the following linear media model of size 4× 1: we take a density ρ that varies linearly in the range [2,3], and Lamé parameters that vary linearly in the ranges 4≤λ≤ 20 and 1≤μ≤ 15. The results, summarized in Table <ref>, are very similar to the results in Table <ref>. Next, we give results of a similar comparison for the Marmousi-2 model <cit.>. This geophysical 2D model is considered a case study for real 3D ground models. Since the model is very shallow (only 3 km deep), we add an extension of 0.5 km at the bottom to accommodate the absorbing boundary layer.
We observe in Table <ref> that for two-grid cycles, the performance of our method with β=2/3 and G_s=8 is comparable to that of β=1 with G_s=10, except for the 4-level method on the largest grid. §.§ 3D experiments In this subsection we give results for applying our system in 3D on the Overthrust velocity model described in Figure <ref>. This is originally an acoustic model and includes only the pressure wave velocity, so we define V_s = 0.5V_p and ρ = 0.25V_p + 1.5. To implement the absorbing boundary layer, we add 16 grid points at the bottom of the domain. Before presenting the results, we briefly describe the generalization of our discretization (<ref>) to the 3D case. The generalization is not straightforward, as there are various options for how to spread a derivative in one direction, say, x_1, to the other two directions, x_2,x_3. The technique we chose here seems to replicate the behavior of the spread stencil in 2D, but further investigation is still needed via a dedicated LFA and is beyond the scope of this paper. We define the β-spread stencil for the first derivatives, for instance in the x_1 direction, as: (∂_x_1)^β_h/2 = β[ -1 * 1 ] +(1-β) 1/16[ [ -1 1; -2 2; -1 1 ] [ -2 2; -4 * 4; -2 2 ] [ -1 1; -2 2; -1 1 ] ]. That is, we spread the x_1 derivative over both other directions x_2 and x_3. The derivatives in those directions are defined by stencils which are rotated versions of this stencil. Similarly to (<ref>), the spread mass is defined by the stencil M^β= ω^2 (1-γ) ( β[ 1 ] +(1-β)1/6[ [ ; 1 ; ] [ 1 ; 1 1; 1 ] [ ; 1 ; ] ]). The rest of the generalization is straightforward, where the main block in (<ref>) is a block diagonal matrix with three acoustic Helmholtz operators on its diagonal (rather than two). Since this is a system of equations, the linear systems are huge even for moderate mesh sizes, requiring a lot of memory. Our implementation uses mixed precision, where the top-level FGMRES method is applied using double precision, and the multigrid preconditioner is applied in single precision. Furthermore, the coarsest grid solution, even using 3 or 4 levels, is a challenging task. Here, we use the hybrid Kaczmarz iterative method described in Subsection <ref>, which we apply until a residual drop of 0.1 is reached. More specifically, we use FGMRES(5), where each preconditioning step is applied using 10 parallel hybrid Kaczmarz steps with a damping of 0.8 and 8 cores. We limit the coarsest grid solve to at most 250 Kaczmarz iterations. Better handling of this coarsest grid solution is part of our future research. Here we use K-cycles to accelerate the solution and make the most of each inexact coarse grid solve. Table <ref> shows the 3D results. Our method, even down to 8 grid points per wavelength, performs significantly better than the standard discretization with 10 grid points per wavelength. § CONCLUSIONS We introduced a novel discretization and a matrix-free multigrid method for the elastic Helmholtz equation in mixed formulation. We showed that our discretization, whose weights are tuned by two-grid LFA, is more suitable for shifted Laplacian geometric multigrid, yielding better performance compared to the standard finite-difference discretization on a MAC staggered grid. We demonstrated that our discretization reduces numerical dispersion, especially on coarse grids.
Our LFA results show that for 10 grid points per wavelength, the optimal weights in terms of multigrid convergence for the elastic Helmholtz equation in mixed formulation coincide with the optimal weights of the acoustic version in terms of discretization error. We showed, numerically and theoretically, that our discretization allows the use of 8 grid points per wavelength, yielding at least the same performance as the standard stencil does for 10 grid points per wavelength. The stencil-based coarsening within the multigrid cycle makes our method easy to implement in a matrix-free form, and hence suitable for parallel CPU and GPU computations.
http://arxiv.org/abs/2307.02443v1
20230705171300
An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code
[ "Max Hort", "Anastasiia Grishina", "Leon Moonen" ]
cs.SE
[ "cs.SE", "cs.AI", "cs.CL", "cs.LG", "cs.NE" ]
An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code Max Hort Simula Research Laboratory Oslo, Norway maxh@simula.no Anastasiia Grishina Simula Research Laboratory & University of Oslo Oslo, Norway anastasiia@simula.no Leon Moonen Simula Research Laboratory & BI Norwegian Business School Oslo, Norway leon.moonen@computer.org August 1, 2023 ===================================================================================================================================================================================================================================================================================================== Context: Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair. Large amounts of data for training such models benefit the models' performance. However, the size of the data and models results in long training times and high energy consumption. While publishing source code allows for replicability, users need to repeat the expensive training process if models are not shared. Goals: The main goal of the study is to investigate if publications that trained language models for software engineering (SE) tasks share source code and trained artifacts. The second goal is to analyze the transparency on training energy usage. Methods: We perform a snowballing-based literature search to find publications on language models for source code, and analyze their reusability from a sustainability standpoint. Results: From a total of unique publications, we identified relevant publications that use language models to address code-related tasks. Among them, 27% (out of ) make artifacts available for reuse. This can be in the form of tools or IDE plugins designed for specific tasks or task-agnostic models that can be fine-tuned for a variety of downstream tasks. Moreover, we collect insights on the hardware used for model training, as well as training time, which together determine the energy consumption of the development process. Conclusion: We find that there are deficiencies in the sharing of information and artifacts for current studies on source code models for software engineering tasks, with 40% of the surveyed papers not sharing source code or trained artifacts. We recommend the sharing of source code as well as trained artifacts, to enable sustainable reproducibility. Moreover, comprehensive information on training times and hardware configurations should be shared for transparency on a model's carbon footprint. sustainability, reuse, replication, energy, DL4SE. § INTRODUCTION The FAIR data principles are designed to support and enhance the reusability of digital research objects following four guiding principles: to be findable, accessible, interoperable, and reusable <cit.>. While the initial focus of FAIR was on scientific data, the principles have been transferred to research software <cit.>. Publishing source code supports the replicability of software but may incur repeated training costs, if a software product is data-driven. Training costs can be especially high for tools that are trained on large amounts of data, such as Machine Learning (ML) models, which have achieved state-of-the-art performance in various disciplines (e.g., text and image understanding, video content prediction <cit.>). 
In particular, Deep Learning (DL) often achieves performance improvements by increasing the amount of training data and the size of the model, leading to long training times and substantial energy consumption <cit.>, with an increase in computational costs for state-of-the-art models by a factor of 300 000 between 2012 and 2018 <cit.>. This trend not only raises barriers for researchers with limited computational resources <cit.>, it is also harmful to the environment <cit.>. One class of DL models that benefit from training on large amounts of data are Large Language Models (LLMs). LLMs have been able to learn semantic information via training on texts from the Internet and achieve high performance on Natural Language Processing (NLP) tasks <cit.>. Similarly, by training language models on a large corpus of source code (e.g., as provided by GitHub[ <www.github.com>]), one can learn semantic information of source code <cit.> and apply the models on SE tasks, such as code generation, bug prediction and fixing, to alleviate developers from tedious work <cit.>. This research area is referred to as DL4SE, and the models are referred to as Source Code Models (SCMs). Training an SCM can take more than 100 days and incur high costs from hardware and energy requirements <cit.>. From an energy usage point of view, only sharing the source code to train the model is wasteful, because replication or reuse requires repeating the expensive and energy-consuming training process. Instead, trained models should be considered digital artifacts that must be shared to lower the bar for building on existing work <cit.>. For instance, fine-tuning an existing task-agnostic model requires only a fraction of the computational costs of training such a model from scratch <cit.>. Despite the benefits of sharing the trained models and source code, a large number of studies in DL, including many in DL4SE, do not make code or models publicly available. Liu et al. <cit.> surveyed deep learning studies in SE conferences and journals. They found that 74.2% of the studies did not share the source code and data for replication. Failing to share the data or trained artifacts contradicts the software sustainability-quality characteristics <cit.>. Software sustainability is defined from economic, environmental, social and technical dimensions in that software should generate economic value, enable equal and sustained access to social resources, minimize harm to the environment, and ensure technical improvements and maintainability <cit.>. In this study, we focus on the technical and environmental aspects of sustainability in software, namely reusability and efficiency <cit.>. To investigate the reusability and resource efficiency of source code models, we perform an exploratory literature search of existing DL4SE publications. For each publication, we investigate whether code and trained models are available, and what the training and energy requirements are. In other words, we focus on the following two research questions: * How many DL4SE publications share source code and/or trained models or related trained artifacts? * How much energy was used to train these models? The contributions of this paper include: * We conduct an exploratory study on the sustainability and reusability of (large) language models for source code. 
We analyze to what extent publications make trained artifacts available, so that software developers and researchers can reuse and profit from large models trained with high energy consumption without incurring such training costs themselves. * We investigate the information provided in publications with shared artifacts; * We estimate the energy needed for training models from 30 publications that provided sufficient information; * We summarize the lessons learned while studying the academic literature with this focus on sustainability; * We provide recommendations to help researchers make their models more sustainable and support clearly communicating the relevant aspects in their publications. The remainder of the paper is organized as follows: Section <ref> presents related work on sustainable software engineering and energy consumption of ML models, in particular LLMs. Section <ref> describes the literature search procedure. The results of the search are presented in Section <ref> (task-specific models) and Section <ref> (task-agnostic models), and discussed in Section <ref>. Section <ref> discusses the threats to validity, followed by an overview of lessons learned (Section <ref>) and recommendations (Section <ref>). We conclude in Section <ref>. § RELATED WORK §.§ Sustainable Software Engineering Sustainable software engineering addresses sustainability in two regards: (1) creating software that empowers sustainable applications and (2) creating software in a sustainable resource-efficient way. The former is referred to as Information Technology for Green (IT for Green) <cit.> or sustainability BY software <cit.>. The latter is called Green IT <cit.> or sustainability IN software <cit.>. Sustainable software aims at reducing the negative environmental impact of software by improving energy efficiency during software usage <cit.>, or at maximizing positive effects of applying software products <cit.>. In this study, we focus on sustainability IN software and use the term sustainable software or sustainable software engineering to define reusable shared software that is built with resource usage considerations in mind. Development of sustainable software can be supported by integrating sustainability goals in the development process <cit.>. One way to improve the sustainability of software is to optimize its performance by refactoring the source code, which can have positive impacts on the accompanying energy consumption <cit.>. For example, Verdecchia et al. <cit.> showed that refactoring code smells in Java applications can reduce energy consumption by almost 50%.
In addition to observing the energy consumed by applying software and potential positive effects by providing sustainable solutions, the energy consumed during the development process is of relevance as well, as pointed out by the GREENSOFT model <cit.>. The GREENSOFT model presents a life cycle for software products. Accordingly, a green software product should be sustainable during the course of the life cycle, including the software engineering process and the tasks developers address during implementation and maintenance. To alleviate their workload, they can use tools to automate and support software engineering tasks. In this regard, Martinez et al. <cit.> addressed the field of green software research by measuring energy consumption induced by development and maintenance activities, in particular Automated Program Repair (APR). APR is used to fix software bugs, which usually incur a high monetary cost to resolve, without requiring manual intervention of developers. While APR tools tend to report their performance in terms of number of bugs they are able to fix, Martinez et al. <cit.> considered their energy consumption as an additional quality measure. To evaluate the trade-off between accuracy and energy consumption of APR tools, they computed the energy cost for each point of accuracy (i.e., energy consumption divided by accuracy). §.§ Energy Consumption of Machine Learning Models The energy consumption of training and developing ML models is becoming a growing concern <cit.>, with models requiring large amounts of computational resources to train, causing financial costs and emissions <cit.>. Recently, implementation challenges and leaderboards have been introduced to incentivize the development of energy efficient models <cit.>. Another proposition is to measure the performance of ML models not only with regard to accuracy, but also to consider energy consumption and trade-offs between the two metrics. To account for the sustainability–accuracy trade-off, Gutiérrez et al. <cit.> analyzed the impact of changing solvers for ML models. Having applied the models to credit card fraud data, they found configurations that required 2.9x more energy while improving accuracy by only 0.016. This illustrates that developers can make trade-offs between energy consumption and ML quality measures, such as precision and recall. In the same line of research, Georgiou et al. <cit.> compared the energy consumption of two frameworks (TensorFlow, PyTorch) for the development of DL by implementing and comparing the performance of six machine learning models. Energy consumption varied significantly in both the training and inference stages, with TensorFlow requiring less energy for training and PyTorch less energy for inference. However, the framework documentation did not provide information on hardware specifications to allow developers to select models and frameworks with regard to energy requirements. Verdecchia et al. <cit.> modified the underlying datasets for training DL models to reduce energy consumption during training. Results showed that reducing dataset size, either in the number of features or number of data points, improves the energy efficiency by up to 92%, while having a negligible effect on accuracy reduction for selected algorithms. Garcia-Martin et al. <cit.> investigated the impact of parameter tuning on energy consumption and accuracy for the Very Fast Decision Tree algorithm. In some cases, small reductions in accuracy (< 0.1) can reduce energy consumption by more than 70%. 
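To make this metric concrete, a small illustration follows; the two configurations and their numbers are hypothetical and only serve to show how energy cost per accuracy point trades off against raw accuracy.

# Hypothetical APR configurations: (energy in kWh, fraction of bugs fixed)
configs = {"config_A": (12.0, 0.40), "config_B": (30.0, 0.46)}
for name, (energy_kwh, accuracy) in configs.items():
    points = 100 * accuracy  # accuracy expressed in percentage points
    print(name, round(energy_kwh / points, 3), "kWh per accuracy point")
# config_B gains 6 accuracy points over config_A but more than doubles the energy cost per point.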
For an overview of publications addressing Green AI (AI systems developed with sustainability and costs considered), we refer to the systematic review by Verdecchia et al. <cit.>. §.§ Energy Consumption of Large Language Models To support responsible NLP, Zhou et al. <cit.> proposed the platform Hulk for benchmarking pre-trained language models in terms of time and cost. Processing time and costs are measured according to cloud services' hardware specifications and resource consumption. The cost of NLP models is evaluated at three stages: pre-training, fine-tuning, and inference. Pre-training is the most expensive stage in the development of language models: it can take several days and can cost up to 75,000$. However, once pre-trained, a model can be fine-tuned for several tasks, which requires less computational resources. Strubell et al. <cit.> provided insights on the financial and environmental costs of training large language models for NLP tasks. In particular, they estimate the training cost in USD and carbon emissions for four open-source models. For example, training large language models with neural architecture search can cause emissions 17 times as high as the average per-capita consumption in America. Given the high cost of NLP models, Strubell et al. formulated three actionable recommendations: (1) authors should report training times to allow for a cost-benefit analysis rather to solely focus on accuracy; (2) researchers need equal access to computational resources; (3) efficient hardware and algorithms should be prioritized. § LITERATURE SEARCH To find relevant literature, we adopt and adapt a snowballing search procedure <cit.>. We make small adjustments to the search procedure described by Wohlin et al. <cit.>, as we aim to build on four recent surveys in the domain of deep learning models for software engineering tasks. The surveys examine different research questions than ours but consider the same domain, which makes them good starting points. Moreover, we apply the inclusion and exclusion criteria after each snowballing step to control the scope of the search, as it can quickly become too wide and cover all of SE. Figure <ref> presents an overview of the search procedure and the number of publications collected. The search and subsequent information extraction were conducted by the first two authors; the third author helped mitigate classification discrepancies where needed. In step 1, we select the four survey papers that seed the study. We include both published work and arXiv preprints to ensure timeliness. The four seed surveys are: * Chen and Monperrus <cit.>: A literature study on embeddings learned from source code. Embeddings have been trained on different levels of granularity (e.g., binary code, tokens, functions). A list of 21 publicly available embeddings is provided. * Sharma et al. <cit.>: A survey of ML techniques for analysing source code. A total of 364 studies published from 2002–2021, divided over 12 SE tasks. For each task, data collection, feature extraction and model training stages are outlined. They listed 61 tools for analyzing source code and applying ML techniques. * Watson et al. <cit.>: A literature review of deep learning approaches in SE research. A total of 128 deep learning publications spanning 23 SE tasks have been reviewed. * Niu et al. <cit.>: A survey on pre-trained models on source code applied to SE tasks. They presented a total of 20 pre-trained code models that have been applied to 18 tasks. 
These four initial studies contain references to a total of 676 publications (step 2). After deduplication, we consider a total of unique publications for further investigation based on their title (step 3). We deem a paper of interest for further analysis if the title matches the following inclusion criteria: * the publication addresses an SE task, and * the publication applies a deep learning technique. To filter relevant publications, we read all publications, published from 2012-2022, and flag publications for exclusion if they did not train language models for source code, i.e., the exclusion criterion for step 4 is: * the publication does not train a source code model. This step leaves us with publications for further analysis. In step 5, we extract information from the publications to determine how they share artifacts. First, we investigated if source code is available. For this purpose, we analyzed the respective publications for links or references to external sources (e.g., a GitHub repository for source code, or Zenodo for datasets and tools). Among the publications, 33 publications did not provide source code (“unavailable” in Fig. <ref>).[ To ensure that no artifacts were overlooked, we performed additional Google searches for publications that did not mention artifacts for replication.] Next, we determined if the shared artifacts do not only provide source code, but include fully functional tools or checkpoints for ML models that are ready to use, without the need to be trained. This was the case for 35 out of the publications (“reusable” in Fig. <ref>). The remaining 40 publications provided source code but no trained artifacts (“reproducible” in Fig. <ref>). The initial survey-based search is followed by repeated backward snowballing in steps 6 & 7. During snowballing, we collect additional relevant publications that have been cited by the 35 publications which provided trained artifacts collected prior. Snowballing is performed incrementally on publications that share trained artifacts, until no new publications are found (i.e., we perform multiple iterations, and stop when a fixed point is reached, which happened after four iterations). This yielded 292 additional publications (published from 2002-2023) that fit the inclusion criteria, bringing the total to (202 from step 3 and 292 from step 6). After further inspection, 107 of the 292 additional publications did not train a language model and were excluded. Furthermore, 83 of those publications did not provide source code, and 58 of the publications shared source code but not the trained artifacts. Thus, repeated snowballing adds 44 publications that share trained artifacts, bringing the total to publications with shared artifacts, published from 2015 to 2022. We classify these publications with respect to the 11 SE tasks presented in Table <ref>, which were inspired by Niu et al. <cit.>. To address these tasks, source code models were trained on 18 different programming languages. The most frequent languages include Java (45 publications), Python (32 publications), and C and/or C++ (18 publications). Figure <ref> presents the number of publications for each combination of programming language and SE task (e.g., an approach trained on Java source code for code completion). We found that there are two types of trained artifacts that were publicly available: (1) trained ML models and tools; (2) source code embeddings. 
While trained models and tools are aimed at a specific task, source code embeddings are task-agnostic and provide comprehensive code representations for training future models with less effort than generating pre-trained embeddings <cit.>. Section <ref> (task-specific tools) and Section <ref> (task-agnostic embeddings) present detailed information for the two types of shared artifacts. [style=mystyle] Answer to RQ1 Out of the reviewed publications, 33% shared source code and 27% shared trained artifacts. § TASK-SPECIFIC CODE MODELS This section presents approaches with shared artifacts that are designed to address specific tasks. In total, we collected 52 task-specific publications, which are summarized in Table <ref>. Publications are presented with regards to the task they address and their respective programming language is shown, as well as hardware configuration and training time, if provided. Among the 52 publications, two publications shared artifacts for more than one task. Hoang et al. <cit.> proposed CC2Vec, an approach for representing code changes. For each of the three tasks (log message generation, bug fixing patch identification, and just-in-time defect prediction), they trained and shared a separate model. Huang et al. <cit.> first introduced a new dataset called CoSQA, consisting of 20,604 human-annotated labels for natural language and source code pairs. Additionally, they proposed a model, CoCLR, trained on two tasks: code search and question answering. Their GitHub repository provides model checkpoints for both of these tasks. The most frequently addressed tasks are concerned with faulty programs: code repair and defect prediction. Ten publications proposed approaches for code repair and nine publications addressed defect prediction. The task with the fewest available artifacts is code translation. Only Lachaux et al. <cit.> shared their TransCoder models for translating between three programming languages (Java, C++, Python). To allow for the translation of each pair of languages, they shared two models: 1) translate C++ → Java, Java → C++, Java → Python; 2) C++ → Python, Python → C++, Python → Java. The most popular programming languages, among 11 unique languages considered by the 52 publications, are Java (23 out of 52 publications), C/C++ (14 out of 52 publications), and Python (14 out of 52 publications). In detail, 42 publications considered one programming language, while ten publications were applied to more than one language: six publications considered two programming languages, one publication considered three languages, and three publications considered four languages. This results in an average of 1.33 programming languages considered per publication. In addition to programming languages considered, we collect training details, such as hardware used and training time for each publication. However, those are not always provided. There are 22 out of 52 publications without hardware details (42%) and 26 out of 52 without training time (50%), 33% shared neither information (17 out of 52 publications). The training time of 26 publications with such details ranges from two hours or less <cit.> to hundreds of hours <cit.>. While it is common to perform training on GPUs, there are four publications that did not use any GPU for their training procedure, published from 2015–2019 <cit.>. Commonly, publications used a single GPU for training <cit.>, sometimes in combination with CPUs. The highest amount of GPUs have been used by Svyatkovskiy et al. <cit.>. 
They utilized 5 Lambda V100 boxes, with 16 V100 GPUs each, resulting in 80 GPUs. While we focus on the training procedure and the energy associated with creating and sharing an ML model, we note the application of such models can vary highly for different SE tasks. Usually, the reported tested times are lower than the required training time (e.g., more than 100 times quicker than training <cit.>), but in particular, program repair experiments can require long testing times. For example, Chen et al. <cit.> applied Sequencer for 130 hours to find patches for 75 bugs. White et al. <cit.> applied their program repair tool DeepRepair for 2,616 days. Data extraction and preparation steps can also require considerable amounts of time and compute resources, ranging from 5-12 days <cit.>. The majority of task-specific publications provided access to the full trained models, some of which one needs to request access to <cit.>. Moreover, there are approaches shared as online tools <cit.> or IDE extensions <cit.>. There are also 12 out of 52 publications that did not share the full model, but trained embedding files, which are used by the model. These are marked in Table <ref> with the † symbol. § TASK-AGNOSTIC CODE MODELS This section presents task-agnostic code models which share means of representing source code as embeddings, for a variety of downstream tasks. These models are able to transform code snippets to embeddings, which can be fine-tuned to SE tasks. For example, Lu et al. <cit.> provided fine-tuning details for the CodeXGLUE benchmark, with information for task-specific training and inference time for each task.[ <https://microsoft.github.io/CodeXGLUE/>] The fine-tuning time ranges from 2 GPU hours (defect detection) to 60 hours (text-to-code generation, documentation translation). In total, we collected 27 task-agnostic models, as shown in Table <ref>. For each publication, we list the model name and the programming languages it was trained on. If available, we list details on hardware configuration and training times. Among the 27 publications, 52% did not provide training time details (14 out of 27) and 26% did not provide their hardware configurations (7 out of 27). For publications without hardware details, training time is not reported as well. Among the publications that shared training time details, the shortest duration is found for code2vec <cit.>, which was trained for 1.5 days and a single GPU. However, training large models can usually take weeks, up to 87 days for CodeTrans <cit.> and 3.5 months for BLOOM <cit.>. The long training time of BLOOM can be explained by the fact that it was trained on the highest number of programming languages (13 programming languages) in addition to 46 natural languages. Thereby, BLOOM is also the model trained on the highest number of programming languages, as it was trained on 13 out 14 programming languages we observed. BLOOM was not trained on LISP, which was only considered by CodeTrans <cit.> On average, each task-agnostic model is trained on source code data from 3.6 programming languages. Moreover, 10 out of the 27 publications train on a single programming language, which in 6 out 10 cases is Java. In comparison to task-specific models, task-agnostic models are trained on more programming languages, 3.6 in comparison to 1.3 programming languages on average, and require a higher computational effort. 
In addition, publications that provide task-agnostic models for embedding source code are more likely to share hardware configurations than publications with task-specific models. The proportion of publications without training time details is comparable for both types (50% and 52%, for task-specific and task-agnostic models, respectively). Another difference is that task-agnostic models use more sophisticated hardware for training, with each publication using either GPUs or TPUs. Only one publication considered CPUs in addition to GPUs for training <cit.>. § DISCUSSION To discuss the various facets of RQ2, we consider three aspects: (A) How much energy do task-specific and task-agnostic models consume? (B) To what extent do studies on source code models take sustainability concerns into account? (C) When is sharing a model more efficient than re-training? §.§ Energy Usage of Task-specific vs. Task-agnostic Models First, we perform a comparison of the energy consumed by training task-specific and task-agnostic models. For this purpose, we collect all publications that provide hardware and training time details, such that we can estimate the consumed energy in kilowatt-hours (kWh). In total, 30 publications provide sufficient information.[Note that we did not contact authors to provide missing information.] To estimate energy consumption, we used the Green Algorithms calculator <cit.>.[ <https://www.green-algorithms.org/>] This calculator is designed to estimate the carbon footprint and energy needed to run algorithms based on the number and type of CPU/GPU cores, runtime, available memory, and platform run on (PC, local server, cloud). It is also possible to consider the location for training and running algorithms, because the energy mix in the grid impacts the carbon footprint. In contrast to the Machine Learning Emissions Calculator <cit.>, the Green Algorithms calculator provides averaged options when details are missing (e.g., “world” if the country is unknown, “Any” CPU type if the type is not known), which is beneficial for estimating energy consumption if these details are missing. Our estimates report the energy needed in kWh with the default location set to “world”, because server locations are seldom reported. We share energy usage estimations in the last column of Table <ref> and Table <ref> for task-specific and task-agnostic artifacts, respectively. Hardware specifications required by the Green Algorithms calculator are incomplete in the majority of studies considered. Most of the models are trained using a type of accelerator, such as GPU or TPU. Four studies reported cloud provider utilization, while the other studies used different server configurations. To this end, we make assumptions about the missing specifications based on the standard CPU and GPU values stated in product descriptions on web pages of Intel and NVIDIA. In case the calculator does not cover a specific CPU type, we fetch Thermal Design Power (TDP) information from the manufacturers' website, to estimate the power used per core. In addition, for publications that used both CPU and GPU for training their models, we consider both to be active during the entirety of the training time, unless stated differently. We use the specifications reported in Table <ref> unless stated otherwise by the publications. Figure <ref> illustrates the energy consumed for training for each of the 30 publications. Of these, 12 provided task-agnostic models and 18 task-specific models. 
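The kind of estimate such calculators produce can be approximated with simple arithmetic: energy ≈ power draw × runtime × PUE. The sketch below follows that spirit; all constants (TDP values, memory power, PUE) are illustrative assumptions and not the exact coefficients used by the Green Algorithms calculator.

def training_energy_kwh(runtime_h, n_gpus=1, gpu_tdp_w=300.0,
                        cpu_cores=8, core_tdp_w=12.0,
                        mem_gb=64, mem_w_per_gb=0.37, pue=1.67):
    # power draw of accelerators, CPU cores and memory, scaled by data-center PUE
    draw_w = n_gpus * gpu_tdp_w + cpu_cores * core_tdp_w + mem_gb * mem_w_per_gb
    return runtime_h * draw_w * pue / 1000.0

# e.g., a hypothetical model trained for 5 days on 4 GPUs:
print(round(training_energy_kwh(runtime_h=5 * 24, n_gpus=4), 1), "kWh")  # ~264.5 kWh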
Among the task-specific models, 5 only provided partially trained artifacts (e.g., embeddings that are used for later training), and therefore require additional training effort before usage. [style=mystyle] Answer to RQ2-A 30 out of publications share sufficient information to estimate their energy consumption during training. Among these, the training of task-agnostic models used more sophisticated hardware (GPUs and TPUs) and required more energy. §.§ Sustainability Concerns Considered in DL4SE Studies In Section <ref>, we outlined publications that provided sufficient information to estimate the energy required to replicate their models (i.e., hardware and training time). While this is important to understand how high the energy requirements are, it does not illustrate whether the resource usage is sustainable, or whether sustainability was taken into account. Only in a few cases do authors consider the sustainability of the training process and the carbon footprint caused. Here, we present all three publications that, in addition to providing pre-trained artifacts, mention sustainability concerns when training. All of these three trained and provided large task-agnostic models, two of which required “hundreds of petaflop/s-days of compute” <cit.> or more than a million GPU hours for training <cit.>. Chen et al. <cit.> trained Codex on Azure, which purchases carbon credits and uses renewable energies to reduce the carbon footprint. Using the pre-trained Codex model for repeated inference could exceed training costs. Wang et al. <cit.> stated that the experimental design followed the objective of avoiding unnecessary computation, by creating smaller-sized models in comparison to existing ones, such as Codex. Moreover, training has been conducted on the Google Cloud Platform, which purchases carbon credits to offset the 49.25kg caused by training CodeT5. Le Scao et al. <cit.> considered various sustainability aspects during the creation of BLOOM: equipment manufacturing, model training, model deployment. The 81 tons of needed for training BLOOM can be attributed to 14% equipment manufacturing, 30% training, 55% idle energy consumption. Training benefits from France's energy grid, which uses nuclear energy in a large proportion, as a low-carbon energy source. Further details on the carbon footprint of BLOOM are provided in a dedicated study by Luccioni et al. <cit.>. [style=mystyle] Answer to RQ2-B Three publications covered sustainability concerns of the training process in addition to providing trained models. For example, they used cloud providers that purchase carbon credits or calculated emissions resulting from training the shared models. §.§ When is Sharing Models More Efficient than Re-training? In this section, we provide an exemplary scenario to compute and compare the energy required for training and storing a task-specific and task-agnostic model. We also show the energy used for downloading shared artifacts, to illustrate the energy-saving capabilities of sharing models trained on code. In accordance with the energy estimates for the training process in Section <ref>, we used the calculator provided by Lannelongue et al. <cit.>. To determine the energy consumption of training and sharing language models, we followed Lakim et al. <cit.> who provided an assessment of the carbon footprint for the Arabic language model Noor. Data storage energy consumption estimates are based on the cloud storage energy consumption reported by Posani et al. 
<cit.>, with a mean operating peak power of 11.3 W/TB. This measure includes a redundancy factor of 2 (i.e., an additional copy is stored) and Power Usage Effectiveness (PUE) of 1.6. Per year, this results in the energy consumption of 99 kWh per TB of data. Following the formula by Baliga et al. <cit.>, Posani et al. <cit.> estimated the energy consumption of data transfers to be 23.9kJ/GB, with 1kJ being equal to 1/3600 kWh. In Table <ref>, we illustrate the exemplary energy consumption of sharing a tool (500 MB) and a large task-agnostic model (5 GB) over the span of one year. Note that we only consider the energy consumed by training and data storage. Other aspects, such as the manufacturing of hardware components, are omitted. Therefore, our example presents a reduced estimate of the complete energy consumption of the entire model lifecycle. One also needs to consider the rebound effect depending on the number of downloads when estimating potential energy savings <cit.>. If trained models are downloaded because it is easy rather than necessary, then excess downloads can cause higher energy consumption than the initial model training caused. In our example, this is the case after 1,247 downloads for the task-specific model and 20,544 downloads for the task-agnostic model. While 20,544 downloads may sound like a large number, CodeBERT <cit.> was downloaded 1,982,300 times from Hugging Face in January 2023.[ <https://huggingface.co/microsoft/codebert-base>] [style=mystyle] Answer to RQ2-C A rebound effect happens when a shared model is downloaded too many times. For example, energy usage for storage and downloading of a 500 MB-size task-specific model is higher than re-training it after ca. 1,130 downloads. § THREATS TO VALIDITY This section discusses the threats to validity of this mapping study based on the categories identified by Zhou et al. <cit.>. Internal Validity Internal validity refers to threats to the validity of results presented in this work, for example, due to missing relevant publications during the literature search stage <cit.>. To mitigate this threat, we use a systematic process, starting our literature search with four comprehensive surveys on machine learning approaches for the SE domain. These provide an overview of relevant publications from 2022 and prior. Moreover, we apply four stages of snowballing to find additional references. This allows us to gather previous approaches which shared their artifacts, but there is a chance that we miss more recent works that have not been cited by any publication in our corpus, as we did not perform forward snowballing. While this can slightly alter our results, we are hopeful that recent works are more likely to share artifacts than publications from the past 10 years. External Validity External validity addresses the domain to which our findings can be generalized to. While our study focuses on the sustainability of shared artifacts for LLMs on code, our results confirm observations of related studies, such as high energy consumption in training LLMs for NLP tasks <cit.> and a lack of shared artifacts of DL studies for SE tasks <cit.>. Therefore, we hope our findings and recommendations are beneficial beyond LLM models for code. Construct Validity Construct validity is concerned with the quality of measures chosen to study the construct of interest. In our case, we were first interested in whether and which artifacts are shared (i.e., none, source code, trained models). 
For this purpose, we considered the absolute amount of publications with respect to the amount of shared artifacts, which coincides with the construct we want to measure. Afterwards, we estimated the energy requirements in kWh for training language models, for which we used the Green Algorithms calculator <cit.>. For the validity of estimates, we assume the correctness of the calculator and the information specified in the respective publications (i.e., type of CPU/GPU and training time). If the information provided was not sufficient, we had to make choices for available memory and hardware parameters. All such choices are provided in Table <ref>, to make our kWh estimates reproducible. Conclusion Validity Conclusion validity describes whether the operations performed and obtained results in this study (e.g., literature search, data collection) can be reproduced <cit.>. Section <ref> outlines our literature search procedure, starting from four existing surveys, followed by iterative snowballing steps. We list our inclusion and exclusion criteria, to allow for a reproducibility. Moreover, we provide a link to the collected publications and extracted information in the Data Availability section, such that our search results can be verified. To allow for the reproducibility of observations and results, we provide all the relevant extracted information in Tables <ref> and <ref>. When information is insufficient, we provided all assumptions over hardware specifications in Table <ref>. § LESSONS LEARNED 1. In general, shared information on the amount of energy consumed by the training and use of SCMs is limited (RQ1; some notable exceptions, such as BLOOM <cit.>, RQ2-B). Specifically, CPU and GPU details are missing or incomplete in the majority of papers, while they are crucial for energy usage estimation. Even with hardware details and training time available, it is hard to make accurate estimates of energy consumption and footprint since other missing factors, such as the server location, impact the estimation (RQ2-A). 2. From the data that is shared, we see that SCMs are extremely energy-intensive to train due to computational requirements, in particular, when compared to the energy required for downloading shared artifacts (RQ2-C). It is therefore important that researchers share their artifacts (RQ1), including pre-trained and fine-tuned models, as well as explore ways to reduce their energy consumption, such as training in clouds with low emissions (e.g., hydro-powered). 3. In general, we find that the larger the model, the higher the energy consumed for its training (RQ2-A), increasing the importance to share model artifacts to ensure sustainability. 4. Not only the energy consumption of training but also that of long-term storage of pre-trained models and datasets, as well as of their downloads should be considered (RQ2-C). 5. On the positive side, SCMs provide ample opportunities for collaborative and cooperative efforts. Sharing artifacts in the end can lead to higher sustainability than when all users would develop their models independently. More work and data is needed to be able to analyze this trade-off, which is why there is a need for a series of guidelines or a checklist to help people systematically report on the environmental/sustainability impact of their techniques. § RECOMMENDATIONS 1. Define the scope of the research and the intended application of task-agnostic or task-specific SCMs to ensure a good understanding of the intended tasks and reuse potential. 2. 
Establish a set of clear and transparent metrics for energy consumption and sustainability to ensure systematic, accurate, and reliable reporting. 3. Specify details of the hardware and software configuration used for the training and inference of SCMs, including the exact types of the processors and accelerators, memory and the number of cores for CPU (e.g., Intel i7-8700 CPU, 6 cores, 32GB memory), the model and memory for GPUs (e.g., 1 NVIDIA Titan X GPU, 12GB), as well as storage media and infrastructure (RQ2-A). 4. Provide energy consumption measurements <cit.> or estimations for both training and inference (RQ2-A). Use existing proven calculators <cit.> and provide complete details in the paper, not just the final result, so that the computation can be repeated if an improved calculator becomes available. 5. Document the footprint associated with energy consumption, considering energy sources and carbon offsetting applied. For cloud infrastructures, this means including the provider and region, because these details vary by location. 6. Assess other environmental impacts of SCMs, including the amount of data and storage required and the impact on the (network) infrastructure (RQ2-C). 7. Provide (and promote) open access to data and models to foster collaboration and reduce duplication of efforts, thereby reducing the energy and resource requirements for SCM development and fine-tuning. Observe that several of these recommendations overlap with the recommendations for reproducible machine learning <cit.>, which also cover additional aspects. § CONCLUSION In this exploratory study, we have performed a snowballing study (i.e., four iterations of backwards snowballing) to find publications on language models for SE tasks, from which we gathered publications of interest. After applying our inclusion and exclusion criteria, we are left with studies, which we investigated further with regard to their reusability and sustainability (e.g., are trained artifacts shared?). We showed that there are deficiencies in the existing studies that train language models on source code regarding the transparency of sustainability aspects. Among the publications, only 27% provide trained artifacts to enable the reuse of their models without incurring the same amount of training effort; 40% of the reviewed publications provide neither source code nor trained artifacts. We collect training information from the surveyed publications, including the hardware configurations and training time. This allows us to estimate how much time and resources can be saved by reusing the artifacts or how many resources are needed to replicate the models. We have estimated the energy consumption for 30 publications that provided sufficient information (i.e., number and type of processors, training time), while only two publications provided details on energy consumption and of the model training <cit.>. We stress the importance of describing hardware configurations and processing times, so that even if energy consumption is not reported, one can estimate the required resources and judge whether one wants to spend effort to replicate ML models. This agrees with Bender et al. <cit.>, who called for the research community to prioritize the environmental and financial cost of deep learning systems, by reporting or evaluating them with regard to resource usage. Optimally, if a publication creates an ML tool or model with the clear intention of its reuse, it can be beneficial to make trained artifacts available. 
As shown, making small tools available for download and reuse can prevent unnecessary energy consumption as opposed to training tools from scratch. Future Work One possible direction for future investigation is an analysis of the literature that cites the energy calculators <cit.> mentioned earlier to assess if their use indeed leads to better communication of sustainability aspects. This could add further evidence to our recommendations. § DATA AVAILABILITY To support open science and allow for replication and verification of our work, an overview of the collected publications and the extracted information is made available via Zenodo.[ Replication package on Zenodo: <https://doi.org/10.5281/zenodo.8058668>. ] § ACKNOWLEDGEMENTS The research presented in this paper was financially supported by the Research Council of Norway through the secureIT project (grant #288787). Max Hort is supported through the ERCIM ‘Alain Bensoussan’ Fellowship Programme.
http://arxiv.org/abs/2307.01394v1
20230703231103
In-depth Analysis On Parallel Processing Patterns for High-Performance Dataframes
[ "Niranda Perera", "Arup Kumar Sarker", "Mills Staylor", "Gregor von Laszewski", "Kaiying Shan", "Supun Kamburugamuve", "Chathura Widanage", "Vibhatha Abeykoon", "Thejaka Amila Kanewela", "Geoffrey Fox" ]
cs.DC
[ "cs.DC", "cs.AI", "cs.IR", "cs.LG" ]
iu]Niranda Perera niranda@niranda.dev uva,uvab]Arup Kumar Sarker djy8hg@virginia.edu uva]Mills Staylor qad5gv@virginia.edu uvab]Gregor von Laszewski laszewski@gmail.com uva]Kaiying Shan shankaiying@gmail.com iu]Supun Kamburugamuve supun@apache.org iu]Chathura Widanage chathurawidanage@gmail.com iu]Vibhatha Abeykoon vibhatha@gmail.com iu]Thejaka Amila Kanewela thejaka.amila@gmail.com uva,uvab]Geoffrey Fox vxj6mb@virginia.edu [iu]Indiana University Alumni, Bloomington, IN 47405, USA [uva]University of Virginia, Charlottesville, VA 22904, USA [uvab]Biocomplexity Institute and Initiative, University of Virginia, Charlottesville, VA 22904, USA The Data Science domain has expanded monumentally in both research and industry communities during the past decade, predominantly owing to the Big Data revolution. Artificial Intelligence (AI) and Machine Learning (ML) are bringing more complexities to data engineering applications, which are now integrated into data processing pipelines to process terabytes of data. Typically, a significant amount of time is spent on data preprocessing in these pipelines, and hence improving its efficiency directly impacts the overall pipeline performance. The community has recently embraced the concept of Dataframes as the de-facto data structure for data representation and manipulation. However, the most widely used serial Dataframes today (R, ) experience performance limitations while working on even moderately large data sets. We believe that there is plenty of room for improvement by taking a look at this problem from a high-performance computing point of view. In a prior publication, we presented a set of parallel processing patterns for distributed dataframe operators and the reference runtime implementation, <cit.>. In this paper, we are expanding on the initial concept by introducing a cost model for evaluating the said patterns. Furthermore, we evaluate the performance of on the ORNL Summit supercomputer. Dataframes High-performance computing Data engineering Relational algebra MPI Distributed Memory Parallel § INTRODUCTION Artificial Intelligence (AI), Machine Learning (ML), and the Big Data revolution have introduced an abundance of complex data engineering applications in the data science domain. These applications are now required to process terabytes of data and are orchestrated as an intricate collection of data engineering pipelines. To achieve this, a significant amount of developer time is spent on data exploration, preprocessing, and prototyping. Therefore, improving the efficiency of such activities directly impacts the overall data engineering pipeline performance. Databases and structured query language (SQL) have been the de-facto tool for data preprocessing applications. However, in the early 2000s, the focus shifted significantly towards Big Data toolkits and frameworks. These systems (eg. Hadoop <cit.> and map-reduce <cit.>, Spark <cit.>, Flink <cit.>, etc.) enabled more capabilities than traditional relational database management systems (RDBMS), such as functional programming interface, consuming large structured and unstructured data volumes, deploying in the cloud at scale, etc. Coinciding with the big data developments, enterprise and research communities have invested significantly in artificial intelligence and machine learning (AI/ML) systems. Data analytics frameworks complement AI/ML by providing a rich ecosystem for preprocessing data, as these applications require enormous amounts of data to train their models properly. 
In recent times, the data science community has increasingly moved away from established SQL-based abstractions and adopted Python/R-based approaches, due to their user-friendly programming environment, optimized execution backends, broad community support, etc. Dataframes play a pivotal role in this transformation <cit.> by providing a functional interface and interactive development environment for exploratory data analytics. Most dataframe systems available today (e.g. R-dataframe, Pandas) are driven by the open-source community. However, despite this popularity, many dataframe systems encounter performance limitations even on moderately large data sets. We believe that dataframe systems have now exhausted the capabilities of a single computer and this paves the way for distributed and parallel dataframe processing systems. §.§ Background: High-Performance Dataframes from Parallel Processing Patterns In the precursor publication, titled "High-Performance Dataframes from Parallel Processing Patterns" <cit.>, we presented a framework that lays the foundation for building high-performance distributed-memory parallel dataframe systems based on parallel processing patterns. There, we analyzed the semantics of common dataframe operators to establish a set of generic distributed operator patterns. We also discussed several significant engineering challenges related to developing a scalable and high-performance distributed dataframe (DDF) system. The main goal of this framework is to simplify the DDF development process substantially by promoting existing serial/ local operators into distributed operators following the said patterns. They primarily focus on a distributed memory and Bulk Synchronous Parallel (BSP) <cit.> execution environment. This combination has been widely employed by the high-performance computing (HPC) community for exascale computing applications with admirable success. Based on this framework, we developed , an open-source high-performance distributed dataframe system <cit.>. In this paper, we present an in-depth analysis of the aforementioned parallel processing patterns based on a cost model. We encapsulate the parallel processing patterns concept into " Distributed Operator Model" and present " Communication Model" which allows plugging-in multiple communication runtimes into distributed execution. These two aspects constitute the " Distributed Memory Execution Model", which we will discuss in detail in the following sections. Furthermore, we will introduce a cost model to evaluate the performance of distributed memory execution. In addition, we demonstrate the scalability of on leadership-class supercomputing environments, which affirms the significance of the underlying framework. We have also conducted a scalability analysis between and related state-of-the-art data processing systems. This analysis demonstrates the applicability of the design across the board, on both distributed computing and supercomputing infrastructure. In the following sections, we use to refer to its underlying high-performance DDF framework interchangeably. § DISTRIBUTED-MEMORY EXECUTION MODEL is based on the distributed memory parallel model, which isolates memory for each parallel process. These processes can manage their memory individually while communicating with others using message passing. This isolation makes distributed operator implementation easier to reason about. 
While it leaves room for improvement, especially using multi-threading execution, the results show that dataframes show superior scalability over the state-of-the-art systems. In addition, it is based on BSP execution in the distributed memory environment. Gao et al. <cit.> recently published a similar concept for scaling joins over thousands of Nvidia Graphical Processor Units (GPU). experiments demonstrate that this approach can be generalized to all operators and achieves commendable performance. Conceptually, we can divide distributed execution model into two distinct sub-models, 1. Communication Model, and 2. Distributed Operator Model. We will discuss the former in Section <ref> and the latter in Section <ref>. §.§ Distributed Memory Parallel Dataframe Definition The primary insight behind is to present a dataframe framework that promotes an already available serial (local) operator into a distributed memory parallel execution environment <cit.>. For this purpose, we formally defined a Distributed Memory Parallel Dataframe based on row-based partitioning in our previous publication <cit.>. This concept is depicted in Figure <ref>. The dotted lines represent the virtual collection of Partitions in the distributed memory parallel environment. Users would not see a separate distributed API object but instead, continue to write their program as they would work on a single partition. The execution environment determines if the operator needs to be performed locally or in a distributed fashion based on the operator's semantics. For example, Figure <ref> shows a Pandas script that reads data from two directories, joins them, sorts the result, and takes the top 10 rows. A corresponding script for distributed-memory Dataframes is shown in Figure <ref>. §.§ Apache Arrow Columnar Memory Layout uses Apache Arrow Columnar format as the physical data representation. This is an integral component of the memory model. It provides several benefits, such as data adjacency for sequential access (scans), O(1) (constant-time) random access, SIMD vectorization-friendly data structure, true zero-copy access in shared memory, etc. It also allows serialization-free data access from many language runtimes. Due to these benefits, many libraries including Pandas, PySpark <cit.>, CuDF <cit.>, and Ray <cit.>, are now using the Apache Arrow format. § COMMUNICATION MODEL In many dataframe applications, communication operations take up significant time creating critical bottlenecks. This is evident from our experiments (Section <ref>), where we evaluate communication and computation time breakdown applied to several dataframe operator patterns. Moreover, most frameworks (eg. Spark, Dask, Ray), provide special guidelines to reduce communication overheads (eg. shuffle routine) <cit.>. Therefore, careful attention has been given while developing the communication model for . BSP execution allows the program to continue independently until the next communication boundary is reached. Message passing libraries such as MPI (OpenMPI, MPICH, IBM Spectrum, etc), Gloo, and UCX <cit.> provide communication routines for memory buffers, which by extension support homogeneously typed arrays. The most primitive routines are point-to-point (P2P) message passing, i.e., tag-based async send and async receive. Complex patterns (generally termed collectives) can be derived on top of these two primitive routines (eg. MPI-Collectives, UCX-UCC). 
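As an illustration of how a collective such as an all-to-all shuffle can be derived from the two point-to-point primitives, the following minimal mpi4py sketch exchanges one small partition per destination rank. It is illustrative only, not the channel-based implementation described in the next section, and it moves pickled Python lists rather than Arrow buffers.

# shuffle_p2p.py -- illustrative sketch only.
# Run with, e.g.: mpirun -n 4 python shuffle_p2p.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one small outgoing partition per destination rank.
out_parts = [[f"row-{rank}-{dst}-{i}" for i in range(2)] for dst in range(size)]

# Post non-blocking (async) sends to every remote destination first ...
send_reqs = [comm.isend(out_parts[dst], dest=dst, tag=0)
             for dst in range(size) if dst != rank]

# ... then receive one partition from every other rank; the local partition
# is kept as-is, so no self-communication is needed.
in_parts = [None] * size
in_parts[rank] = out_parts[rank]
for src in range(size):
    if src != rank:
        in_parts[src] = comm.recv(source=src, tag=0)

for req in send_reqs:
    req.wait()

print(f"rank {rank} gathered {sum(len(p) for p in in_parts)} rows")

Posting all non-blocking sends before entering the blocking receives avoids deadlock and mirrors the progress-based channel design discussed below.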
Unlike multi-dimensional arrays, heterogeneous data types in dataframes make communication routines more involved. The Arrow columnar data format represents a column by a tuple of buffers ( validity bitmap, offsets, & data). A dataframe incorporates a collection of such columns. Therefore, a communication routine would have to be called on each of these buffers. communication model outlines a set of communication collectives required to implement distributed memory parallel dataframes by inspecting the semantics of core dataframe operators. These are listed in Table <ref> together with their frequency of usage for each data structure. The key features of the communication model are, * Modular architecture: Allows plugging-in multiple communication libraries. * Extensibility: The communication model has been easily extended into Nvidia CUDA GPU hardware, in GCylon project. Figure <ref> depicts the overall architecture. §.§ Communicator The communicator interface manages communication routines (Figure <ref>). At the very top, the user API defines routines based on the data layer data structures, as described in Table <ref>. These are blocking routines for the user (e.g., will wait until completion). The communicator implements these routines using two abstract constructs, (1). channels (for point-to-point/ send-receive communications) and (2). collective communications. The former works only on byte buffers, and the collectives can also be implemented using these channels. In fact, is implemented using channels due to a mismatch in traditional . The abstract collective communications implement collective routines for composite data structures (tables, arrays, and scalars), using collectives on buffers. This abstract implementation allows to easily plug in multiple communication libraries that support BSP semantics, such as OpenMPI <cit.>, UCX <cit.>, and Gloo <cit.>. §.§ Abstract Channels Channels are designed to be used for composite buffer communications in a non-blocking manner. During the initialization, it registers two callbacks which inform the caller that (1). the sending has been completed, (2). the data is received for a particular buffer. It then accepts requests that contain the buffer address and metadata (such as buffer size, buffer index, etc.) to be sent. The caller then has to progress through sends and receives. First, the channel exchanges buffer metadata, which is used to allocate memory for receiving buffers. Later on, it starts exchanging data. Both these progressions use non-blocking send/receive routines. Once each receiving buffer completes, it will be passed on to the caller using the receive-callback. Channels give much flexibility to the caller to implement composite communication routines. However, there are disadvantages to this as well. Most importantly, each buffer collective routine must be implemented from scratch using channels. As listed in Table <ref>, we need to implement multiple communication algorithms to get the best performance for collectives. Managing such a custom communication library code base could be a cumbersome exercise. Currently, routine is implemented using the channels. §.§ Abstract Collectives Abstract collectives are higher-level communication abstraction that implements table, array, or scalar collectives using non-blocking buffer collective routines. For example, an can be implemented as a collection of non-blocking routines. To do this, we create a metadata structure with the buffer pointers, sizes, data types, etc. 
of the input table and call corresponding communication routines on each buffer. In the end, we recreate the resultant table based on the output buffers. §.§ Supported Communication Libraries Currently, communicator supports the following communication libraries that support BSP message-passing semantics. §.§.§ OpenMPI OpenMPI is a widely used open-source implementation of the MPI specification. It consists of two main components, (1). process management and (2). communication library. Currently, Process Management Interface Exascale (PMIx) standard <cit.> is used for the former, while various communication algorithms have been implemented (Table <ref>) as a part of the latter. It is a comprehensive communication library with a rich collection of communication routines for many distributed computing and HPC applications. communication model was also heavily influenced by OpenMPI. §.§.§ Gloo Gloo collective communications library is managed by Meta Inc. incubator <cit.> predominantly aimed at machine learning applications. PyTorch uses it for distributed all-reduce operations. It currently supports TCP, UV, and ibverbs transports. Gloo communication runtime can be initialized using an MPI Communicator or an NFS/Redis key-value store (P2P message passing is not affected). Gloo lacks a comprehensive algorithm implementation as an incubator project, yet our experiments confirmed that it scales admirably. We have extended the Gloo project to suit communication interface. §.§.§ UCX/UCC Unified Communication X (UCX) is a collection of libraries and interfaces that provides an efficient and convenient way to construct widely used HPC protocols on high-speed networks, including MPI tag matching, Remote Memory Access (RMA) operations, etc. Unlike MPI runtimes, UCX communication workers are not bound to a process bootstrapping mechanism. As such, it is being used by many frameworks, including Apache Spark and RAPIDS (Dask-CuDF). It provides primitive P2P communication operations. Unified Collective Communications (UCC) is a collective communication operation API built on UCX, which is still being developed. Similar to MPI, UCC implements multiple communication algorithms for collective communications. Based on our experiments, UCX+UCC performance is on par with or better than OpenMPI. § DISTRIBUTED OPERATOR MODEL distributed operator model provides the basis for elevating a local dataframe operator to a distributed memory parallel dataframe operator. This was the primary idea behind our precursor publication <cit.>. It comprises two key observations, * A distributed operator consists of three major sub-operators: * Core local operator * Auxiliary local operators * Communication operators For example, the bottom image in Figure <ref> shows how the distributed is composed of these sub-operators. * By examining the composition of these sub-operators, they can be categorized into several parallel execution patterns, as depicted in Figure <ref>. Therefore, rather than analyzing/ optimizing each operator, we can focus on these parallel patterns. In addition, some operators can be implemented using multiple algorithms that show distinctive parallel patterns (e.g., can be done by shuffling or by broadcasting). Hence, understanding these patterns is essential to choose the best runtime strategy. We believe understanding distributed dataframe operator patterns reduce the burden of parallelizing a massive API, such as Pandas. To address the same problem, Petersohn et al. 
<cit.> introduced a primitive set of dataframe operators that could be used as a basis for the rest, termed Dataframe Algebra. Our dataframe operator patterns are a complementary concept to dataframe algebra, as shown in Figure <ref>. §.§ Core Local Operator These refer to single-threaded implementations of primitive operators. There could be one or more libraries that provide this functionality, such as , , RAPIDS CuDF <cit.>, Acero (Apache Arrow Compute), etc., or locally developed as a part of . The choice of the library depends on the language runtime, the underlying memory format, and the hardware architecture. This is to prevent redundant development efforts for reinventing the existing functionality. §.§ Auxiliary Sub-operators Partition operators are essential for distributed memory applications. Partitioning determines how a local data partition is split into subsets so they can be sent across the network. This operator is closely tied with the Shuffle communication routine. Hash partition, range partition, and rebalance are several key auxiliary operators. §.§ Parallel Processing Patterns & Operator Implementations According to our previous publication, dataframe operators can be broadly separated into three categories <cit.>, as described in Table <ref>. * Embarrassingly parallel: Operators that require no communication * Loosely synchronous: Operators that require communication at some stage in their implementation. This is a broad category; therefore, it is separated into the following subcategories. * Shuffle-compute * Sample-shuffle-compute * Combine-shuffle-reduce * Broadcast-compute * Globally reduce * Halo exchange * Partitioned I/O: I/O operators in distributed memory parallel environments require communication to load balance data amongst the workers. § COST MODEL FOR EVALUATION A cost model can be applied to the distributed operator model to estimate the execution time/cost of each operator pattern. As observed before, each pattern comprises three sub-operators. Hence, the total cost estimate (T_total) is the sum of the cost of each sub-operator, T_total = T_core + T_aux + T_comm where * T_core → Core local operator cost * T_aux → Auxiliary local operator cost * T_comm → Communication operator cost. We analyze the communication and computation cost of distributed dataframe operators in the subsequent sections, using the following notation. * P → Parallelism * N → Total number of rows * n = N/P → Number of rows per process * c → Number of columns (constant for row-partitioned data) * 𝐍 = N× c → Total amount of distributed work/ total data * 𝐧 = 𝐍/P → Work per process/ rows per process * 𝐂 → Cardinality of data §.§ Communication Cost (T_comm) Based on the literature, Hockney <cit.>, LogP <cit.>, and LogGP <cit.> are some of the most commonly used cost models to evaluate collective communication operations. The Hockney model provides a simple communication cost estimation, and therefore, it has been used in many recent publications <cit.>. The model fails to capture network congestion; however, it provides an adequate cost estimation to evaluate . The model assumes that the time taken to send a message between any two nodes can be modeled as T = α + nβ where * n → Message size/ number of bytes transferred * α → Latency/ startup time per message (independent of n) * β → Transfer time per byte. Let us take Shuffle (AllToAll) as an example. uses a non-blocking send-receive-based implementation.
Each worker would shuffle 𝐧 data with others in P iterations. In each iteration, it would send and receive 𝐧/P amount of data (on average, for uniformly distributed data). Out of the P iterations, one iteration is a local data transfer. Therefore, T_shuffle = (P-1)(α + 𝐧/Pβ) = (P-1)α + (P-1)𝐧/Pβ Therefore, for row-partitioned data, T_shuffle = T_startup + T_transfer = O(P) + O(P-1/P× n) Table <ref> describes the communication costs of communication routines used in distributed dataframe operator implementations for multiple algorithms based on the Hockney model. It uses the definitions described in the Section <ref>. §.§ Computation Cost (T_core + T_aux) Core local operator cost (T_core) & auxiliary local operator cost (T_aux) constitutes the computation cost. Since these are local operations, the cost can be derived from time complexity of the algorithm. For example, a local operation would take (when using a quick-sort algorithm for uniformly distributed data), T_sort = O(n log_n) Table <ref> describes the time complexities of commonly used local dataframe operators (Core local operator cost, T_core) and their output size (n_new). §.§ Total Cost of Dataframe Operator Patterns We will look at the total cost of each operator pattern in the following subsections. §.§.§ Embarrassingly Parallel This is the most trivial class of operators since they do not require any communication to parallelize the computation. Select, Project, Map, and Row-Aggregation fall under this pattern. Arithmetic operations (ex: , , etc.) are also good examples of this pattern. Embarrassingly parallel distributed operators can simply call the corresponding local operator, and therefore the cost estimation of this pattern is, T_EP = O(n) §.§.§ Shuffle Compute This common pattern can be used for operators that depend on Equality/Key Equality of rows. Of the core dataframe operators, and directly fall under this pattern. In contrast, follows a more nuanced approach. Partitioning and shuffling communication routines rearrange the data so that equal/key-equal rows are on the same partition at the end of the operation. This guarantees that the corresponding local operation can be called at the end of the shuffling stage. Join, Union and Difference operators follow this pattern: Partition→Split→Shuffle→LocalOp Therefore, the cost estimation of shuffle compute for each worker is, T_shuffle_compute(hash) = O(n) + O(P) + O(P-1/P× n) + T_core T_shuffle_compute(range) = O(log_P) + O(n) + O(P) + O(P-1/P× n) + T_core Typically partitioning schemes (hash, range, etc.) are map operators and, therefore, access memory locations contiguously. These can be efficiently executed on modern SIMD-enabled hardware. However, the local operator may need to access memory randomly (e.g., a join that uses a hash table). Therefore, allowing the local operator to work on in-cache data improves the efficiency of the computation. This can be achieved by simply attaching a local partition block at the end of the . Partition→Split→Shuffle→Partition→Split→LocalOp A more complex scheme would be to partition data into much smaller sub-partitions from the beginning of the pipeline. Possible gains on each scheme depend heavily on runtime characteristics such as the data distribution. §.§.§ Sample Shuffle Compute This pattern is an extension of the shuffle-compute pattern. Sampling is commonly used for operators such as distributed . 
It gives an overview of the data distribution, which needs to be communicated among the other workers to determine an ordered (range) partition scheme. This can be achieved trivially by calling operation, or by a composite of communication & computation steps (eg. sample sort). Sample→ Communicate insights→ Partition→ Split→ Shuffle→ LocalOp uses multiple algorithms for distributed implementation. The data can be range-partitioned for numerical key columns based on a key-data histogram, and it would have the following total cost per worker. Sample→ Allreduce range→ Binning &Range part.→ Shuffle→ Local sort T_sort(range) = O(log_P) + O(n) + O(P) + O(P-1/P× n) + O(nlog_n) For the rest, uses sample sort with regular sampling <cit.>. It sorts data locally and sends a sample to a central entity that determines pivot points for data. Based on these points, sorted data will be split and shuffled. Finally, all executors merge the received sub-partitions locally. Local sort→ Sample→ Gather @rank0→ Calc. pivots @rank0→ Bcast pivots→ Split→ Shuffle→ Local merge §.§.§ Combine Shuffle Reduce Another extension of the Shuffle-Compute pattern, Combine-Shuffle-Reduce, is semantically similar to the map-reduce <cit.> paradigm. The operations that reduce the output length, such as Groupby and Unique, benefit from this pattern. The effectiveness of combine-shuffle-reduce over shuffle-compute depends on the Cardinality (𝐂) (i.e., the ratio of unique rows to the total length). It follows, LocalOp (interm. results)→ Partition→ Split→ Shuffle→ LocalOp with final res. The initial local operation reduces data into a set of intermediate results (similar to the Combine step in MapReduce), which would then be shuffled. Upon their receipt, a local operation is performed to finalize the results. The author also discusses this approach for dataframe reductions in a recent publication <cit.>. At the end of the initial local operation, the output dataframe size (in each worker) is O(n𝐂). Therefore, the total cost per worker would be, T_comb_shuf_red = T_core(n) + O(n𝐂) + O(P) + O(P-1/P× n𝐂) + T_core(n𝐂) §.§.§ Globally Reduce This pattern is most commonly seen in dataframe Column-Aggregation operators. It is similar to the embarrassingly parallel pattern but requires an extra communication step to arrive at the final result. For example, calculating the column-wise requires a local summation, a global reduction, and a final value calculation. LocalOp→ Allreduce→ Finalize Some utility methods such as distributed length and equality also follow this pattern. For large data sets, the complexity of this operator is usually governed by the computation rather than the communication. §.§.§ Halo Exchange This pattern is observed in window operations. A window operation performs an aggregation over a sliding partition of values. Pandas API supports rolling and expanding windows. For row partitions, the windows at the boundaries would have to communicate with their neighboring partitions and exchange partially computed results. The amount of data sent/received is based on the window type and individual length of partitions. §.§.§ Broadcast Compute Broadcast compute is a scaled-down pattern from shuffle-compute. Rather than shuffling, certain operators like broadcast-join can use broadcasting. This strategy only becomes useful when there is a smaller relation so that it can be broadcasted without shuffling the large relation. It reduces communication overhead significantly. 
However, broadcast-joins would perform poorly if the relations were of the same order. This effect was observed in Modin <cit.>, where out-of-memory errors are reported even for moderately large datasets because it only employs broadcast joins. Broadcast→LocalOp §.§.§ Partitioned I/O Partitioned Input parallelizes the input data (CSV, JSON, Parquet) by distributing the files to each executor. It may distribute a list of input files to each worker evenly. Alternatively, it receives a custom one-to-many mapping from the worker to input file(s). It reads the input files according to the custom assignment. For Parquet files, Partitioned Input tries to distribute the number of rows to each partition as evenly as possible when metadata is present. Suppose an executor does not receive data from reading. In that case, it constructs an empty dataframe with the same schema as the other partitions. In Partitioned Output, each executor writes its partition dataframe to one file. §.§ Runtime Aspects §.§.§ Cardinality Equality of rows governs the Cardinality of a Dataframe 𝐂, which is the number of unique rows relative to the length. Therefore, 𝐂∈ [1/N, 1], where 𝐂=1/N rows are identical and 𝐂=1 all rows are unique. In the Combine-Shuffle-Reduce pattern, the initial local operation has the potential to reduce communication order to n^' < n. This gain depends on the Cardinality (𝐂) of the dataframe 𝐂∈ [1/N, 1], which is the number of unique rows relative to the length. 𝐂∼1/N n^' n, making the combine-shuffle-reduce much more efficient than a shuffle-compute. Consequently, when 𝐂∼ 1 n^'∼ n may in fact worsen the combine-shuffle-reduce complexity. In such cases, the shuffle-compute pattern is more efficient. This incident is very evident from the cost model. T_comb_shuf_red = T_core(n) + O(n𝐂) + O(P) + O(P-1/P× n𝐂) + T_core(n𝐂) vs T_shuf_comp = O(n) + O(P) + O(P-1/P× n) + T_core(n) When, 𝐂→ 1 T_comb_shuf_red→ T_shuf_comp, and in fact, it is worse because the core local operation would have to be carried out twice. §.§.§ Data Distribution Data distribution heavily impacts the partitioning operators. Some executors may be underutilized when unbalanced partitions exist, affecting the overall distributed performance. Work-stealing scheduling is a possible solution to this problem. In a BSP environment, pseudo-work-stealing execution can be achieved by storing partition data in a shared object-store. Furthermore, some operations could employ different operator patterns based on the data distribution. For instance, when one relation is very small by comparison, Join could use a (broadcast-compute) rather than a hash-shuffle join (shuffle-compute) to achieve better performance. §.§.§ Out-of-Core Execution Currently, is limited by the memory available to the workers. With the data immutability guarantees, it always allocates new memory for the columns that get modified. Therefore, loosely synchronous patterns may require a workspace of 3-4× the size of the table. This could be a challenging requirement for memory-constrained environments and limits the dataset size we could process. Therefore, the system needs to be able to execute operators out-of-core. §.§.§ Logical Plan Optimizations A typical SQL query may translate to multiple Dataframe operators, and the application script can include several such queries. Semantically, these operators construct a DAG (directed acyclic graph) or a logical plan. 
SQL and data engineering engines generate an optimized logical plan based on rules (ex: predicate push-down) or cost metrics. While these optimizations produce significant gains in real-life applications, this is an orthogonal detail to the individual operator patterns we focus on in this paper. § EXPERIMENTS To evaluate the performance of distributed-memory execution model, we have conducted the following experiments. * Communication and computation breakdown of operators for strong and weak scaling * Running in Oak Ridge National Laboratory Summit supercomputer * Comparing performance against the state-of-the-art data processing systems For the following experiments, uniformly random distributed data was used with two columns in column-major format (Fortran order). Data uses a cardinality of 90% (i.e. 90% of rows are unique), which constitutes a worst-case scenario for key-based operators (eg. join, sort, groupby, etc). The main focus of these experiments is to micro-benchmark the distributed operator implementation. Using a generated dataset allows the input dataset to be uniformly distributed and thereby evaluate the true performance of the kernels. Barthels et al. followed a similar approach to evaluate distributed join kernels <cit.>. §.§ Communication & Computation These experiments were carried out on a 15-node Intel® Xeon® Platinum 8160 cluster. Each node comprises 48 hardware cores on two sockets, 255GB RAM, and SSD storage, and is connected via Infiniband with 40Gbps bandwidth. Figure <ref> shows communication and computation time breakdown for operation for a strong scaling test (1B rows per table). Moreover, Figure <ref> shows the same for a weak scaling test (25M per worker per table). Out of many operators, s have the most communication overhead, as it is a binary operator (2 input DFs). In the strong scaling plot, even at the smallest parallelism (32), there is a significant communication overhead (Gloo 27%, MPI 17%, UCX 17%), and as the parallelism increases, it dominates the wall time (Gloo 76%, MPI 86%, UCX 69%). Unfortunately, the author needed more expertise in the Spark, Dask, or Ray DDF code base to run a similar micro-benchmark. This experiment shows that communication plays a significant role in dataframe operator implementation. Despite using libraries specialized for message passing, still encounters significant communication overhead. Therefore, careful consideration must be given to communication while developing distributed dataframe runtimes. The weak scaling plot can further analyze the impact of communication performance. The work per process is fixed; therefore, we should see a flat graph. However, as we see in Figure <ref>, the time increases along the parallelism axis, indicating that the communication overhead increases. The graph on the right plots each stage (log-log). The local join computation is relatively flat, while both shuffle stages (left & right) show a linear increase. §.§.§ Examining the results using the cost model By looking at the cost model in Section <ref>, the cost of would be, T_shuffle = O(P-1) + O(P-1/P× n) T_join(sort) = O(P-1) + O(P-1/P× n) + O(n) + O(nlog_n) + O(n/𝐂) Substituting n=N/P, T_join(sort) = O(P-1) + O(P-1/P×N/P) + O(N/P) + O(N/Plog_N/P) + O(N/P𝐂) For strong scaling, N is constant. Therefore, as P increases, the components that depend on n (in computation and communication) reduce. This results in a downward trend in wall time. 
However, the O(P-1) component (coming from the communication cost) overtakes the gains of reducing n. This explains the increase in wall time at higher parallelisms. Similarly, for weak scaling, n is kept constant, which reduces the cost to O(P-1) + O(P-1/P). For the parallelism values tested in the experiments (Figure <ref>), this explains the increasing wall-time values and linear upward trends in shuffle timings. Even though the amount of data transferred per worker remains constant (n), the cost model does not account for network congestion. This could explain the increasing gradient at higher parallelisms. In the following sections, we will see that outperforms the state-of-the-art data engineering systems available today. However, the weak scaling indicates that still needs to improve on communication operator performance (such as shuffle). It would be worthwhile evaluating other algorithms such as Pairwise Exchange <cit.>, Bruck <cit.>/ Modified Bruck <cit.>, etc., that have better time complexity as the parallelism increases. Another option would be to completely offload the shuffle implementation to the communication library (MPI, Gloo, UCX) and let the library decide which algorithm to choose based on runtime characteristics. §.§ on ORNL Summit Supercomputer was run on the Summit supercomputer at Oak Ridge National Laboratory (ORNL) as a part of large-scale testing. Each node in Summit consists of two IBM POWER9 processors and six Nvidia Tesla V100 accelerators, and there are 4,600 of these nodes available for computation, reaching a theoretical peak double-precision performance of approximately 200 PF. Each node consists of 512 GB of RAM and 42 hardware cores. Figure <ref> shows the architecture of a single node in Summit. For workloads, only the CPU nodes were used. §.§.§ Setting up in Summit Setting up the environment on Summit proved to be a tedious undertaking. Generally, is installed via a Conda Python environment <cit.>, which conveniently installs dependencies using the official Anaconda packages. However, due to the Summit node hardware architecture, some of these default packages were failing unexpectedly. Most notably, we encountered memory allocation errors from the Apache Arrow library. Since this is an essential requirement for , we had to rebuild Apache Arrow natively on the Summit hardware architecture. This was done with the native installation script, which uses a PyPI () environment <cit.>. Additionally, the Summit supercomputer uses its own MPI implementation based on IBM Spectrum MPI <cit.>. At the time, was tested on OpenMPI and Microsoft MPI only, and therefore, several minor changes were required to properly link with the Summit MPI modules. The recommended way of using custom software on Summit is to create a module and load it (with dependencies) in batch scripts. However, this requires advanced expertise in Summit package management. We bypassed this requirement by installing and its dependencies into a PyPI environment using a login node. This PyPI environment resides in the user space of the file system. When submitting a batch job, we activate this environment and run our script. Following is an example batch script for a workload.

#!/bin/bash
#BSUB -P <project name>
#BSUB -W 1:30
#BSUB -nnodes 8
#BSUB -alloc_flags smt1
#BSUB -J cylonrun-s-8
#BSUB -o cylonrun-s-8.
#BSUB -e cylonrun-s-8.

module load python/3.7.7 gcc/9.3.0
source $HOME/CYLON/bin/activate
BUILD_PATH=$HOME/cylon/build
export LD_LIBRARY_PATH=$BUILD_PATH/arrow/install/lib64:$BUILD_PATH/glog/install/lib64:$BUILD_PATH/lib64:$BUILD_PATH/lib:$LD_LIBRARY_PATH
time jsrun -n$((8*42)) -c 1 python $HOME/cylon/summit/scripts/cylon_scaling.py -n 9999994368 -s s

Both installation and batch scripts are available in the GitHub repository <cit.>. §.§.§ Strong Scaling A strong scaling experiment was carried out on operation of two 10 billion row tables. The size of each table is around 160 GB. The parallelism was increased from 4 nodes (4×42=168 cores) to 256 nodes (256×42=10,752 cores). Figure <ref> plots the results on a log-log scale. Figure <ref> shows the 10 billion rows per table experiment. As the parallelism increases from 168 to 2688, the wall time reduces almost linearly with fairly consistent timings. However, from thereon, the timings take a drastic turn and show a higher variance. From 5,376 onward, the computation component is less than 2 million rows per table per core. Therefore, communication would dominate the final wall time. To further analyze this scenario, another 50 billion rows per table experiment was carried out (Figure <ref>). There, smaller parallelism experiments were unsuccessful due to memory limitations. However, for higher parallelisms, the wall time reduces fairly linearly, as expected. This indicates that, as long as the computation dominates the communication, performance gains can be achieved by adding more resources. For the 50 billion case, the inflection point would occur at a parallelism higher than 10,752. §.§.§ Weak Scaling A weak scaling experiment was carried out again on operation. The intention was to fully utilize the memory available in the node allocation. Considering the 512 GB RAM and 42 cores per node, it was decided to use 50 million row tables per core. The number of cores was increased from 1 to 10,752, where the last experiment joins more than 1 trillion rows from the two tables. The results are depicted in Figure <ref>. As we saw in the previous weak scaling experiments, the wall time increases with parallelism. This is not ideal for a weak scaling plot; the main culprit for this increase is the shuffle communication overhead. However, was able to successfully process more than 17 terabytes (TB) of data across 10,752 cores, which is a commendable achievement. The throughput of the operation steadily increases to close to 12 million tuples/second. §.§ vs. the State-of-the-art In order to evaluate the performance of the distributed-memory execution model discussed in this paper, we performed a strong scaling analysis on several state-of-the-art distributed dataframe systems that are described in the related work section (Section <ref>). Experiments were also carried out on Pandas <cit.> to get a serial performance baseline. The following frameworks were considered. We tried our best to refer to publicly available documentation, user guides, and forums while carrying out these tests to get the optimal configurations. * Dask Distributed Dataframes v2022.8 * Ray Datasets v1.12 * Modin Distributed Dataframe v0.13 * Apache Spark (Pandas-on-Spark) v3.3 We have carried out similar strong scaling analyses in the precursor publication <cit.> and several others <cit.>. In this publication, the results have been updated to the latest versions of the software and their dependencies.
The same 15-node Intel® Xeon® Platinum 8160 cluster described in Section <ref> was also used for these experiments. The following dataframe operator patterns were used for the experiments. When evaluating large-scale data engineering use cases (eg. TPC benchmarks <cit.>, Deep Learning Recommendation Model (DLRM) preprocessing <cit.>, etc) and based on our prior experience, these operator patterns <cit.> consume the majority of the computation time. * Shuffle Compute - Join operator * Combine Shuffle Reduce - GroupBy operator * Sample Shuffle Compute - Sort operator Figure <ref> depicts two sets of strong-scaling experiments. Left column represents tests on one billion-row dataset with all systems, while the Right column represents a smaller 100 million-row dataset with , Dask, and Spark systems. was using the UCX/UCC <cit.> communicator, as it shows the best distributed performance. Unfortunately, several challenges were encountered with running tests on Ray Datasets. It only supports unary operators (single input) currently. Therefore it has been omitted from Join experiments. Moreover, Ray did not complete within 3 hours, and did not show presentable results. Several issues came up with Modin as well. It only supports implementation, which performs poorly on two similar-sized dataframe Join. Only the Ray backend worked well with the data sets. Another observation was that Modin defaults to Pandas for Sort (ie. limited distributed scalability). The one billion-row strong scaling timings show that shows better scalability compared to the rest. Dask & Spark Datasets show commendable scalability for Join and Sort, however the former displays very limited scalability for GroupBy. A 100 million row test case (right column of Figure <ref>) was performed to investigate Dask & Spark further. This constitutes a communication-bound operation because the partition sizes are smaller. This reduces the computation complexity, however, these smaller partitions need to be communicated across the same number of workers. Under these circumstances, both Dask and Spark diverge significantly at higher parallelisms, indicating limitations in their communication implementations. There was a consistent anomaly in Spark timings for 8-32 parallelism. We hope to investigate this further with the help of the Spark community. We also observe that the serial performance of outperforms the rest consistently, which could be directly related to 's C++ implementation and the use of Apache Arrow format. At every parallelism, distributed performance is2-4×higher than Dask/Spark consistently. These results confirm the efficacy of the proposed distributed execution model in this paper. § RELATED WORK In a previous publication, we proposed a formal framework for designing and developing high-performance data engineering frameworks that include data structures, architectures, and program models <cit.>. Kamburugamuve et al proposed a similar big data toolkit named Twister2<cit.>, which is based on Java. There, the authors observed that using a BSP-like environment for data processing improves scalability, and they also introduced a DF-like API in Java named TSets. However, being developed in C++ enables the native performance of hardware and provides a more robust integration to Python and R. In parallel to , Totoni et al also suggested a similar HP-DDF runtime named HiFrames<cit.>. They primarily attempt to compile native MPI code for DDF operators using . 
While there are several architectural similarities between HiFrames and , the latter is the only open-source high-performance distributed dataframe system available at the moment. Dask <cit.> is one of the pioneering distributed dataframe implementations out there. It provides a Pandas-like API and is built on top of the Dask distributed execution environment. CuDF <cit.> extends this implementation in Dask-CuDF to provide distributed dataframe capabilities in Nvidia GPUs. Modin <cit.> is another dataframe implementation built on top of Dask and Ray. It provides an API identical to Pandas so that existing applications can be easily ported to a distributed execution. Apache Spark <cit.> also provides a Pandas-like DDF named Pandas on Spark. In addition to these systems, we would also like to recognize some exciting new projects. Velox is a C++ vectorized database acceleration library managed by the Meta Inc. incubator <cit.>. Currently, it does not provide a DF abstraction, but still offers most of the operators shown in Figure <ref>. Photon is another C++-based vectorized query engine developed by Databricks <cit.> that enables native performance to the Apache Spark ecosystem. Unfortunately, it has yet to be released to the open-source community. Substrait is another interesting model that attempts to produce an independent description of data compute operations <cit.>. § LIMITATIONS AND FUTURE WORK currently covers about 30% of the Pandas API, and more distributed operators are being added, significantly, Window operators. Furthermore, the cost model for evaluating dataframe operator patterns has allowed us to identify areas of improvement. For example, communication operations could be improved by introducing algorithms that have lower latency costs. Additionally, in Section <ref> we saw significant time being spent on communication. These observations can be further analyzed using MPI profiler tools (eg. TAU - Tuning and Analysis Utilities, LLNL mpiP, etc.) and distributed debugging tools (eg. Arm/Linaro DDT, etc). Some of these tools are available in the Summit supercomputer, which could give an in-depth look at the communication bottlenecks. In modern CPU hardware, we can perform computation while waiting on communication results. Since an operator consists of sub-operators arranged in a DAG, we can exploit pipeline parallelism by overlapping communication and computation. Furthermore, we can also change the granularity of a computation such that it fits into CPU caches. We have made some preliminary investigations on these ideas, and we were able to see significant performance improvements for . Providing fault tolerance in an MPI-like environment is quite challenging, as it operates under the assumption that the communication channels are alive throughout the application. This means providing communication-level fault tolerance would be complicated. However, we are planning to add a checkpointing mechanism that would allow a much coarser-level fault tolerance. Load imbalance (especially with skewed datasets) could starve some processes and might reduce the overall throughput. To avoid such scenarios, we are working on a sample-based repartitioning mechanism. § CONCLUSION We recognize that today's data science communication operations could be improved by introducing algorithms that have lower latency costs. The data science community requires scalable solutions to meet its ever-growing data demand. 
Dataframes are at the heart of such applications, and in this paper, we discussed a cost model for evaluating the performance of distributed dataframe operator patterns introduced in our prior publication <cit.>. We also extended the execution model described in the previous work, by introducing a communication model. With these additions, we strongly believe we have presented a comprehensive execution model for distributed dataframe operators in distributed memory environments. Additionally, we presented , a reference runtime developed based on these concepts. We use the proposed model to analyze the communication and computation performance and identify bottlenecks and areas of improvement. We also showcased the importance of this work by conducting large-scale experiments on the ORNL Summit supercomputer where it showed admirable scalability in both strong and weak scaling experiments. also showed superior scalability compared to the state-of-the-art distributed dataframe systems, which further substantiates the effectiveness of the execution model presented in this paper. § ACKNOWLEDGMENTS We gratefully acknowledge the support of NSF grants 2210266 (CINES) and 1918626 (GPCE).
http://arxiv.org/abs/2307.02641v1
20230705201657
Active Class Selection for Few-Shot Class-Incremental Learning
[ "Christopher McClurg", "Ali Ayub", "Harsh Tyagi", "Sarah M. Rajtmajer", "Alan R. Wagner" ]
cs.RO
[ "cs.RO", "cs.CV", "cs.LG" ]
Valley-controlled transport in graphene/ WSe_2 heterostructures under an off-resonant polarized light M. Tahir August 1, 2023 ===================================================================================================== For real-world applications, robots will need to continually learn in their environments through limited interactions with their users. Toward this, previous works in few-shot class incremental learning (FSCIL) and active class selection (ACS) have achieved promising results but were tested in constrained setups. Therefore, in this paper, we combine ideas from FSCIL and ACS to develop a novel framework that can allow an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment. To this end, we build on a state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS literature. We term this model Few-shot Incremental Active class SeleCtiOn (FIASco). We further integrate a potential field-based navigation technique with our model to develop a complete framework that can allow an agent to process and reason on its sensory data through the FIASco model, navigate towards the most informative object in the environment, gather data about the object through its sensors and incrementally update the FIASco model. Experimental results on a simulated agent and a real robot show the significance of our approach for long-term real-world robotics applications. § INTRODUCTION A primary challenge faced by robots deployed in the real world is continual adaptation to dynamic environments. Central to this challenge is object recognition <cit.>, a task typically requiring labeled examples. In this work, we address the problem of parsimonious object labelling wherein a robot may request labels for a small number of objects about which it knows least. In recent years, several works have been directed toward the problem of Few-Shot Class Incremental Learning (FSCIL) <cit.> to develop models of incremental object learning that can learn from limited training data for each object class. The literature has made significant progress toward developing robots that can continually learn new objects from limited training data while preserving knowledge of previous objects. However, existing methods make strong assumptions about the training data that are rarely true in the real world. For example, FSCIL assumes that in each increment the robot will receive a fully labeled image dataset for the object classes in that increment, and the robot will not receive more data for these classes again <cit.>. In real world environments, however, robots will most likely encounter many unlabeled objects in their environment, and they will have to direct their learning toward a smaller subset of those unknown objects. Active learning is a subfield of machine learning that focuses on improving the learning efficiency of models by selectively seeking labels from within a large unlabeled data pool <cit.>. Related to active learning is active class selection (ACS) in which a model seeks labels for specific object classes <cit.>. ACS can allow autonomous robots operating in real-world environments to focus their learning objects about which they know least. Most ACS models, however, have been designed for batch learning, i.e., they require all the previous training data to be available when learning in an increment <cit.>. 
Further, both active learning and ACS techniques have previously been tested on static datasets rather than with real agents/robots <cit.>. In this paper, we combine ideas from ACS and FSCIL to develop a framework that can allow an autonomous agent roaming in its environment to continually adapt by learning about the most informative objects through interaction with its human users. Toward this, we build on a state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS literature. We term this model Few-shot Incremental Active class SeleCtiOn (FIASco). We further integrate a potential field-based navigation technique with our model to develop a complete framework that can allow an agent to process and reason about its sensory data, navigate towards the most informative object in the environment, gather the data for the object through its sensors and incrementally update the FIASco model. We perform extensive evaluations of our approach in a simulated Minecraft environment and with a real robot in a laboratory setting. The main contributions of the paper are as follows: (1) We develop a novel framework extending FSCIL techniques with ideas from ACS and integrating it with autonomous agents. (2) Our experiments on a simulated and a real autonomous agent demonstrate the effectiveness and applicability of our framework for the long-term deployment of robots in real-world environments. Our code is available at <https://github.com/chrismcclurg/FSCIL-ACS>. § BACKGROUND Class-incremental learning (CIL) considers the problem where labeled data is provided to the learner in increments of full classes. When applied to neural networks, CIL results in catastrophic forgetting, where the model forgets the previously learned classes and classification accuracy erodes <cit.>. A limitation of recent CIL methods is the reliance on storing a portion of the data from prior classes when learning new classes <cit.>. These methods, often storing high-dimensional images, are not practical in situations when the system has limited memory. To avoid storing real images, some CIL methods use a regularization loss term to prevent the weights of the model from changing drastically when learning new classes  <cit.>. Other CIL methods regenerate images of the old classes with generative models <cit.>. In a preliminary experiment, we compare the performance of a recent clustering approach (CBCL-PR) <cit.> against three popular CIL algorithms in a few-shot class-incremental learning setting: iCARL, PODNet, and DER. iCaRL <cit.> stores exemplars in memory, uses a regularization term called distillation-loss <cit.>, and Nearest Class Mean (NCM) to classify data <cit.>. PODnet <cit.> stores proxy vectors in memory, uses a spatially-based distillation-loss, and also uses an NCM classifier. DER <cit.> uses a two-stage approach that freezes previously learned representations and then augments the model with features from a fine-tuned extractor. The results of this preliminary experiment are contained in the appendix. Few-shot class-incremental learning (FSCIL) adapts the class-incremental learning problem by limiting the number of training examples per class. Specifically, the data is first divided among training and test sets such that x_i ∈ (X^train∪ X^test), y_i ∈ (y^train∪ y^test). Then the training data is divided into increments x_i^train∈ (D_0^train∪ D_1^train∪...D_n^train), y_i^train∈ (C_0 ∪ C_1 ∪...C_n) such that each increment is composed of a unique set of classes (i.e., ∀i,j∋ i ≠ j, C_i ∩ C_j = ∅). 
In the i-th increment, the model only trains on the corresponding training data {D_i^train, C_i}. The model is then evaluated on a test set that includes all classes seen so far (i.e., {⋃_j=1^i D_j^test, ⋃_j=1^i C_j}). The base increment D_0^train contains N_b full classes. A problem setting which contains N classes per increment and k examples per class is known as N-way k-shot learning. In FSCIL, the problem is typically formatted with 100 full classes in the first increment, and then 10-way 5-shot learning for the remaining increments <cit.>. In a preliminary experiment, we compare the performance of CBCL-PR <cit.> against five other FSCIL algorithms: TOPIC, SPPR, Decoupled-DeepEMD, CEC, and FACT. TOPIC <cit.> represents knowledge with a neural gas network in order to preserve the topology of the feature space. SPPR <cit.> uses prototype learning, including random episode selection to adapt the feature representation and a dynamic relation projection between old and new classes. Decoupled-DeepEMD <cit.> decouples the training of the embedding and the classifier; the embedding is trained on the initial increment of 100 full classes, while the subsequent increments replace class-specific classifiers with new mean embeddings. CEC <cit.> trains an additional graph model to adapt prototypes of old and new classes. FACT <cit.> is the current state-of-the-art, which uses prototypes to limit the embedding space of old classes, reserving space for new classes. The results of this preliminary experiment are contained in the appendix. Active Class Selection (ACS) considers the problem where the learner can improve learning efficiency by requesting more data from a specific class <cit.>. In prior work, ACS was piloted to enable an artificial nose to efficiently learn to discriminate vapors <cit.>. In a batch learning setting, the learner used feedback from the previous batch to influence the class distribution among samples in the next batch. A recent approach to ACS, PAL-ACS, demonstrated high performance by generating pseudo-examples, transforming an ACS problem into an active learning problem <cit.>. This study was, however, limited to synthetic data. Active incremental learning considers the problem where incremental learning and active learning are combined. In active learning, the learner may actively request labels for training data. One study assumed labels are no longer provided in the CIL setting <cit.>. Another study allowed a learner to incrementally select points for labeling from a point cloud <cit.>. A third study allowed a learner to incrementally select examples for annotation by a human expert <cit.>. In these studies, the incremental learner selects training data to label, which defines the active learning problem. In contrast, this paper uses incremental learning to select classes to receive additional training instances, which is an active class selection problem. § MODEL DESCRIPTION Our goal is to develop a model (FIASco) that can not only learn incrementally, but can also select, from the observed classes in a novel environment, which classes should receive more training instances. This problem is a modified class-incremental learning problem, wherein the next training class is determined by environmental availability and agent affinity. To learn incrementally, we ran preliminary experiments (see appendix) that identified CBCL-PR <cit.> as the most promising approach for this problem. 
The identified approach not only produces SOTA results on few-shot incremental learning benchmarks, but also represents object classes as clusters, which have intrinsic statistics that can be used to to select the next training class in an environment. An overview of the model is shown in Figure <ref>. In this section, we describe the components of FIASco, including incremental learning with clustering (Section <ref>), active class selection with cluster statistics (Section <ref>), and navigation using a potential field created by cluster-averaged statistics of the observed classes in the environment (Section <ref>). §.§ Incremental Learning with Clusters In each increment, the learner receives the training examples (images) for new classes. Feature vectors of the images are generated using a pre-trained convolutional neural net as a feature extractor. Clusters are created from feature vectors that are within a tolerable distance of one another, enabling discrimination between classes and consolidation of these classes into long-term memory. For more details of this clustering approach, please see the appendix or related literature <cit.>. §.§ Active Class Selection with Cluster Statistics We extend the learning approach to use feedback from cluster statistics. Specifically, the cluster space allows for measures – cluster weight, class weight, and cluster variance – to guide the selection of new samples for training. Cluster weight is the number of training examples included in an individual cluster within a class. Likewise, class weight is the number of training examples per class. Cluster variance is calculated in a recursive manner such that prior training data is not needed. As defined by Welford's method, the n-th update (n>1) of a cluster's variance is s_n^2  <cit.>: (n-1) s_n^2 - (n-2) s_n-1^2 = (x_n-x̅_n)(x_n-x̅_n-1) These internal measures give direct feedback for active class selection (ACS). Recall that previous ACS methods use results from the previous batch as feedback to specify the distribution of classes in the next batch. In incremental learning, the learner does not control the size of new batches. Therefore, class selection is instead an ordering of preferred classes: * Low Class Weight: Prioritize classes with lower class weight. The intuition for this ordering is that adding instances to a class with fewer instances will likely add useful information (new clusters), increasing overall accuracy. * Low Cluster Weight: Prioritize classes with lower average cluster weight. The intuition for this ordering is that adding instances to classes with undeveloped clusters (outliers) will be more likely to impact (shift/ add weight to) the class-specific space, increasing overall accuracy. * Low Cluster Variance: Prioritize classes with lower average cluster variance. The intuition for this ordering is that adding instances from classes with less noise will likely add valuable information with minimal overall noise. * High Cluster Variance: Prioritize classes with higher average cluster variance. The intuition for this ordering is that adding instances from classes with more uncertainty will likely provide more distinct clusters within the class. To further illustrate these measures, consider a distribution of two classes of data, as shown in Figure <ref>. Each instance of data is initially plotted in the two-dimensional vector space (left). The clustering process (middle) extracts useful cluster information, such as weight and variance. 
Finally, the extracted information can be cluster-averaged for each class of data (right). Which class of data should be requested next for the purpose of training? According to the low class weight metric, class A should be requested (4.0 < 7.0). According to the low cluster weight metric, class B should be requested (3.5 < 4.0). For low cluster variance, class A should be requested (0.5 < 1.3). Of course, class B should be requested for high cluster variance (1.3 > 0.5). §.§ Navigation from Active Class Selection Integrating our incremental ACS approach into an autonomous agent requires a method for navigating towards the most informative data samples. The selected method for navigation was a potential field approach, simplified from <cit.>. Figure <ref> shows a potential field created from agent observations in the simulation. Motivated to apply these methods on a real robot that can make some inference about distal objects (d ≤ d_far) and then identify objects at a closer distance (d ≤ d_close < d_far), the learner is given similar characteristics. In the experiments, the distances for class identification and feature extraction were set to d_far and d_close, respectively. Objects within distance d_far would be included in the learner's internal potential field, where the true class label would be known by the robot (i.e., close enough to ask a person for the true labels). For the i-th object in the potential field, an attractive or repulsive force f_i was assigned based on the order of class priority determined in ACS. The potential field is then defined by equation (<ref>), where the i-th observation is made at (x_i, y_i) and the robot position is (x_0, y_0). Objects are only learned when the robot is within the distance d_close, where an image can be taken and features extracted for training. (F_x, F_y) = (∑_i=1^n_i f_i/(x_i - x_0), ∑_i=1^n_i f_i/(y_i - y_0)) A common problem with potential fields is that the agent can get stuck in a local minimum. Past solutions for this problem have included adding small, random perturbations or adjusting the gain of a particular contribution to the potential field <cit.>. In our simulated experiment, the number of time steps spent within a relative location is counted. If the learner exceeds a specified count limit, it is directed back to the start position. Every time the learner returns to the start position, it is sent in a new direction (i.e., if the learner came from the North, it is randomly sent East, South, or West). In our experiment with a real robot, sensor error also presented problems. That is, not only is there the possibility of getting stuck in a local minimum, but an undetected obstacle could also prevent movement of the robot. To mitigate the effects of sensing error, rather than use a continually adapting potential field, the robot observed its surroundings once, then used A* path planning <cit.> to get to the location of the selected class. This navigation method is less well suited to actively selecting classes; please see the results from Section <ref> for a discussion, or the appendix for more information on the A* method. Figure <ref> shows the A* path planning from robot observations in the environment. § EXPERIMENT: FSCIL-ACS IN MINECRAFT Our first experiment is an image classification task within the Minecraft simulation environment. We aim to show that a simulated robot can use internal feedback based on what it has learned about the environment (cluster space) to more efficiently seek unknown objects in the environment. 
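Before describing the experiments, the sketch below shows one way the clustering, class-selection and navigation components described above could fit together. All names are our own, the per-dimension running variance is averaged into a single scalar per cluster, and the force values f_i would come from the attraction/repulsion splits described in the appendix; none of this is taken from the released FIASco code.

```python
import numpy as np

class ClusterSpace:
    """Per-class clusters with running statistics (no stored training data)."""
    def __init__(self, distance_threshold):
        self.D = distance_threshold
        self.clusters = {}                 # class label -> list of {n, mean, m2}

    def add(self, label, x):
        x = np.asarray(x, dtype=float)
        cs = self.clusters.setdefault(label, [])
        if cs:
            dists = [np.linalg.norm(x - c["mean"]) for c in cs]
            k = int(np.argmin(dists))
            if dists[k] <= self.D:                      # update nearest cluster
                c = cs[k]
                c["n"] += 1
                delta = x - c["mean"]
                c["mean"] += delta / c["n"]             # weighted-mean centroid update
                c["m2"] += delta * (x - c["mean"])      # Welford running-variance term
                return
        cs.append({"n": 1, "mean": x.copy(), "m2": np.zeros_like(x)})

    def class_weight(self, label):
        return sum(c["n"] for c in self.clusters[label])

    def mean_cluster_weight(self, label):
        return float(np.mean([c["n"] for c in self.clusters[label]]))

    def mean_cluster_variance(self, label):
        return float(np.mean([np.mean(c["m2"] / (c["n"] - 1)) if c["n"] > 1 else 0.0
                              for c in self.clusters[label]]))

def rank_classes(space, mode):
    """Order known classes for active class selection (most preferred first)."""
    keys = {
        "low_class_weight":      lambda l: space.class_weight(l),
        "low_cluster_weight":    lambda l: space.mean_cluster_weight(l),
        "low_cluster_variance":  lambda l: space.mean_cluster_variance(l),
        "high_cluster_variance": lambda l: -space.mean_cluster_variance(l),
    }
    return sorted(space.clusters, key=keys[mode])

def potential_field_force(robot_xy, observations, force_of_class, eps=1e-6):
    """Net (F_x, F_y) on the robot following the force equation above;
    `observations` is a list of (x_i, y_i, class_i), `force_of_class` maps a
    class to its f_i (negative = attraction, positive = repulsion)."""
    x0, y0 = robot_xy
    fx = fy = 0.0
    for xi, yi, cls in observations:
        f = force_of_class[cls]
        fx += f / ((xi - x0) or eps)    # eps avoids division by zero when aligned
        fy += f / ((yi - y0) or eps)
    return fx, fy
```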
§.§ Experimental Setup Overview. A robot in Minecraft is given two minute intervals to search the environment for new visual examples of objects. The robot navigates with an internal potential field, created from objects within an observable distance (d < d_far = 15). The robot can observe visual examples of an object only when it stands over that object (d < d_close = 1). After the interval of searching, the robot processes the visual examples by updating its cluster space (FIASco) or re-training on all of the previous training data (SVM). Finally, the robot makes predictions on the test data (static subset of original dataset) and classification accuracy is recorded. The robot's affinity to different classes of items is updated using the ACS methods described in Section <ref> , which directly affects the future potential field for navigation. The experiment continues for 360 minutes. Please see the supplemental material for experiment replication notes. Baselines. Cluster-based ACS methods were compared with a batch learner using `uniform' and `redistricting' class selection. The `uniform' method randomly sets the class order so that all classes have an equal opportunity to be prioritized. The `redistricting' method uses cross-validation to determine the most volatile (changing predictions when new samples are added in the validation stage) classes to prioritize. The cluster-based ACS methods are described in Section <ref>. Note that `uniform' is also run for FIASco and that `high cluster variance' is most similar to the previous `redistricting' method without the time-consuming validation step. Environment. Minecraft was used because it offers a large number of items and user control to create maps, enabling a realistic, yet constrained spatio-temporal situation for an agent <cit.>. The experiment map (Figure <ref>, left) contained four buildings. These buildings housed four unique groups of classes, grouped by the similarity of class-averaged feature vectors (centroids). Within a building, items were randomly, uniquely assigned to one of the thirty containers (Figure <ref>, middle). These containers served as the link to real-world items. As an agent approached the location of a container, it would observe a certain type of Minecraft item. This observation was then mapped to a class of the training dataset. While in the proximity of a container, the agent could choose to learn about the class by standing directly over the container. In this case, the agent would receive a random 5-9 instances of a class for training, after which the container would be empty. The container does not restock until the next round of exploration, after the agent trains and updates its class affinity. Data. Two datasets were used for training and testing of the image classifier: CIFAR-100  <cit.> and the Grocery Store <cit.> datasets. CIFAR-100 contains 60,000 32x32 images, evenly distributed among 100 classes. The classes include various types of objects, such as “beaver” or “rocket.” The Grocery Store dataset contains 5,125 348x348 pixel images, non-uniformly distributed among 81 classes. The classes include various goods found in grocery stores, such as types of fruits, vegetables, and packages. Both datasets were modified to have a 90:10 stratified train-test split. Please see the Appendix for more information about the data selection. Implementation. The fixed feature extractor in this experiment was a Resnet-34 model pre-trained with Imagenet. 
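As an illustration of this pipeline, a frozen ResNet-34 backbone turns each image into a 512-dimensional feature vector, which is then either clustered by FIASco or stored and refit by the batch SVM baseline. This is a sketch only; it assumes a recent torchvision/scikit-learn and standard ImageNet preprocessing, which may differ in detail from the authors' setup.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

# Frozen ImageNet-pretrained ResNet-34 used purely as a feature extractor.
backbone = models.resnet34(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()        # drop the classifier head -> 512-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()             # shape (len(images), 512)

# Batch-learning baseline: linear-kernel SVM retrained on all features seen so far.
svm = SVC(kernel="linear")
# svm.fit(stored_features, stored_labels); predictions = svm.predict(test_features)
```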
The test was run with ten random seeds and the average was determined. For clustering, the distance threshold D and number of pseudo-exemplars N_P were determined by validation. For the CIFAR-100 test, the values for D and N_P were set to 17 and 5, respectively. For the Grocery Store test, the values for D and N_P were 15 and 40, respectively. For batch learning, a support vector machine with a linear kernel was used <cit.> to make test predictions given all extracted features. §.§ Experimental Results Results are shown in Figure <ref>. The metric used for comparison was average incremental accuracy. Note that the accuracy computed in this experiment is different from the preliminary study: rather than testing over only seen classes, the learner is tested over all classes in the environment. The highest performer in the CIFAR-100 test was FIASco with `low class weight' ACS (44.2%), an improvement of 3.7% over the best case of batch learning `uniform' ACS. The highest performer in the Grocery Store test was FIASco using `low class weight' ACS (63.4%), an improvement of 5.3% over the best case of batch learning `uniform' ACS. § EXPERIMENT: FSCIL-ACS WITH PEPPER In the final experiment, a Softbank Pepper robot was tasked with an image classification in an indoor environment. We aim to demonstrate that a real robot can use active class selection to more efficiently seek unknown objects (see Figure <ref>). §.§ Experimental Setup Overview. The robot is given sixty iterations to search the environment for new visual examples of objects. An iteration consists of the robot (1) relocating, (2) searching, (3) choosing an object, and (4) receiving training examples. To relocate, the robot first rotates with range sensors to define a localized map; an end location is chosen among the free space, and A* path planning is used <cit.>. To search, the robot uses a top camera, which provides up to 2560x1080 pixel resolution at 5 fps. After taking images of the surrounding area, the robot uses the YOLO algorithm <cit.> pre-trained on the Microsoft COCO dataset <cit.> for object localization. To choose an object, the robot uses centroids for (initially weaker) classification with active class selection to pick the most desirable class. To receive training examples, the robot shows the human experimenter an image of the desired class, for which the human can give the true label of the predicted class, as well as ten visual examples. After every iteration, the robot updates its cluster space of learned classes. The robot's affinity to different classes of items is updated using the ACS methods. At the end of every three iterations, the robot makes predictions on the test data and classification accuracy is recorded. Baselines. Cluster-based ACS methods (Section <ref>) were compared with a batch learner using `uniform' class selection, which randomly sets the class order so that all classes have an equal opportunity to be prioritized. Environment. This test was completed in an indoor environment, where items were purchased from a local grocery store to represent classes in the Grocery Store dataset <cit.>. Black cloths were used to cover tables and serve as a backdrop for items. Please see supplemental materials for images of the included classes. Data. The Grocery Store dataset <cit.> was used for training and testing of the image classifier, as in Section <ref>. The continually-trained image classifier was used for object recognition of the real objects in the experiment. 
A subset of 41 classes of the Grocery Store dataset was used, comprising items that could be stored primarily at room temperature. The dataset was modified to have a 90:10 stratified train-test split. Real items were distributed randomly by their coarse labels, such that similar items were grouped together (e.g., Red Delicious and Yellow Delicious apples). Please see the Appendix for more information about the data selection. Implementation. The fixed feature extractor in this experiment was a Resnet-34 model pre-trained with Imagenet. For clustering, the distance threshold D and number of pseudo-exemplars N_P were determined by validation. For this test, the values for D and N_P were 15 and 40, respectively. For batch learning, a support vector machine with a linear kernel was used <cit.> to make test predictions given all extracted features. §.§ Experimental Results Results are shown in Figure <ref>. Figure: Test prediction accuracy over iterations in the indoor environment with Pepper. Note that the SVM classifier is a batch learner, while FIASco does not re-use training data. The metric used for comparison was average incremental accuracy. The accuracy computed in this experiment is the same as in Section <ref>: the learner is tested over all classes in the environment. The highest performer in the test was FIASco with `high cluster variance' ACS (60.7%), an improvement of 0.4% over the best case of batch learning with `uniform' ACS. While both experiments have a learner using the same measures to prioritize classes, there is a difference in the value of particular measures (e.g., high cluster variance). This difference is likely due to the slight change in process, where the robot learner makes an initially weak prediction about the detected object classes before requesting a class (see Section <ref>). Hence, a wrong prediction about a class with a high variance may actually provide valuable insight into the divisions of nearby classes. In terms of average incremental accuracy, the FIASco model does not show as much improvement over ACS with the SVM as in the simulated experiment. This result is likely due to the limitations in real navigation noted in Section <ref>. When the agent moved in simulation, the potential field was updated at every time step, calculating attractive weights for each new position of an observed class. In the real environment, the robot made one full turn to observe its surroundings, then followed a path prescribed by A*. As the robot moved, new class observations were not included as options for the robot. The reason for this change was that our particular robot was susceptible to drift error and sensor noise; we chose to reduce the sensing demand so that the robot would not get stuck as frequently. Note that the navigation method was kept constant within each experiment, so the comparison of ACS methods still holds. In future studies, it would be helpful to improve the robot controller so that the more reactive navigation method could be used. § CONCLUSION To the authors' knowledge, active class selection (ACS) has not previously been combined with few-shot incremental learning (FSCIL). This paper extends an incremental learner to use cluster statistics as feedback for actively selecting classes to learn. 
We have shown that the selected incremental learner (CBCL-PR) is not only state-of-the-art in a pure few-shot class incremental learning setting, but also that the cluster space is valuable for intrinsically motivating the learner to select specific classes. In both the Minecraft simulation and a real indoor environment, a robot that used cluster statistics for active class selection outperformed uniform batch learning. A challenge of any (machine) learner is to gather labeled data for supervised training. We lay the groundwork for more efficient gathering and usage of labeled data, relaxing previous assumptions that have hindered the feasibility of robot learning. As opposed to previous methods in FSCIL, we do not rely on a prescribed class order, nor require training on half the dataset prior to incremental learning. These assumptions are both unrealistic and not applicable to a robot learning in a new environment. As opposed to previous methods in ACS, we incorporate more recent incremental learning techniques such that the computational complexity is more favorable in the long term (see Appendix). Future work should build on the merging of active class selection and incremental learning. The most obvious reason is that it is critical to bridge the gap between robot and agent learning. Additionally, there is an opportunity to further advance the state-of-the-art in FSCIL-ACS. For instance, in the context of clustering, a combination of statistics could be used to guide class selection. More broadly, alternative internal measures could be used as feedback for class selection. Regardless, the advantages of combining ACS and FSCIL motivate a new direction for robot learning. §.§.§ Acknowledgments This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0197. § APPENDIX §.§ Preliminary Study Overview. Two settings for few-shot class-incremental learning are considered, denoted traditional and pure FSCIL. In traditional FSCIL, as described in Section <ref>, the learner receives N_b base classes with full data in the first session (t = 1), then incrementally learns on the remaining data using N classes per session with k examples per class (i.e., N-way, k-shot) in subsequent sessions (t > 1). The primary difficulties for learning in this setting include catastrophic forgetting and over-fitting due to class imbalance. In pure FSCIL, as introduced here, the learner receives no base classes of full data, but rather incrementally trains with N-way k-shot learning from the first session (t ≥ 1). The primary challenges of this setting include catastrophic forgetting and learning without a large portion of the dataset. Baselines. CBCL-PR was compared with nine other methods: CBCL, iCaRL, PODNet, DER, TOPIC, SPPR, Decoupled-DeepEMD, CEC, and FACT. The methods formulated for class-incremental learning (iCaRL, PODNet, and DER) were adapted from previous works <cit.> to limit the number of shots. FACT and CEC were re-run from the original code for the pure FSCIL setting. Note that TOPIC and Decoupled-DeepEMD did not have full code available for reproduction, while SPPR performed significantly worse with any reduction in N_b base classes, so these methods are excluded from the pure FSCIL comparison. Data. The Caltech-UCSD Birds 200 (CUB-200) image dataset was used for the preliminary study <cit.>. CUB-200 contains 11,788 images, uniformly distributed over 200 classes. The classes are different species of birds. Implementation. 
All methods used Resnet-18 pre-trained with Imagenet as a backbone. The preliminary study used ten random seeds. As in previous work <cit.>, traditional FSCIL incorporated 100 base classes and 10-way 5-shot incremental learning. The first session (t = 1) trained for 50 epochs with an initial learning rate of 0.1, decreased to 0.01 and 0.001 at 30 and 40 epochs, respectively. Subsequent sessions (t > 1) were trained with a learning rate of 0.01 for 100 epochs. The mini-batch size was 128. For pure FSCIL, no base classes were used, allowing 10-way 5-shot incremental learning from the first session. The learning rate was 0.01 for 100 epochs for all sessions (t ≥ 1). The mini-batch size was 64. Results. The left plot of Figure <ref> shows the traditional FSCIL setting. In this setting, the usual metric for comparison is performance decay (Δ y), a difference between first and last session incremental accuracy. This metric is consistent with the goal of few-shot class-incremental learning to prevent catastrophic forgetting; however, the results are not entirely clear when the lowest performance decay (best case) has a lower incremental accuracy (worst case) over a majority of the test. Attempts have been made to minimize this ambiguity by re-using the first session training or selecting hyper-parameters such that the first session has similar accuracy to baseline methods <cit.>. However, hyperparameters and full model architecture are not always shared, such that a major disadvantage of the setting is reproducibility <cit.>; moreover, it is unrealistic for a robot to train on half of the dataset before entering a new environment. Regardless, in traditional FSCIL, the best performer in performance decay is CBCL (-17.6%), an improvement of 2.4% over FACT. The newer CBCL-PR ranks third in terms of performance decay (-22.2%). It should be noted that other works have addressed more naturalistic learning paradigms, as opposed to the traditional FSCIL setting. For example Ren et. al define a new learning setting that is not based on episodes of training and testing but rather online, continual learning <cit.>. The right plot of Figure <ref> shows the pure FSCIL setting. The results are less dependent on the accuracy of the first session, which makes for a fairer overall comparison; furthermore, this setting is more realistic for robots learning in unknown environments. The primary metric for comparison in this setting was average incremental accuracy (y), which naturally considers performance decay (Δ y) along with the rate of decay. The best performer in this setting was CBCL-PR (48.4%), an improvement of 2.3% over CEC. §.§ CBCL-PR Algorithm Details CBCL-PR is an updated version of CBCL <cit.> and it is composed of the following primary components: fixed feature extractor, agg-Var clustering, and the generation of pseudo-exemplars for test prediction. In each increment, the learner receives the training examples (images) for new classes. Feature vectors of the images are generated using a pre-trained CNN feature extractor. With the exception of the preliminary study, the feature extractor was a Resnet-34 model pre-trained with ImageNet. The learner applies Agg-Var clustering on the feature vectors of new classes. Agg-Var clustering, inspired by the concept learning models of the hippocampus and the neocortex <cit.>, enables the learner to discriminate between classes and consolidate these classes into long-term memory. 
Within the cluster space, each new class is initialized by creating a centroid of a new cluster using the first feature vector in the training set. Next, each additional feature vector x_i^j (i.e., i-th image in class j) is compared to all the existing centroids for class j. If the Euclidean distance between x_i^j and the closest centroid is greater than a pre-defined distance threshold D, a new centroid is created for class j and equated to x_i^j. If the distance is less than D, the closest centroid is updated with a weighted mean: the n-th update (n>1) is calculated by equation (<ref>) below: n x̅_n = x_n + (n-1)x̅_n-1 Note that prior training data is not needed to calculate a new centroid, as the old centroid x̅_n-1 and new feature vector x_n=x_i^j are sufficient. This process results in a collection of centroids for the class j, C^j = {c_1^j, ..., c_N_j^j}, where N_j is the number of centroids for class j. In the update of the cluster space, covariance matrices of new clusters are recorded prior to the discarding of training data. These covariance matrices are used to generate a Gaussian distribution of pseudo-exemplars centered on their respective centroid. A linear SVM[For slightly higher accuracy, a shallow neural net was used in the preliminary study. Either classifier can be used.] is trained using pseudo-exemplars of the old classes and feature vectors of the new classes. During testing, feature vectors of the test images are generated using the pre-trained CNN feature extractor and passed through the linear SVM to classify test images based on their feature vectors. This process is shown in Figure <ref>. §.§ A* Algorithm Details In the final experiment, the FIASco algorithm was demonstrated on a Pepper robot navigating an indoor environment. A* path planning was used in order to get from point A (current location) to point B (goal location). The A* algorithm <cit.> is a common search method that incrementally extends the path until a goal state is reached (or until the maximum number of iterations have been attempted, i.e., failure). Specifically, the agent extends the path with the next node n minimizing the cost function (equation <ref>), where f(n) is the total cost, h(n) is the path distance from start, and g(n) is the estimated path distance to goal. f(n) = h(n) + g(n) In order to use this algorithm, the robot first used range sensors to compute a two-dimensional grid map. With an RGB camera, the robot localized objects and placed them on this two-dimensional grid map. Based on which object was most desirable, a goal location was selected. The A* algorithm was used to determine a path, given current and goal locations, and the two-dimensional map of the relative surroundings. §.§ Computational Cost Details The batch learner (SVM) and clustering approach (FIASco) both rely on a support vector machine with linear kernel to make test predictions. The data used to train the classifier is the same data type (thus, same dimension), so a comparison of computational complexity depends solely on the number of data points. For the batch learner, this collection of data includes every training instance the learner has collected to a given iteration. Conversely, the clustering approach of FIASco uses incoming data to update the centroids of the cluster space, for which the centroids (fewer than the total training instances) are the data points; however, the pseudo-exemplars should also be considered. 
Fortunately, the number of pseudo-exemplars are fixed per class and do not grow without bound. These observations can be seen in a plot of training time versus run time, shown in Figure <ref>. With the CIFAR-100 and Grocery store datasets, the FIASco only uses 5 and 40 pseudo-exemplars per class, respectively (see Section <ref>). Additionally, the agent is permitted to explore longer in the CIFAR-100 environment, since the dataset is much larger. These factors explain the trend of computational cost. Initially, the difference between training instances and centroids is minimal, as the agent sees many new objects, creating many new centroids for a majority of incoming data. Furthermore, as the agent has not explored much of the environment, the total number of training points is small, as compared to the total number of pseudo-exemplars. Of course, as time progresses, the impact of the pseudo-exemplars is reduced relative to the total number of data points. Here, clustering would see much benefit in the long term. §.§ Tabular Results This section includes the tabular results from the simulation (Section <ref>) and real-world (Section <ref>) experiments. §.§ Potential Field Details Regarding the forces used in the potential field, ACS methods were used to prioritize the order the classes. Then, the classes were divide by quartile. The top 25 percent of classes received the highest magnitude of attraction, the next 25 percent received the next highest attraction and so on. Table <ref> describes the different splits attempted. The actual splits used in the simulation experiment were those described as mod 1. Note that - and + reflect attraction and repulsion, respectively, while the integer that follows reflects magnitude. §.§ Dataset Notes Reason for datasets. The CIFAR-100 <cit.> dataset was chosen as a common benchmark in continual learning tasks <cit.>. The Grocery Store dataset <cit.> represents an ecologically valid dataset that is very close to a real-world dataset. The authors specify the data was collected from 18 different grocery stores with realistic features such as misplaced objects, varying distances, angles, lighting conditions, etc. Nature of training data. In both experiments, the data was modified to have a 90:10 stratified train-test split. The test data (10 percent) was used for the evaluation of the classifier at every iteration. In the simulation experiment, the learner selects from Minecraft objects within an observable distance. Each Minecraft object has a 1:1 mapping to classes in the offline dataset, either CIFAR-100 or Grocery Store. Thus, when the agent is near n Minecraft objects, it processes as being near n specific classes and selects a class. The learner receives training instances from the offline dataset corresponding to the selected class. In the real-world experiment, the learner detects real objects using an RGB camera, predicts the classes for which the objects belong with its classifier, and then selects from the predicted classes to train. The learner receives training instances from the offline dataset corresponding to the real object requested (from pointing). When the learner receives training instances from the offline dataset, it either stores the training features (SVM learner) or uses the training features to update the cluster space and ACS methods (FIASco). Future work. Evaluating these methods in a more cluttered or less structured environment could also be an interesting problem. 
Testing on such cluttered data for ACS has not been explored in most prior works <cit.>. This application might also require object segmentation or object detection, which is currently out of the scope of this work. We do note that grocery stores are highly structured environments in which products are naturally categorized and organized in a logical manner. Moreover, items are placed on the shelves in a reasonably uncluttered manner.
http://arxiv.org/abs/2307.01076v1
20230703145502
Analyzing Multiple-Choice Reading and Listening Comprehension Tests
[ "Vatsal Raina", "Adian Liusie", "Mark Gales" ]
cs.CL
[ "cs.CL" ]
Multiple-choice reading and listening comprehension tests are an important part of language assessment. Content creators for standard educational tests need to carefully curate questions that assess the comprehension abilities of candidates taking the tests. However, recent work has shown that a large number of questions in general multiple-choice reading comprehension datasets can be answered without comprehension, by leveraging world knowledge instead. This work investigates how much of a contextual passage needs to be read or heard in multiple-choice reading and listening comprehension tests, including tests based on conversation transcriptions, in order to work out the correct answer. We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage. These findings offer an approach for content creators to automatically capture the trade-off between comprehension and world knowledge required for their proposed questions. Index Terms: machine reading comprehension, listening comprehension, multiple-choice, automatic speech recognition, world knowledge § INTRODUCTION Multiple-choice reading and listening comprehension tests serve as essential tools for evaluating language proficiency in educational settings <cit.>. In particular, multiple-choice questions permit fast and automated objective assessment of candidates' abilities. The creation of these standardized tests necessitates the careful selection of questions that accurately assess candidates' comprehension abilities. It is of interest for content creators to develop a framework to categorize the quality of questions used in assessment across several criteria such as complexity and diversity <cit.>. However, recent work <cit.> has identified an issue within general multiple-choice reading comprehension datasets sourced from real tests: many questions can be answered correctly without language learners truly comprehending the passage, merely by relying on prior world knowledge. This work builds upon the concept of world knowledge in reading comprehension and aims to explore the extent to which contextual passages must be read/heard in multiple-choice reading/listening tests based on conversation transcriptions and listening comprehension assessments to deduce the correct answer. For example, a candidate may be able to deduce the correct answer to a large number of the comprehension questions by only reading the first sentence. Typically, language learners may not understand the whole context and may only partially comprehend the sentences. Figure <ref> demonstrates three multiple-choice questions with varying degrees of required comprehension. Full comprehension: the whole passage must be read in order to determine the correct answer. Partial comprehension: the correct answer can be deduced from reading only a small part of the context. Finally, zero comprehension: in the extreme case, the correct answer can be deduced without reading the context at all, by using world knowledge instead. 
For instance, in the zero comprehension example in Figure <ref>, without any need to read the context it is obvious that the answer is sick children as the question asks about charities. Information about the extent of comprehension required in reading and listening tests can act as a core component in the question assessment framework <cit.>. The degree of comprehension required can vary across the nature of the comprehension dataset. In this work, we consider a range of publicly available datasets that are very different in nature including commonsense-based reasoning, logical reasoning and multi-turn dialogue, speech transcriptions. We make the following contributions in this work: * Portability of world knowledge and partial comprehension systems from standard multiple-choice reading comprehension to dialogue and speech. * A thorough investigation of the degree of partial comprehension from zero comprehension (world knowledge) to full comprehension. We emphasize the need for content creators to carefully and explicitly consider the extent of comprehension required for the questions they generate in order to better capture how language learners may interact with the deployed questions in tests. § RELATED WORK <cit.> indicates world knowledge is prevalent in several standard multiple-choice reading comprehension systems, reinforcing whether machine reading comprehension systems fully leverage the context for the desired comprehension task <cit.>. <cit.> further introduces two performance metrics, effective number of options and mutual information of the context, to assess the extent to which world knowledge is used in these reading comprehension systems. We extend the work on world knowledge to investigate the spectrum between zero comprehension to full comprehension of real multiple-choice comprehension questions for text-based, dialogue-based and speech-based contexts. Previous work investigated automated approaches to assess the quality of comprehension questions. <cit.> present a framework to assess the quality of generated multiple-choice questions for comprehension. Four main qualities are identified: grammatical fluidity, answerability, diversity and complexity. Our work on assessing the extent to which the context needs to be read acts as an extension to this framework to capture the comprehensibility of the generated questions. Due to the lack of appropriately annotated speech corpora, several works investigate porting text-based systems for listening comprehension tasks. <cit.> explores applying a text-based question answering system on the TOEFL listening comprehension multiple-choice test from <cit.>. <cit.> further investigates the transfer learning style approach for extractive comprehension from SQuAD 2.0 <cit.> to a proprietary spoken question answering task, with a particular focus on the impact of automatic speech recognition (ASR) errors. Our approach ports systems from a multiple-choice reading comprehension task to a multiple-choice listening comprehension task to identify the extent to which comprehension of the context is required. § MULTIPLE-CHOICE COMPREHENSION §.§ Task Multiple-choice comprehension is a common assessment technique to assess the comprehension abilities of candidates in standardized tests <cit.>. Given a context passage, C and a question, Q, the correct answer must be deduced from a discrete set of N answer options, {O}. 
Hence, it is required to deduce the correct answer by comprehending the question and using the context passage as the information source to identify which answer option is the most suitable. §.§ Machine comprehension Machine comprehension performs the comprehension task using automated systems. Machine reading and listening comprehension for multiple-choice tests is a well researched area with state-of-the-art systems <cit.> competing and out-performing humans on public benchmarks <cit.>. In this work, the machine comprehension system's architecture replicates the standard multiple-choice machine reading comprehension systems from <cit.> and depicted in Figure <ref>. Each option is separately encoded with the question and the context to generate a score. A softmax layer converts the scores associated with each option into a probability distribution where at inference time the predicted answer is taken to be the option with the greatest probability. The parameters of the core transformer <cit.> encoder and the linear layer are shared across all options. Hence, there is no requirement for the number of options at training and inference time to match. §.§ World knowledge It is expected that information must be used from both the context passage and the question to determine the correct answer. If the answer can be deduced without the context, it suggests `world knowledge' <cit.> is sufficient to answer the question. We train a context-free system where the context is omitted to determine the extent to which world knowledge can be leveraged for comprehension. Table <ref> summarizes the main differences between the standard and context-free systems where [CLS] and [SEP] denote classification and separation tokens respectively. §.§ Partial context Language learners often can shortcut reading the whole context passage in comprehension tasks and still correctly answer the question. Hence, we devise a simple approach to investigate the extent to which a context must be comprehended in order to determine the correct answer to standard multiple-choice questions. A standard system (see Table <ref>) trained with the full context is taken and applied at inference time to questions with only partial access to the context. After applying tokenization of the context, only τ% of the context tokens are retained and input to the standard system. τ can be varied to determine how much of the context is necessary for comprehension. § EXPERIMENTS §.§ Data Several multiple-choice reading/listening comprehension datasets are used in this work including: RACE++ <cit.>, ReClor <cit.>, COSMOSQA <cit.>, DREAM <cit.> and IBM-Debater <cit.>. RACE++ is a dataset of English reading comprehension questions for Chinese high school students. The questions are collected at three levels: middle school, high school and college level, corresponding to increasing levels of complexity. COSMOSQA is a large scale commonsense-based reading comprehension dataset with four options per question. For this work, 2,985 examples from the development set is used. ReClor is a logical reasoning dataset at a graduate student level with four options per question. This is a challenging dataset as graduate students achieve an accuracy of 63%. 500 examples from the development split are used for this work (the test set is hidden). DREAM is a multiple-choice (three options) reading comprehension dataset that focuses on dialogue understanding. These dialogue are multi-turn and multi-party. 
It contains 10,197 questions and 6,444 dialogues, which were collected from English-as-a-foreign-language examinations. This work uses the 2,041 questions from the test split. The context is constructed by concatenating all dialogues into a single text. IBM-Debater consists of 200 spontaneous speeches arguing for or against 50 controversial topics. The dataset is structured to form a multiple-choice listening comprehension task by formulating each speech as a question that is aimed at confirming or rejecting the argument in a speech. Hence, each question has a binary class label with the transcribed speech acting as the context. The transcriptions are available as both manual and automatic speech recognition transcriptions. §.§ Training details and hyperparameters Two systems are trained on the large RACE++ training dataset (see Table <ref>): 1. A standard multiple-choice reading comprehension system with access to the context; 2. A context-free system without access to the context. Both systems are deep ensembles of 3 models that specifically use the large [Model configuration at: <https://huggingface.co/google/electra-large discriminator/blob/main/config.json>] ELECTRA <cit.> pre-trained language model in the form of the multiple-choice machine comprehension architecture of Figure <ref>. Each model has 340M parameters. Grid search was performed for hyperparameter tuning of the standard system with the initial setting of the hyperparameter values by the systems from <cit.>. Apart from the default values used for various hyperparameters, the grid search was performed for the maximum number of epochs ∈{2,5,10}; learning rate ∈{2e-7, 2e-6, 2e-5}; batch size ∈{2,4}. Training was performed for 2 epochs at a learning rate of 2e-6 with a batch size of 4 and inputs truncated to 512 tokens at both training and inference time. Cross-entropy loss was used at training time with models built using NVIDIA A100 graphical processing units with training time under 4 hours per model. The context-free system had its hyperparameters selected to be identical to the standard system. §.§ Assessment Accuracy is used as the standard performance metric for inference on all datasets. The evaluation process aims to assess two aspects of the multiple-choice questions in each dataset: 1. the ability to use world knowledge in order to determine the correct answer and consequently the effective number of options per question; 2. the extent to which the context must be read/listened to determine the correct answer. The former is assessed by comparing the accuracy of a context-free comprehension system against a standard multiple-choice comprehension system while the latter is assessed by varying the amount of context available to a standard multiple-choice reading comprehension system at test time. § RESULTS Multiple-choice questions are assessed for comprehensibility in terms of both world knowledge and partial access to the context. §.§ World knowledge Table <ref> presents the prevalence of world knowledge across a range of reading and listening comprehension datasets. As both the standard and the context-free systems are trained on the RACE++ dataset, Table <ref> further presents the portability of the systems to different forms of reading/listening comprehension. As in <cit.>, the reading comprehension datasets of RACE++, COSMOSQA and ReClor observe significant presence of world knowledge. 
In particular, the context-free system on RACE++ achieves an accuracy of 59.1% despite having no access to the contextual passage that is more than double the accuracy of a random baseline. The ported context-free system also out-performs the 25% random baseline for commonsense reasoning and logical reasoning for COSMOSQA and ReClor respectively. Note, ReClor is a more challenging reading comprehension dataset than COSMOSQA and RACE++ <cit.>, confirmed by the standard RACE++ trained system getting an accuracy of 73.2% on COSMOSQA but 48.8% on ReClor. Systems trained directly on COSMOSQA, ReClor observe a similar pattern <cit.>. From Table <ref>, both the context-free and the standard systems port across well to dialogues in the DREAM dataset. As before, the DREAM dataset demonstrates the presence of world knowledge as the context-free system surpasses the random baseline of 33% to achieve 46%. It is further interesting to observe the standard system ported from RACE++ gets an accuracy of 86%, which approaches the state-of-the-art performance of standard systems trained on DREAM <cit.>. However, the context-free system performs randomly on the speech transcriptions from the IBM-Debater dataset. This is an expected result as the speeches are reformulated into listening comprehension questions by posing whether the speech is pro or con a specific controversial topic (see Section <ref>). As the speeches are balanced for each topic, it is impossible to use world knowledge for a context-free system to deduce the argument in the speech without listening to it. The standard system, with access to the speech transcription, gets an accuracy of 65% with manual transcriptions and 62% with ASR transcriptions, comparable to <cit.>. Hence, the presence of ASR errors leads to a small drop in performance for binary classification. §.§ Partial information access This section investigates to what extent the context passage must be read or listened. Figure <ref> presents the accuracy with partial access to the context, varying from zero to full access, for text, dialogue and speech-based comprehension questions. All results are presented using the standard system trained on RACE++. Hence, the accuracy with 0% access to the context on the plots differs in performance from the context-free system applied to the datasets from Table <ref> - the context-free system's performance can expect to be an upperbound of performance with world knowledge as the system has explicitly been trained to try and deduce the correct answer without using the context. It is notable from Figure <ref> that both the text-based and dialogue based reading comprehension datasets all start above the random line while the speech-based listening comprehension dataset begins at random accuracy, agreeing with Table <ref>. Figure <ref> depicts that the text-based reading comprehension datasets increase linearly (approximately) with increasing access to the context passage. Such a linear relationship indicates that information required to deduce the correct answer is evenly distributed throughout the context passage. A similar behaviour is observed with DREAM, though the slow start indicates that information may be more disjoint in order to deduce the correct answer as emphasized in the original release of the DREAM dataset <cit.>. In contrast, a very different shape is observed for the speech transcriptions: there is a sharp increase on the IBM-Debater dataset with increased access to the speech and then the performance plateaus. 
Such a shape suggests the information is front-heavy: it is possible to deduce the side of the argument made in a speech from the first sentence alone. Table <ref> further investigates the extent to which information is unevenly distributed in the IBM-Debater speeches. From Figure <ref>, 20% is used as an appropriate operating point to compare the performance with access to only the beginning extract of the context against the end and random extracts. For both the manual and the ASR transcriptions the performance is the highest for the beginning 20% and lowest for the end 20%, confirming that the information needed to deduce the correct answer is concentrated at the beginning of the context. Future work should consider evaluating how performance varies with access to the easiest vs the most difficult sentences, as the easiest sections mimic the parts of the context a language learner understands [Initial experiments with sentence complexity based on standard vocabulary levels did not observe a statistically significant difference between the easiest and most difficult 20% according to text readability.]. Content creators are encouraged to plot similar characteristic graphs for newly proposed questions to gauge the degree of comprehension required by language learners. § CONCLUSIONS This work highlights the trade-off between contextual comprehension and world knowledge in multiple-choice reading and listening comprehension tests. We found that automated reading comprehension systems perform significantly better than random, even with limited access to the context passage. These findings provide content creators with an approach to capture the balance between comprehension and world knowledge in their questions. We further investigated to what extent a context needs to be read before the correct answer can be deduced, finding that it is possible to answer some questions across several reading/listening comprehension datasets with access to only a fraction of the context. Overall, our findings guide content creators in constructing more valid and reliable assessments, ensuring accurate evaluation of language proficiency. § LIMITATIONS A limitation for the IBM-Debater dataset is that the contexts have been truncated to 512 tokens prior to any experiments, despite the average length being approximately 1000 tokens, in order to use the standard pretrained language model fine-tuned on RACE++.
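To illustrate the standard-system scoring and the partial-context procedure described earlier, the sketch below encodes each (context, question + option) pair with an ELECTRA encoder and keeps only the first fraction τ of the context's tokens at inference time. It is a sketch under assumptions: the multiple-choice head loaded here is untrained (in the paper the system is an ensemble fine-tuned on RACE++), and the exact input format used by the authors may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tok = AutoTokenizer.from_pretrained("google/electra-large-discriminator")
model = AutoModelForMultipleChoice.from_pretrained("google/electra-large-discriminator")
model.eval()

def truncate_context(context, tau):
    """Keep only the first tau (0..1) fraction of the context's tokens."""
    ids = tok(context, add_special_tokens=False)["input_ids"]
    return tok.decode(ids[: int(len(ids) * tau)])

@torch.no_grad()
def predict(context, question, options, tau=1.0):
    ctx = truncate_context(context, tau)
    # One (context, question + option) pair per answer option.
    first = [ctx] * len(options)
    second = [f"{question} {opt}" for opt in options]
    enc = tok(first, second, truncation=True, max_length=512,
              padding=True, return_tensors="pt")
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}    # (1, n_options, seq_len)
    logits = model(**enc).logits                         # (1, n_options)
    return int(logits.argmax(dim=-1))                    # index of predicted option
```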
http://arxiv.org/abs/2307.01731v1
20230704135903
Finding LoTSS of hosts for GRBs: a search for galaxy - gamma-ray burst coincidences at low frequencies with LOFAR
[ "R. A. J. Eyles-Ferris", "R. L. C. Starling" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
The LOFAR Two-Metre Sky Survey (LoTSS) is an invaluable new tool for investigating the properties of sources at low frequencies and has helped to open up the study of galaxy populations in this regime. In this work, we perform a search for host galaxies of gamma-ray bursts (GRBs). We use the relative density of sources in Data Release 2 of LoTSS to define the probability of a chance alignment, P_chance, and find 18 sources corresponding to 17 GRBs which meet a P_chance < 1% criterion. We examine the nature and properties of these radio sources using both LOFAR data and broadband information, including their radio spectral index, star formation rate estimates and any contributions from active galactic nucleus emission. Assuming the radio emission is dominated by star formation, we find that our sources show high star formation rates (10^1–10^3 M_⊙ yr^-1) compared with both a field galaxy sample and a sample of core-collapse supernova hosts, and the majority of putative hosts are consistent with ultraluminous infrared galaxy (ULIRG) classifications. As a result of our analyses, we define a final sample of eight likely GRB host candidates in the LoTSS DR2 survey. gamma-ray bursts – radio continuum: galaxies – surveys § INTRODUCTION The properties of gamma-ray bursts (GRBs) are the result of their progenitors and therefore of the environment in which they have evolved. The majority of GRBs discovered to date are long GRBs, bursts in which 90% of the isotropic equivalent energy is detected over a period longer than two seconds. These GRBs are driven by the collapse of massive stars and hence are strongly connected to star formation. As such, while GRBs are cosmological and are found across redshifts up to 9.4 <cit.>, the peak of their redshift distribution is also commensurate with the peak of star formation <cit.>. However, despite this strong connection, long GRBs are not unbiased tracers of star formation <cit.>. Long GRBs have been found to be hosted in a wide variety of galaxy types but there is evidence of a bias towards galaxies with lower masses, high specific star formation and low metallicities <cit.>. These environments are ideal for forming collapsars, the progenitors of long GRBs, and other studies have suggested that higher metallicities actually suppress collapsar formation <cit.>. 
Short GRBs, where 90% of the isotropic equivalent energy is detected over a period shorter than two seconds, have different progenitor systems and are driven by compact binary mergers. They therefore display a greater variety in their host galaxy properties and do not require active star formation. There is also a minority of short GRBs that appear to lack hosts - most likely due to natal kicks ejecting them from their original hosts and resulting in offsets of tens of kiloparsecs <cit.>. It is therefore clear that a galaxy's star formation rate (SFR) has a significant impact on its likelihood to host long GRBs in particular. There are several methods to measure SFR including at UV and optical wavelengths. However, these methods are significantly affected by dust extinction in the target galaxy. This extinction, and its effects on SFR measurements, is often hard to constrain and leads to significant biases. Star formation also drives radio emission, however, which is significantly less affected by dust extinction. This therefore offers an opportunity for dust unbiased measurements of long GRB hosts. There are factors to consider when using radio data in this way, however. While radio emission is a good tracker of star formation, a significant proportion of extragalactic radio sources are active galactic nuclei (AGN). AGN can also be a significant contaminant in sources that are also highly starforming. It is therefore important to minimise AGN contamination to accurately measure the properties of any galaxies identified. There is also the possibility of lingering emission from any GRB afterglows further contaminating the sample. This has occurred in other studies <cit.> although we do not expect it to be a contributor in here due to the low frequency of the regime probed and the historic nature of the GRBs investigated. We also need to consider other biases inherent to such a survey. The luminosity of the radio emission from star formation is strongly dependent on the SFR. In a blind search for such sources, therefore, we may expect to only detect galaxies with very elevated SFRs. While we expect long GRB hosts to have high SFRs, it could be difficult to determine whether the hosts detected via radio are representative of the entire population or comprise an outlying subsample of extreme starforming galaxies. Alternatively, a large proportion of long GRB hosts with lower, albeit still high, SFRs could remain undetected and the search would be inherently incomplete. Due to star formation no longer being necessary, compared to long GRB hosts, short GRB hosts are more likely to be radio-quiet despite their lower mean redshift <cit.>. This means a radio search is also unlikely to detect them and lead to greater incompleteness across the broader GRB population. Radio searches for GRB hosts have proven to be productive in past studies, however <cit.>. At GHz frequencies, these galaxies typically have measured SFRs much higher than those obtained through UV/optical methods <cit.>. Long GRBs are also commonly hosted in powerful luminous infrared galaxies (LIRGs) and ultra-luminous infrared galaxies (ULIRGs) and the highly dusty nature of these galaxies means radio is the only way to recover obscured star formation. In these cases, SFRs of hundreds to over a thousand have been recovered <cit.>. The fraction of GRB hosts with such dust obscuration is still unclear <cit.> and radio surveys offer a way to constrain this further. 
While some of the differences between UV/optical SFRs and radio SFRs have been attributed to afterglow or AGN contamination, it is clear radio is a valuable tool to investigate these galaxies. A new window into the low frequency behaviour of galaxies has recently opened with the second Data Release of the LOFAR Two-metre Sky Survey <cit.>. LoTSS DR2 covers 6335 square degrees in two regions centred on 12h45m00s +4430'00" (RA-13 region) and 1h00m00s +2800'00" (RA-1 region), approximately 27% of the northern hemisphere. The frequency range is significantly lower than most radio surveys at 120–168 MHz subdivided into three 16 MHz wide bands, a regime almost entirely unprobed in studies of GRB host galaxies. The survey achieves a resolution of 6" and RMS limits of 74 μJy and 106 μJy in the RA-13 and RA-1 regions respectively. This has allowed 4.4 million sources to be detected, and in this paper, we investigate these catalogues to identify associations between LoTSS sources and GRBs detected with the X-ray Telescope <cit.> on board the Neil Gehrels Swift Observatory <cit.>. These GRBs are extremely well localised to within ≤ 2 arcsec 90% of the time <cit.>, ideal for crossmatching to LoTSS DR2. In Section <ref>, we present our crossmatching method and initial associations. We further discuss our method in Section <ref>, focussing on any selection biases and its completeness. The properties of our matches are investigated and presented in Section <ref> and these are combined with our discussion of our methodology to produce a final sample of GRB associations in <ref>. Finally, we summarise in Section <ref>. Throughout this paper, we give errors to 1-σ and adopt a flat ΛCDM cosmology with H_0 = 71 km s^-1 Mpc^-1, Ω_m = 0.27 and Ω_Λ = 0.73. § ASSOCIATING GRBS AND POSSIBLE HOSTS §.§ Crossmatching Our sample of GRBs was taken from the live Swift-XRT GRB Catalogue[<https://www.swift.ac.uk/xrt_live_cat/>] <cit.> hosted by the UK Swift Science Data Centre (UKSSDC) up to 15 July 2022. This consisted of 1489 GRBs and we found 280 of them (253 long GRBs and 27 short GRBs) were located within the LoTSS DR2 footprint, defining the footprint as the area within one degree of at least one LoTSS DR2 source, as shown in Figure <ref>. We note that two sources apparently within the LoTSS footprint are excluded by this criterion but this is most likely due to blanked regions within the footprint[see <https://lofar-surveys.org/dr2_release.html>]. We matched this sample to the table <cit.>, selecting all LoTSS sources within 10 times the 90% XRT error region for each GRB. We then calculated the probability of chance alignment, , following the method of <cit.>. In that work, GRBs were matched to possible host galaxies in the Sloan Digital Sky Survey (SDSS) by examining the correlation of flux and source density in a sample of the SDSS. The  of possible matches was then evaluated by comparison with this overall population. We generated 15,000 random points within the footprint of LoTSS DR2 and measured the separations from these points to the nearest sources above various total flux thresholds. We classified these sources by cross-matching to the NASA/IPAC Extragalactic Database[<https://ned.ipac.caltech.edu/>] (NED) and the Set of Identifications, Measurements and Bibliography for Astronomical Data <cit.> database[<https://simbad.u-strasbg.fr/simbad/>], assuming the LoTSS DR2 source to correspond to the closest object within 20". 
These cross matches were then used to select only galaxies from the sample and we found that the relationship between the separations from the random points, δ x, and the flux of the galaxies, F, was reasonably well fit as log(δ x) = αlog(F) + c where α = 0.417±0.002 and c = 2.153±0.005. We also took a comparison sample of field galaxies from this population, selecting sources with SIMBAD galaxy counterparts with sufficiently low separations for their <1%. We further subdivided this into active and inactive galaxies using their SIMBAD classifications. We refer to these samples later in this paper as ActiveField and InactiveField, respectively. We used this fit to define the percentile contours of the distribution and we plot the 1st, 5th, 10th, 25th and 50th percentiles in Figure <ref>. For each source matched to a GRB,  was therefore determined by which percentile contour the matched source lay on. We calculated these for all of our matched sources and selected a threshold of <1%. We found that twelve long GRBs and one short GRB had at least one match meeting this threshold, while a further four long GRBs had matches consistent with the threshold when accounting for errors. We plot the flux density and separation from the GRBs of our crossmatched sources in Figure <ref> and summarise them in Table <ref>. We also extracted images of each LoTSS source and the position of their potential match from the full LoTSS mosaics and present them in Figure <ref>. All of the LoTSS sources are most likely galaxies, with the majority having known counterparts in SIMBAD. Two GRBs, 050509B and 081025, were found to have multiple matches. We discuss these sources further below. The 17 matched GRBs represent 6.0% of the 280 GRBs within the footprint. For the two GRB types, this breaks down into 16/253 or 6.3% of long GRBs and 1/27 or 3.8% of short GRBs. This method can lead to crossmatches at large angular separations, up to ∼35". This is primarily due to the lower source density at higher flux density limits and therefore greater separations are allowed. Such large separations are possible. For instance, the formation of the compact binary progenitors of short GRBs induces a significant kick velocity such that short GRBs can occur a significant distance from their original hosts. For both short and long GRBs, there are also the possibilities that the GRBs are occurring in the outskirts of extended galaxies or that they are located at low redshifts where a large angular separation does not translate to a large physical separation. This latter case would also account for the high brightness of some putative hosts but is also inconsistent with the observed gamma-ray fluence of several of the bursts. Finally, there is also the possibility of the positional accuracy playing a significant role. Alternatively, many of the LoTSS sources are extended, consistent with their galactic nature, and as such the separations may also be dependent on the angular size of the host. We calculated the extent of each LoTSS source towards the centre of the associated GRB's XRT error region assuming the source to be Gaussian in nature and include this in Table <ref>. Note that we used the major and minor axes of the sources after deconvolution with the beam, while the images in Figure <ref> are derived from the raw mosaics. We also derived the normalised separation, i.e. the separation divided by the extent. 
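For illustration, the chance-alignment estimate described above can be sketched in a few lines of Python. This is not the code used in this work (which reads the probability off fitted percentile contours in the separation-flux plane); it evaluates the equivalent empirical quantity directly from the random-point separations, uses a flat-sky approximation, and the function name and signature are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def p_chance(sep_arcsec, flux_mjy, cat_ra, cat_dec, cat_flux, rand_ra, rand_dec):
    # Fraction of random footprint positions whose nearest catalogue source at least
    # as bright as flux_mjy lies within sep_arcsec: an empirical chance-alignment
    # probability. All inputs are assumed to be numpy arrays, coordinates in degrees.
    bright = cat_flux >= flux_mjy
    tree = cKDTree(np.column_stack([cat_ra[bright], cat_dec[bright]]))  # flat-sky approximation
    d_deg, _ = tree.query(np.column_stack([rand_ra, rand_dec]))
    return np.mean(d_deg * 3600.0 <= sep_arcsec)

# A candidate match would then be retained if, e.g., p_chance(sep, flux, ...) < 0.01.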
This has previously been investigated at optical wavelengths by <cit.>, who found that for long GRBs, the normalised separation was typically ≲one but could range up to ∼nine in extreme cases. Our normalised separations are generally somewhat larger than this but are not incompatible within errors. However, there are a number of sources with significantly larger separations, further reducing the likelihood of these being accurate crossmatches. §.§.§ GRBs with multiple matches For our only short GRB, GRB 050509B, two separate LoTSS sources were consistent with being matches according to our  criterion. GRB 050509B is a relatively well studied short GRB and it is likely to have occurred within a cluster <cit.>. Multiple matches are therefore not surprising. While the afterglow was only detected in X-rays, the GRB was sufficiently localised for Very Large Telescope observations to confirm the likely host is 2MASX J12361286+285858026 <cit.>, a large elliptical galaxy with a UV-measured SFR of <0.2  <cit.>. As shown in Figure <ref>, one of our matched sources, ILT J123613.00+285902.9, is spatially consistent with this galaxy. The other LoTSS source, ILT J123612.26+285929.2, is likely unrelated to GRB 050509B. However, we still evaluate its properties as it could be part of the same cluster as 2MASX J12361286+285858026 and therefore coevolved with it. Upon inspection of the LoTSS mosaics, we also found that the counterpart to GRB 081025, ILT J162131.13+602837.1, appeared to be a double source, as shown in Figure <ref>. From archival 2MASS images, we determined that it was two distinct galaxies with insufficient angular separation to be resolved as individual sources in the LoTSS pipeline. However, the pipeline also outputs the separate Gaussians that make up each source and we were able to identify the two Gaussians, here designated A and B, matching each component of ILT J162131.13+602837.1, both of which individually met the  threshold. We therefore treated these as separate sources in our analysis. §.§.§ Redshifts and physical scales The redshifts of our sample are important for both accurately measuring the properties of the LoTSS sources but for our crossmatching process. Four of the matched GRBs already had measured redshifts, two each from spectroscopy of their afterglows and of their probable hosts. We identified further redshifts from catalogue crossmatches comprising two from SIMBAD and five photometric redshifts from SDSS. Of our 18 matches, therefore, we identify 12 possible redshifts. Based on these redshifts, we calculated both the physical separation of the GRB from the LoTSS source and the apparent physical extent of the sources. These values are also given in Table <ref>. The separations are generally significantly higher than the typical results of a ∼few kpc identified previously for long GRBs <cit.>. However, we note the previous results are derived using optical data, and some differences might therefore be expected, and that our separations typically have relatively large errors. Nevertheless, the more extreme values are likely indicative of inaccuracies in the crossmatching process. On the other hand, the behaviour of short GRBs differs and a more significant separation would often be expected. This is due to the nature of short GRBs, the progenitors of which can be imparted a significant kick velocity through their formation mechanism, and therefore have large separations from their original host. 
Observationally, these separations are a few kpc greater than those of long GRBs <cit.>. From simulations, the distribution of separations could be expected to peak at around 10 kpc and can reach over 100 kpc <cit.>. Our matches to GRB 050509B are both consistent with this behaviour. §.§ Core-collapse supernovae sample To provide an additional comparison sample to our putative GRB hosts, we also derived a sample of core-collapse supernovae (CCSNe) host candidates following the same procedure. CCSNe are a well studied population of which long GRBs could represent a subsample. CCSNe are generally found at significantly lower redshifts than long GRBs and typically belong to types Ib, Ic or II. In previous studies, there have been found to be significant differences between hosts of CCSNe and long GRBs <cit.>; however, these differences have been found to vary between types. For instance, long GRBs tend to be located within UV bright regions of their hosts, consistent with high degrees of star formation <cit.>. The resultant offset distribution is more similar to Ib or Ic CCSNe than type II <cit.>. Similarly, Ic CCSNe and long GRBs both trace optical g-band emission within their hosts, with other CCSNe displaying weaker associations <cit.>. For all CCSNe types, the luminosity and morphology distribution of their hosts is also substantially different from that of long GRBs. <cit.> found that CCSNe hosts tend to be both more luminous and more regular than long GRB hosts. This is supported by <cit.>'s conclusion that CCSN hosts tend towards massive spirals. Compared to the low metallicity nature of long GRB hosts, CCSN hosts are also typically much more metal rich <cit.>. We took our sample from the sadly defunct Open Supernova catalogue[<https://github.com/astrocatalogs/supernovae>], which includes events up to April 8 2022, and selected all supernovae of suggested type Ib, Ic and II. This resulted in 1162 CCSNe within the LoTSS DR2 footprint, which we defined as above. We crossmatched these objects to LoTSS DR2 with a chance alignment probability threshold of 1%, which resulted in 877 matches corresponding to 664 SNe, or 57.1% of the sample within the LoTSS DR2 footprint. We found the low redshifts of the CCSNe population to have a significant effect on the crossmatching process. Such nearby sources have correspondingly high flux densities and the angular offsets of the supernovae from the LoTSS sources were found to be significantly greater than those of the GRB sample, in some cases >100". However, these angular offsets still correspond to physical offsets of ∼tens of kpc or less. The nine times larger proportion of CCSNe matches compared to GRB matches is therefore not surprising. The proximity of the CCSNe sample is also likely responsible for the large number of supernovae with multiple LoTSS matches, as many supernovae are close enough for their hosts' structure to be resolved. Such galaxies will be detected as multiple sources in the LoTSS pipeline as the islands of flux are sufficiently separated. § SELECTION EFFECTS AND COMPLETENESS §.§ Selection biases It is important to consider the biases inherent in a radio selection of host galaxies. As most long GRBs are associated with star-forming galaxies, it might be expected that radio emission from their hosts is dominated by the synchrotron radiation associated with star formation but that the minimum SFR probed will depend on the survey depth and galaxy redshift. 
<cit.> found the SFR, ψ, and luminosity at 150 MHz, L_150, to be correlated as L_150 = L_1 ψ_G^β, where L_1 = 10^(22.06 ± 0.01) W Hz^-1 is the luminosity at 150 MHz of a source with an SFR of 1 M_⊙ yr^-1 and β=1.07 ± 0.01. <cit.> also derive SFRs for 150 MHz emission, finding a similar prescription: log L_150 = (0.90 ± 0.01) log(ψ_S / M_⊙ yr^-1) + (0.33 ± 0.04) log(M_gal / 10^10 M_⊙) + 22.22 ± 0.02 for L_150 in W Hz^-1. We note, however, that this method requires the mass of the galaxy. Equation <ref> can therefore be used to evaluate these selection effects. We used the grey field sources in Figure <ref> to derive an approximate detection limit for LoTSS DR2. We found that only 0.1% of sources have total flux densities <0.30 mJy. We therefore take this to be an appropriate and relatively conservative assumption for the detection limit. Using Equation <ref>, we derive the SFR required to reach this flux density up to z=3.5 and plot it in Figure <ref>. We also compare this function to a sample of long GRB hosts with SFRs estimated through optical or UV methods. Such methods are significantly more impacted by the effects of dust obscuration than radio methods and may therefore result in underestimated SFRs. We would therefore expect radio methods to favour both higher SFRs and lower redshifts. The sample is taken from GRBs in both the catalogue of <cit.> and the live Swift-XRT GRB Catalogue <cit.>. This yields 92 GRB hosts with no crossover with our crossmatched sample, plotted in red in Figure <ref>. We also plot 12 long GRB hosts with radio-measured SFRs taken from <cit.>, <cit.> and <cit.>, noting that GRB 051022 is present in both samples with measured SFRs consistent within errors. This comparison demonstrates that radio detections are intrinsically biased towards hosts with atypically high SFRs compared to the UV/optically measured sample. The redshift distribution of sources we might expect to detect also differs from the full population with a clear bias towards lower redshifts, probably due to the SFRs required being significantly less extreme than for most of the sample. We do note, however, that 6 of the 92 UV/optically derived SFRs do reach the threshold. The radio emission of some putative hosts may also be enhanced by components other than star formation. The most obvious source of such contamination is likely to be AGN and we investigate this in Section <ref>. §.§ Completeness The sample of GRB hosts from <cit.> also allows the completeness of our methods to be evaluated. Of the 92 GRBs in the UV/optical sample, six (∼6.5%) GRB hosts do appear to reach the threshold for detection. We also found that two hosts, those of GRB 051022 and GRB 100816A, lie within the LoTSS DR2 footprint and have UV/optically derived SFRs of 60.0^+12.0_-36.2 M_⊙ yr^-1 <cit.> and 58.0^+51.0_-26.0 M_⊙ yr^-1 <cit.> respectively. GRB 051022 has also been observed at 5.227 GHz, which yielded a slightly enhanced SFR of 74.0^+20.0_-20.0 M_⊙ yr^-1 <cit.>. We therefore predict flux densities at 144 MHz using Equation <ref> of 0.30^+0.18_-0.19 mJy for GRB 051022's counterpart and 0.29^+0.28_-0.14 mJy for GRB 100816A's. These values are consistent with being below our estimated detection limit and no such counterparts were identified in the LoTSS DR2 catalogues for either GRB. To confirm these non-detections, we examined the mosaics at the positions of these GRBs and, for GRB 100816A, found no evidence of a source. 
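As an illustration of this detectability calculation, the following minimal Python sketch predicts the observed 144 MHz flux density of a star-forming galaxy from its SFR and redshift using Equation <ref> and the cosmology adopted here. It is not code from this work: the K-correction index of -0.7, the root-finder bracketing values and the use of astropy and scipy are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)  # cosmology adopted in this paper

def predicted_flux_mjy(sfr, z, alpha=-0.7):
    # Predicted observed-frame 144 MHz flux density (mJy) for an SFR in Msun/yr.
    L150 = 10**22.06 * sfr**1.07 * u.W / u.Hz           # L_150-SFR relation (Equation 1)
    d_L = cosmo.luminosity_distance(z).to(u.m)
    # Simple K-correction assuming a power-law spectrum S_nu ~ nu^alpha.
    S = L150 * (1.0 + z)**(1.0 + alpha) / (4.0 * np.pi * d_L**2)
    return S.to(u.mJy).value

def limiting_sfr(z, limit_mjy=0.30):
    # SFR needed to reach the ~0.30 mJy detection limit at redshift z.
    return brentq(lambda s: predicted_flux_mjy(s, z) - limit_mjy, 1e-3, 1e5)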
In contrast to GRB 100816A, we did identify a bright source approximately 16" away from GRB 051022, ILT J235603.13+193633.0, as shown in Figure <ref>. This source was not previously selected by our criteria, due to the well localised nature of GRB 051022 and its XRT error region of 1.5". We note, however, that its chance alignment probability would meet the 1% threshold. We therefore examined the GRB Coordinates Network Circulars for GRB 051022 and found <cit.> reported two radio sources, a point source coincident with the XRT error region and a previously reported optical source <cit.>, and a large extended source to the North West of the error region. Further observations indicated the point source to indeed be the GRB afterglow <cit.>. ILT J235603.13+193633.0 is, however, consistent with the extended source and we therefore conclude it is unrelated to GRB 051022. While we cannot directly measure the SFR of these two hosts, our non-detections do constrain their behaviour somewhat. In particular, the galaxies' radio SFRs cannot be significantly greater than those measured through UV/optical methods. This is consistent with the work of <cit.>, who observed a sample of 11 long GRB hosts at 2.1 GHz and 3 GHz. No hosts were detected and the limits were sufficient to establish that any radio-detectable star formation would only be a factor of two to three greater than that measured by UV/optical methods. While the redshifts of the two non-detected sources are only constrained to z∼0.8, our results agree with their conclusion that only a small fraction of the star formation in these galaxies may be obscured by dust. The non-detections provide weak constraints on the completeness of our method. Although subject to small number statistics, they suggest that the fraction of long GRB host candidates we might identify in LoTSS DR2 is ≲6.5%, consistent with the 6.3% we find crossmatches for. As discussed in Section <ref> and later, it is also plausible that our methods actually lead to spurious associations and further evaluation and elimination of some host candidates must be performed. § HOST PROPERTIES In this section, we examine the behaviour of our putative GRB hosts to assist with classification and examine how their properties compare to galaxy samples. §.§ Counterparts in other radio surveys To expand the frequency space for our putative hosts, we also performed crossmatches to other radio surveys. This allows us to better examine the spectral behaviour of the sources, in particular, allowing possible identification and classification of any AGN activity. We crossmatched our putative hosts to the Westerbork Northern Sky Survey <cit.>, Faint Images of the Radio Sky at Twenty-centimeters <cit.>, the NRAO Very Large Array Sky Survey <cit.>, the Giant Metrewave Radio Telescope (GMRT) 150 MHz Survey <cit.> and the Very Large Array Sky Survey <cit.>. This extended our frequency regime to between 144 MHz and 3 GHz. We performed this crossmatch using VizieR[<https://doi.org/10.26093/cds/vizier>], selecting the nearest source in each table within 20" to maximise the number of plausible matches. However, only five of our LoTSS sources had counterparts across these catalogues, given in Table <ref>. In addition to the results in the survey catalogues, the host of GRB 200716C was also examined at radio wavelengths by <cit.>, who identified flux densities in the archival data of these surveys. We include their results when calculating the properties of this galaxy. 
§.§ Radio Spectral behaviour Following <cit.>, the LoTSS band can be divided into three 16 MHz wide bands centred at 128, 144 and 160 MHz. We downloaded the primary beam corrected images for each of these bands and extracted the flux densities for our putative hosts using the PyBDSF[<https://www.astron.nl/citt/pybdsf/index.html>] package, setting both the pixel and island detection threshold parameters to 3 (i.e. a detection significance of 3-σ). The flux densities were then fitted with a power law model, F_ν = k ν^α, to derive in-band spectral indices, given in Table <ref>. The varying significance of each source across the LoTSS bandwidth means that the source was not necessarily detected in each band and therefore we could only derive in-band spectral indices for seven sources. We attempted to increase this number by lowering the significance thresholds but found that the resultant spectral indices were unphysically extreme, likely indicative of noise being extracted rather than real sources. In general, we found our in-band values to be poorly constrained, likely due to large errors in the flux density measurements and the relatively narrow frequency range. We note, however, that this is consistent with the findings of <cit.>. For the sources with counterparts at GHz frequencies, we refitted the power law model but included both the flux densities given in those catalogues and the flux density across the full LoTSS band centred at 144 MHz. The resulting broadband spectral indices are given in Table <ref> and our sources' fitted SEDs are shown in Figure <ref>. For most sources with both an in-band and a broadband index, we found that the two values of α are reasonably consistent. However, in the cases of ILT J123612.26+285929.2 and ILT J130402.61+293839.3, the in-band and broadband indices indicate opposite behaviour for the spectrum. In these cases, as with other sources with both indices, we are inclined to trust the broadband value more, the in-band index being poorly constrained as mentioned above. It is also plausible that the true SED has a more complex structure than the single power law assumed and that, for instance, the two indices capture separate parts of a broken power law. Measurements with the LOFAR low-band antennas (LBA) may be useful in further constraining any spectral turnover. §.§ IR behaviour We also examined the infrared colours of the galaxies in the Wide-field Infrared Survey Explorer (WISE) bands. These colours are widely used to assist with classifying galaxies and help evaluate any AGN activity. We matched our sources to the closest objects in the AllWISE catalogue <cit.>, finding that all such matches had a chance alignment probability of <1%. We used the criteria of <cit.> and <cit.> to broadly classify galaxies into AGN, luminous IR galaxies (LIRGs), ultra-luminous IR galaxies (ULIRGs), star-forming and elliptical classes. We tabulate the W1-W2 and W2-W3 colours in Table <ref> and plot them in Figure <ref>. We also derived WISE colours for the CCSNe sample, also shown in Figure <ref>. We found that all types of CCSNe follow similar behaviour with no significant differences between them. This behaviour is also compatible with the distribution exhibited by our GRB host candidates, although the sources matched to GRBs are more likely to have colours consistent with AGN activity. We further examine such activity in the following section. According to our criteria, both the GRB and CCSNe samples are dominated by ULIRGs. While it might be expected that a greater proportion of each sample would be star-forming galaxies, LIRGs and ULIRGs are common radio-selected hosts for these transients <cit.>. 
The nature of these galaxies enhances star formation, with the star formation efficiency increased by a factor of 2 – 3 compared to the general galaxy population. During an extreme starburst, this enhancement can reach up to an order of magnitude <cit.>. Other radio studies of GRB hosts have found them to be consistent with LIRGs or ULIRGs and identified high SFRs of order 50 – 200  <cit.>. These galaxies were also found to be significantly lower luminosity relative to their SFRs than field galaxies, a result consistent with other studies <cit.>. This is compatible with the suggestion that there is a metallicity bias or cutoff in long GRB hosts. These high SFRs and low metallicities are ideal for creating an environment rich with collapsars, the progenitors of both long GRBs and CCSNe. There could also be systematic reasons why ULIRGs are so dominant. It is likely that our criteria over simplify the complex behaviour of the galaxies and therefore neglect the significant overlap between different classifications. In addition, as discussed in Section <ref>, there are significant selection effects inherent to a radio search. This biases towards selecting a different population of galaxies than optical surveys for instance, and such a sample would be expected to include a greater proportion of ULIRGs than the overall population of galaxies. §.§ Star formation The star formation history of a galaxy has a significant effect on the rate of GRBs that take place within it. In particular, long GRB progenitors are massive and short-lived stars so the hosts of such GRBs are generally found to have significantly higher SFRs than the majority of the field galaxy population. Here, we examine these properties of our putative hosts. §.§.§ Star-forming galaxies The spectral index of a galaxy can be indicative of whether it is star-forming and we therefore compare our measured spectral indices to those expected of star-forming galaxies. The radio SEDs of such objects are typically expected to be a superposition of two power laws, a thermal and a non-thermal component <cit.>: S_ tot(ν) = S_ th(ν_0)(ν/ν_0)^-0.1 + S_ nth(ν_0)(ν/ν_0)^α_ nth The thermal fraction is estimated to be f_ th∼10% <cit.> and hence our measured  and  should be comparable or slightly smaller in absolute terms than α_ nth <cit.>. The radio spectral index of star-forming galaxies is typically thought to be α_ nth∼-1.0 for the non-thermal index and ∼-0.8 for the total index <cit.>. However, this simple picture of superimposed power laws does not appear consistent with observations at lower frequencies. For instance, the sample of galaxies examined by <cit.> displayed a broken or exponentially declining power law spectrum, with the break or decline occurring between 1–12 GHz, as α_ nth varies with frequency. The resultant power law at lower frequencies is significantly shallower than at higher frequencies, with a spectral index of ∼-0.6. This behaviour has also been identified in studies that reach even lower to MHz frequencies <cit.>. From the spectral indices in Table <ref>, the most likely candidates for star-forming galaxies are ILT J130402.61+293839.3, ILT J130635.93+415811.3 and ILT J123612.26+285929.2. From their WISE colours, only ILT J130402.61+293839.3 was consistent with being star-forming while the other two sources had been categorised as ULIRG/AGN, although high star formation is still likely to be present as discussed above. 
There were two other sources classified as star-forming by their WISE colours, ILT J081553.01+302035.9 and ILT J133144.19+350305.6. Unfortunately, these sources' spectral indices are either unavailable or poorly constrained due to a low number of data points. It is therefore difficult to determine if the radio evidence independently points towards them being star-forming. §.§.§ Star formation rates with LoTSS flux densities To derive SFRs for our matched sources with redshifts, we returned to Equations <ref> and <ref>. As noted above, Equation <ref> requires the mass of the host to be known, which is only the case for ILT J123613.00+285902.9 <cit.>, and an upper limit for ILT J075839.09+325133.9 <cit.>. We were therefore able to derive 12 SFRs using Equation <ref> and two using Equation <ref>. Where we have both estimates, we find they are generally comparable. The remaining discrepancies may be due to the sensitivity of Equation <ref> to the galaxy mass and therefore the accuracy of its measurement. We found that three of our matched sources (ILT J075839.09+325133.9, ILT J133144.19+350305.6 and ILT J144453.38+491305.9), in addition to a significant proportion of the general population, exhibit apparent extreme star formation rates of several thousand M_⊙ yr^-1 or greater. There are examples of such high values in previous work, such as the estimate of SFR > 1000 M_⊙ yr^-1 for the high redshift GRB 090404 <cit.>. There is also a possibility of afterglow contamination, as suggested for the case of GRB 100814A <cit.>, but it is unlikely that these sources are significantly affected. Such high values could instead indicate redshifts incorrectly assigned to the putative hosts, or significant contributions from other emission sources. Recent work using LoTSS to examine the cosmic star formation history showed that significant scatter in the relationship between L_150 and SFR is likely the result of AGN <cit.>. It is therefore probable that these apparently extreme SFRs are actually AGN rather than star formation emission. To provide a comparison, we also derived SFRs for the CCSNe sample and the InactiveField sample using Equation <ref>. To account for nearby resolved CCSNe host galaxies, we summed the SFRs of all LoTSS counterparts for each supernova. To compare the GRB hosts to our CCSNe and general samples, we set an upper limit on the SFR of 200 M_⊙ yr^-1. This limits our sample to the SFRs most likely to be real, both for the GRB hosts and for the comparison samples. Excluding the possible hosts of the short GRB 050509B, we found that the long GRB hosts have a mean SFR of 66.8 ± 38.2 M_⊙ yr^-1, while the general inactive galaxy population has a mean SFR of 18.2 ± 33.1 M_⊙ yr^-1. It is clear that our putative long GRB hosts have significantly greater apparent SFRs than most galaxies in the field, although AGN contamination could still contribute to apparently enhanced radio flux, a factor we address in Section <ref>. We note also that the InactiveField's inferred SFR distribution is somewhat higher than might be expected for a general galaxy population, which may be indicative of incorrect crossmatching, redshift assignments or lingering AGN contamination. The CCSNe hosts have even lower SFRs than either of the other two samples at 8.5 ± 23.3 M_⊙ yr^-1, possibly due to their small redshifts, and therefore their low luminosities and their being resolvable into multiple components. This low SFR behaviour was again common to all types of CCSNe. The results for the matches to the short GRB 050509B are more surprising, specifically ILT J123613.00+285902.9. 
For this source, both our methods indicate SFRs of tens of M_⊙ yr^-1, much greater than the UV-measured <0.2 M_⊙ yr^-1 of <cit.>. While dust obscuration is likely to cause some discrepancy between measurements in these two regimes, it is implausible that it could account for a difference of two orders of magnitude. If the radio is, indeed, dominated by star formation, it is therefore likely that ILT J123613.00+285902.9 is not the radio counterpart to the optically-assigned large elliptical galaxy 2MASX J12361286+285858026 despite their spatial consistency. Alternatively, it is possible that this source is dominated by AGN emission and we discuss this further below. §.§.§ Star formation rates with flux densities from other radio surveys For those sources with higher frequency counterparts, we can also use the prescription of <cit.> to derive SFRs, similarly to <cit.>. Using the luminosity of the source at 1.4 GHz, L_1.4, in W Hz^-1, the SFR is found to be ψ_B = 5.52 × 10^-22 L_1.4 for L_1.4 > L_c, and ψ_B = 5.52 × 10^-22 L_1.4 / (0.1 + 0.9 (L_1.4 / L_c)^0.3) for L_1.4 ≤ L_c, where L_c = 6.4× 10^21 W Hz^-1 is a critical 1.4 GHz luminosity. Four sources have both higher frequency counterparts and associated redshifts, allowing us to evaluate their SFRs with Equation <ref>. We found that there was reasonable agreement (a factor of a few) for two of these sources, ILT J130402.61+293839.3 and ILT J123612.26+285929.2, but there were much more significant discrepancies for ILT J081553.01+302035.9 (a factor of ∼9) and ILT J164720.21+434437.0 (a factor of ∼40). We note that the spectrum of ILT J081553.01+302035.9 is significantly flatter than that of a typical star-forming galaxy, but other evidence, such as the WISE colours, does point towards this source being star-forming. It is therefore unclear why such a large discrepancy is present. ILT J164720.21+434437.0, on the other hand, is an AGN candidate and it is likely that the higher frequency flux density, and therefore this SFR measurement, is actually dominated by AGN activity rather than star formation. §.§ AGN contamination A large proportion of the radio source population is made up of active galaxies and as such they represent a significant possible contaminant in our sample of putative hosts. We therefore investigated our sample to determine whether any sources were likely to actually be AGN and therefore most probably unrelated to the GRBs. §.§.§ Literature candidates LoTSS has previously been extensively investigated for AGN and we initially compared our source list to the AGN catalogues for the HETDEX Spring Field of LoTSS DR1 <cit.> and LoTSS Deep Fields Data Release 1 <cit.>. No crossmatches were identified and we therefore also performed a wider literature search. This identified ILT J164720.21+434437.0 <cit.> and ILT J140245.38+481150.7 <cit.>, the putative hosts of GRB 191101A and GRB 201229A respectively, as having possible AGN counterparts. While the separation of GRB 191101A indicates that this is unlikely to be an accurate match, the crossmatch between GRB 201229A and ILT J140245.38+481150.7 inspires much more confidence. <cit.> suggest that ILT J140245.38+481150.7 is an AGN due to a combination of radio/IR emission but a lack of optical detections and find such methods to be consistent with other AGN selection criteria. However, when we perform similar analysis below, we find that it differs in its behaviour from the vast majority of AGN while its WISE colours are consistent with both AGN and ULIRG classifications. 
If this were an AGN, the small separation could mean that this is the first long GRB to be found to be associated with an active galaxy. §.§.§ Radio behaviour The morphology of a radio source can indicate the presence of an AGN. In particular, irregular morphologies or the physical size of a source can be the result of activity. This extends to the LoTSS frequencies, as shown for Fanaroff-Riley class galaxies in the LoTSS-Deep field <cit.>. None of our sources have physical extents comparable to the galaxies in that sample, however, and in general our sources' morphologies are significantly more regular. There are a few exceptions, namely ILT J162131.13+602837.1, ILT J105437.13+690416.8 and ILT J123613.00+285902.9. We have already determined that ILT J162131.13+602837.1's morphology is the result of two galaxies being close enough to be unresolved in LoTSS DR2; however, the other sources are both plausibly AGN dominated. In particular, the irregular and elongated morphology of ILT J123613.00+285902.9 is consistent with the presence of a radio jet. Elliptical galaxies are not uncommon hosts for low power radio galaxies and it is possible that 2MASX J12361286+285858026 is the host of ILT J123613.00+285902.9. However, there are other optical sources spatially consistent with ILT J123613.00+285902.9, as shown by <cit.>. The middle panel of their Figure 1 shows the field with 2MASX J12361286+285858026 subtracted out and a new source identified to its North. While they do not examine this source in greater detail and it is unclear whether it is foreground or background, we encourage further investigations to determine whether it is the true counterpart to ILT J123613.00+285902.9. The radio spectrum of a galactic source can also be a clue as to its activity. The canonical radio spectral index for AGN is typically taken to be ∼ -0.7 <cit.> and the low-frequency samples examined by <cit.> and <cit.>, in addition to the AGN sample in LoTSS DR1, appeared consistent with this value <cit.>. However, as the flux density of an AGN at 1.4 GHz decreased, its spectral index became significantly shallower. Four of our sources with 1.4 GHz detections have indices consistent with the canonical value (ILT J130402.61+293839.3, ILT J130635.93+415811.3, ILT J081553.01+302035.9 and ILT J123612.26+285929.2). As previously mentioned, ILT J164720.21+434437.0's spectral index is sensitive to the fit and it is plausible that this is also consistent. The poorly constrained nature of the in-band indices means we cannot firmly determine whether any of the remaining sources are also consistent. However, a great deal of diversity around the canonical spectral index has been observed across the AGN population. For instance, peaked spectrum (PS) sources have spectra with distinct peaks and steep drops around them <cit.>, possibly as the result of synchrotron self-absorption or free-free absorption <cit.>. Such sources include GHz-peaked sources (GPS) and compact steep spectrum (CSS) sources, the latter peaking at lower frequencies of a few hundred MHz. This means that in the regime probed by LOFAR and the catalogued data, they can actually display positive (by our convention) or flat spectral indices <cit.>. The turnover suggested in the SED of ILT J130635.93+415811.3 could be due to it being such a CSS source. There are other classes of AGN that could display diverging spectral behaviour, such as AGN with emission primarily arising from advection dominated accretion flows <cit.>. Finally, low luminosity jets can also result in flat spectra <cit.>. 
We also note that the canonical spectral index for AGN is very similar to that of star-forming galaxies <cit.>, further diluting spectral behaviour as an indicator of AGN activity. It is therefore difficult to determine which sources could be AGN solely from their spectral indices and the majority of our host candidates display behaviour which could be consistent with an AGN origin. We therefore return to the infrared behaviour of the sources in the following section. §.§.§ WISE colours and q_IR The WISE colours detailed in Section <ref> can be an indicator of AGN and seven of the sources in our sample were found to have colours consistent with AGN classification. However, only one source (ILT J164720.21+434437.0) is sufficiently constrained to ensure this classification. We note also that the W1 - W2 criterion does not necessarily confirm a lack of AGN behaviour, as some classes of AGN such as Seyferts may lie below it. There are also significant evolutionary effects with redshift that can greatly change a source's position on this diagram <cit.>. It is therefore possible that sources with redshifts of z>1 are misclassified. For our sample, this includes ILT J075839.09+325133.9, ILT J133144.19+350305.6 and ILT J144453.38+491305.9. We therefore investigate further criteria to more fully examine possible AGN contamination. The ratio of IR to radio emission can also indicate the presence of AGN behaviour <cit.>, q_IR = log(S_IR/S_radio) where S is the flux density in a given filter or at a given frequency. Radio AGN generally have significantly lower values of q_IR than star forming galaxies, which also display an evolution with redshift <cit.>. While typically longer wavelengths and higher frequencies are employed for S_IR and S_radio respectively, a significant AGN correlation could still be expected for the WISE bands and the 144 MHz LoTSS frequency we have available. We calculated q_IR for each source and each available WISE band counterpart and show these in Table <ref>. We also calculated q_IR for the ActiveField and InactiveField samples and compare them to our matched sources in Figure <ref> in the red shading and greyscale contours respectively. For our putative hosts without known redshifts, we assumed z=1. While the peak of the long GRB redshift distribution is ∼2.2 <cit.>, we choose a lower value to account for the inherent bias in a radio search towards lower redshift sources. We find that the majority of AGN do exhibit smaller values of q_IR than the field galaxies, although there is significant crossover and the redshift distribution of the field galaxies is much smaller than that of the AGN. There is also the possibility of incomplete or inaccurate classifications in SIMBAD. Nevertheless, it appears that most of our GRB host candidates exhibit different behaviour to the vast majority of the AGN sample. While this does not preclude any of them from being AGN, it does imply that they would be outliers to the main AGN population. However, there are noticeable outliers to the other GRB hosts in the top two panels of Figure <ref>. These are ILT J130635.93+415811.3, ILT J162131.13+602837.1 A and ILT J123612.26+285929.2, associated with GRBs 190211A, 081025 and 050509B respectively. There is significant evidence that both ILT J130635.93+415811.3 and ILT J123612.26+285929.2 are indeed AGN and that their selection was due to the extreme flux densities induced by their active nature. However, it is more difficult to confirm whether ILT J162131.13+602837.1 A is an AGN due to a lack of redshift constraint. 
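For concreteness, a minimal Python sketch of the q_IR calculation follows. It is not code from this work: the function names and example values are hypothetical, and the quoted WISE W1 Vega zero point of roughly 309.5 Jy is taken from the WISE documentation rather than from this paper.

import numpy as np

def q_ir(s_ir_mjy, s_radio_mjy):
    # q_IR = log10(S_IR / S_radio); both flux densities in the same units (here mJy).
    return np.log10(np.asarray(s_ir_mjy, dtype=float) / np.asarray(s_radio_mjy, dtype=float))

def w1_vega_mag_to_mjy(w1_mag, zeropoint_jy=309.54):
    # Convert a WISE W1 Vega magnitude to an approximate flux density in mJy.
    return 1e3 * zeropoint_jy * 10.0 ** (-0.4 * w1_mag)

# e.g. a source with W1 = 15.0 mag and S_144 = 1.5 mJy (hypothetical values):
# q_ir(w1_vega_mag_to_mjy(15.0), 1.5)  ->  approximately -0.69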
However, from the measured fluence of GRB 081025 we can estimate its redshift: we expect z∼2.5 for E_γ, iso∼10^53 erg, typical for a long GRB, or z∼0.4 for E_γ, iso∼10^51 erg if GRB 081025 is an extremely low luminosity GRB. While this lower redshift could place it towards the higher density AGN parameter space, the more probable higher redshift is inconsistent with the majority of AGN. We also note it is less of an outlier to our putative host distribution than the other two sources. § FINAL CANDIDATE HOST SAMPLE We now refine the initial list of associations given in Table <ref> to define a final sample of GRBs and their hosts that are most likely accurate crossmatches. From our original sample in Table <ref>, the large angular or physical separations of 10/19 possible associations (GRB 190211A/ILT J130635.93+415811.3, GRB 081025/ILT J162131.13+602837.1 B, GRB 071020/ILT J075839.09+325133.9, GRB 110521A/ILT J080031.58+454950.2, GRB 060206/ILT J133144.19+350305.6, GRB 140808A/ILT J144453.38+491305.9, GRB 191101A/ILT J164720.21+434437.0, GRB 080916B/ILT J105437.13+690416.8 and GRB 050509B/ILT J123612.26+285929.2) cast doubt on these being accurate counterparts. Two of these GRBs also have crossmatches at lower separations (GRB 081025/ILT J162131.13+602837.1 A and GRB 050509B/ILT J123613.00+285902.9) which are more likely to be the true counterparts. In addition, a significant number of sources are likely to be AGN and therefore most likely unrelated to the GRBs. While we cannot necessarily ensure that any of our crossmatches are free of AGN contamination, 7/19 candidates (GRB 190211A/ILT J130635.93+415811.3, GRB 110521A/ILT J080031.58+454950.2, GRB 080507/ILT J153442.14+562609.0, GRB 140808A/ILT J144453.38+491305.9, GRB 191101A/ILT J164720.21+434437.0, GRB 050509B/ILT J123613.00+285902.9 and GRB 050509B/ILT J123612.26+285929.2) are most probably AGN related. Eliminating these sources therefore leaves a final sample of eight matches, all for long GRBs, given in Table <ref>. We note that these criteria also eliminate the sources with extreme SFRs ≳200 M_⊙ yr^-1, implying that these were likely due to inaccurate crossmatching, and therefore incorrect redshift assignments, or that their radio flux is actually AGN dominated. The redshift distribution of this sample is low compared to the majority of the long GRB population. This bias is consistent with that expected from selection effects, as discussed in Section <ref>. In terms of offsets, the angular offsets for the majority of this sample are less than 10", the only exception being between GRB 081025 and ILT J162131.13+602837.1 A at ∼16". These lead to normalised offsets of ≤6.6, consistent with the findings of <cit.>, while the mean physical offset is 31.3±15 kpc. This is larger than would typically be expected for a long GRB host correlation; however, we note there are significant errors on the individual physical separations. Similarly to the full sample of our cross-matched sources, this final sample is dominated by ULIRGs. Two sources have radio spectral indices and IR colours consistent with star-forming galaxies, although we note that our sources' spectral indices are generally poorly constrained. We find a mean SFR of 59.2 ± 36.1 M_⊙ yr^-1, significantly greater than that exhibited by the general population of field galaxies and the CCSNe host sample. § CONCLUSIONS We have presented the results of a search for GRB host galaxies using the recent LoTSS DR2 catalogues. Using the density of sources in LoTSS DR2 to evaluate the probability of chance alignment, we identified 18 sources matched to 17 GRBs. 
This process indicated crossmatches at relatively large angular and normalised separations. This was likely to be partly an effect of the differences between optical and radio galaxy morphology but also implied some inaccurate crossmatches. We further evaluated the properties of the sources using both LoTSS data and that available in other catalogues. We found that a majority of our sources are consistent with ULIRG classifications, while a small minority are more likely to be AGN and unrelated to the matched GRBs. Our comparison CCSN host sample was also dominated by ULIRGs. We evaluated the star formation of our long GRB host sample, finding that they exhibited SFRs significantly higher (mean 66.8 ± 38.2 ) than that of both field galaxies (mean 18.2 ± 33.1 ) and the CCSN host sample (mean 8.5 ± 23.3 ). Based on both the results of our crossmatching process and the properties of the sources themselves, we have identified a final sample of eight crossmatches that are likely to be accurate host galaxy-GRB pairings. This sample consists entirely of long GRBs and is again dominated by ULIRGs exhibiting enhanced star formation (mean 59.2 ± 36.1 ). Future observations by LOFAR will expand this sample over the coming years while also reaching to even lower frequencies with the low-band antennas. This will enable a more complete picture of GRB hosts in this regime to be developed and determine how their properties influence and drive the formation of GRB progenitors. § ACKNOWLEDGEMENTS We thank Emily Eyles-Ferris, Klaas Wiersema and Beatriz Mingo for invaluable discussion and insight. We also thank the anonymous referee for their useful comments and improvements. RAJEF acknowledges funding from the UK Space Agency and the European Union’s Horizon 2020 Programme under the AHEAD2020 project (grant agreement number 871158). RLCS acknowledges STFC support. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester; the VizieR catalogue access tool, CDS, Strasbourg, France (DOI : 10.26093/cds/vizier); the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Analysis of GRB 050509B is based in part on observations collected at the European Southern Observatory, Paranal, Chile (ESO program 075.D-0261, PI J. Hjorth). LOFAR is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and which are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d'Orléans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the Dutch national e-infrastructure with support of the SURF Cooperative (e-infra 180169) and the LOFAR e-infra group. 
The Jülich LOFAR Long Term Archive and the German LOFAR network are both coordinated and operated by the Jülich Supercomputing Centre (JSC), and computing resources on the supercomputer JUWELS at JSC were provided by the Gauss Centre for Supercomputing e.V. (grant CHTB00) through the John von Neumann Institute for Computing (NIC). This research made use of the University of Hertfordshire high-performance computing facility and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the Italian LOFAR IT computing infrastructure supported and operated by INAF, and by the Physics Department of Turin university (under an agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy. § DATA AVAILABILITY LOFAR LoTSS data are available at <https://lofar-surveys.org/>. Swift-XRT data are available at <https://www.swift.ac.uk>. ESO VLT data are available at <https://archive.eso.org/>. The WISE, NVSS, WENSS, FIRST, VLASS and other catalogued data used in this work are available via the VizieR library at <https://vizier.cds.unistra.fr>, DOI : 10.26093/cds/vizier. mnras § SOURCE IMAGES § SPECTRAL ENERGY DISTRIBUTIONS
http://arxiv.org/abs/2307.01032v2
20230703140530
Constructible Witt theory of schemes
[ "Onkar Kamlakar Kale", "Girja S Tripathi" ]
math.AG
[ "math.AG", "math.KT" ]
We study the constructible Witt theory of étale sheaves of Λ-modules on a scheme X for coefficient rings Λ having finite characteristic not equal to 2 and prime to the residue characteristics of the scheme X. Our construction is based on the recent advances by Cisinski and Déglise on the six-functor formalism for derived categories of étale motives and offers a background for the study of constructible Witt theory as a cohomological invariant for schemes. § INTRODUCTION Witt groups of sheaves of modules on topological spaces have been considered as a generalised cohomology theory by Woolf in <cit.> and by Woolf and Schürmann in <cit.>. These provide signature-type invariants for topological spaces taking values in Witt theories related to the coefficient ring and have descriptions as symmetric forms on intersection cohomology complexes. This work grew out of our interest in the analogous algebraic setting, particularly with an interest in an algebraic counterpart of the interpretation of L-classes as stable homology operations from (topological) constructible Witt groups to ordinary rational homology. We consider the derived categories of constructible étale sheaves of modules over suitable coefficient rings and use Balmer's theory of triangular Witt groups to define constructible Witt groups: Given a scheme X and a ring Λ having finite characteristic not equal to 2 and prime to the residue characteristics of X, we define the constructible Witt groups W^i_c(X_ét,Λ) as the Witt groups of the triangulated category with duality (D^b_ctf (X_ét,Λ), D_X(T)) consisting of étale sheaves of Λ-modules having finite Tor-dimension and constructible cohomology sheaves. As shown in the last section of the paper, with the identification of W^i_c((Spec ℝ)_ét,Λ) with a ℤ/2ℤ-equivariant Witt theory of finitely generated projective Λ-modules, an incentive to consider constructible Witt theory is its identification with certain equivariant Witt theories related to the coefficient rings. The algebraic analog of the signature in <cit.> can be constructed by using the proper pushforwards for constructible Witt theory in Theorem <ref>. Let X be a projective real algebraic variety with the structure morphism f:X→ Spec ℝ and Λ a ring of finite characteristic not equal to 2. Then we have the induced proper pushforward for constructible Witt theory W^i(f_*): W^i_c(X_ét, Λ)→ W^i_c((Spec ℝ)_ét, Λ)=W^i_lf(Λ[ℤ/2ℤ]) where W^i_lf(Λ[ℤ/2ℤ]) denotes a ℤ/2ℤ-equivariant Witt theory of finitely generated projective Λ-modules. It can be observed that the theory of étale sheaves on Spec ℝ is the only one that can be related to an equivariant theory frequently studied, namely the ℤ/2ℤ-equivariant theory, since this is the only case in which the absolute Galois group is a familiar one (the cyclic group of order 2). In this paper we identify the constructible Witt theory of sheaves of Λ-modules for ℝ with the Witt theory of the group ring Λ[ℤ/2ℤ]. In general, over a field k having characteristic prime to the characteristic of the ring Λ, the groups W^i_c(Spec k,Λ) should be related to a Witt theory of finitely generated projective Λ-modules with an action of the absolute Galois group Gal(k^sep/k) (a profinite group) of the separable closure of k. Even with widening interest in equivariant theories we are not aware of any studies in Witt theory for actions of profinite groups, and the work in this paper should encourage more research on this. Now we describe the contents of the paper in detail. 
In section <ref> we quickly recall the Witt theory of triangulated categories with duality. In section <ref> the derived category D^b_ctf (X_ét,Λ) of étale sheaves of Λ-modules having finite Tor-dimensions and constructible cohomology sheaves is described as a triangulated category with duality. Under appropriate restrictions these categories have been widely studied for a long time in étale cohomology theory. In this paper we use the recent advances made by Cisinski and Déglise on étale motives to understand the category D^b_ctf (X_ét,Λ) by using an equivalent model DM_h,lc(X,Λ) of locally constructible h-motives. The categories DM_h,lc(X,Λ) offer a greater generality for the six-functor formalism and also a roadmap for studying the relations between constructible Witt groups and the rational motivic cohomology. In section <ref> we use the work of Cisinski and Déglise to recast the relevant results for the bounded derived category D^b_ctf ((-)_ét,Λ) and using these in section <ref> we develop the constructible Witt theory. Let B be an excellent noetherian scheme of dimension ≤ 2 and Λ a noetherian ring of positive characteristic prime to the residue characteristics of B. Let ϕ:S→ B be a regular separated finite type B-scheme and f:X→ S a separated morphism of finite type. The categories D^b_ctf(X_ét,Λ)⊂ D^b(X_ét,Λ) are closed under the six-functors of Grothendieck in D^b(X_ét,Λ). For a ⊗-invertible object T∈ D^b_ctf(S_ét,Λ) the functor D_X(T)=Rℋom (-, f^!(T)): D^b_ctf(X,Λ)^op→ D^b_ctf(X,Λ) is a duality functor on D^b_ctf(X_ét,Λ). In particular, we have triangulated categories with duality (D^b_ctf(B_ét,Λ), Rℋom(-, Λ)) and (D^b_ctf(S_ét,Λ), Rℋom(-, Λ)) Under the same assumptions on B, Λ and S as in the theorem above, let f: X→ S and g: Y→ S be separated S-schemes of finite type, and T be a ⊗-invertible object in D^b_ctf(S_ét,Λ). Then an ëtale morphism h:X→ Y of S-schemes induces a morphism of triangulated categories with duality h^*: (D^b_ctf(Y_ét,Λ), g^!T) → (D^b_ctf(X_ét,Λ), f^!T) and homomorphisms h^*:W^i_c(Y_ét, Λ)→ W^i_c(X_ét,Λ) of constructible Witt groups. If the morphism h:X→ Y is proper, then the pushforward morphism h_*: (D^b_ctf(X_ét,Λ), f^!T) → (D^b_ctf(Y_ét,Λ), g^!T) is a morphism of triangulated categories with duality and induces pushforward morphisms on constructible Witt groups h_*:W^i_c(X_ét, Λ)→ W^i_c(Y_ét,Λ). In the last section <ref> we provide some details about the description of constructible Witt theory of fields in terms of an equivariant Witt theory of finitely generated Λ-modules with Galois action. For a ring Λ of finite characteristic not equal to 2 the bounded derived category D^b_ctf((ℝ)_ét, Λ) is equivalent to bounded derived category D^b(Proj(Λ[ℤ/2ℤ])) of finitely generated projective Λ-modules with an action of the absolute Galois group Gal(ℂ/ℝ)=ℤ/2ℤ, and have an identification of constructible Witt groups W^i_c((ℝ)_ét, Λ) with a ℤ/2ℤ-equivariant Witt groups of finitely generated projective Λ-modules. Conventions. In the paper the coefficient ring Λ will be assumed to be of finite characteristic not equal to 2 and all the schemes considered will have the property that their residue characteristics are prime to the characteristic of Λ. All the schemes will be assumed to be noetherian and of finite Krull dimension. Acknowledgements. We are thankful to K. Arun Kumar for helpful discussions during the preparation of this manuscript. 
§ WITT THEORY OF TRIANGULATED CATEGORIES WITH DUALITY In this section we recall the basic theory of Witt groups for triangulated categories with duality developed by Balmer in <cit.>, <cit.>, <cit.>. §.§ Triangulated categories with duality Let δ =± 1. For triangulated categories (K_1,T_1) and (K_2, T_2), an additive functor F:K_1→ K_2 is said to be δ-exact if T_2^-1∘ F = F∘ T_1 and if for any exact triangle A B CT_1(A) the following triangle is exact: F(C) F(B) F(A)T_2(F(C)) Let (K, T) be a triangulated category and δ =±1. We will always assume that 1/2∈ K. A δ-duality is a δ-exact contravariant functor #:K→ K such that there exist an isomorphism ω : Id#∘# satisfying the conditions: ω_T(M) = T(ω_M) and (ω_M)^#∘ω_M^# = Id_M^# for any object M of K. Then the triple (K, #, ω ) is called a triangulated category with δ-duality. In case δ=1, we shall talk about a duality and in case δ=-1, we shall use the word skew-duality. Let (K, #, ω) be a tringulated category with δ-duality (δ=±1). * Let n∈ℤ. Then (K, T^n∘#, ω) is again a triangulated category with ((-1)^n·δ)-duality. * Also, (K, #, -ω) is again a tringulated category with δ-duality. Consider a triangulated category (K, #, ω) with δ-duality (δ=± 1). A symmetric space is a pair (P,ϕ) such that P is an object in K and ϕ:P P^# is an isomorphism such that ϕ^#∘ω_P=ϕ. A skew-symmetric form is a pair (P,ϕ) such that P is an object in K and ϕ:P P^# is an isomorphism such that ϕ^#∘ω_P=-ϕ. A skew-symmetric form in K is a symmetric form in (K,#,-ω). Orthogonal sum and isometries are defined as usual. Let (K, #, ω) be a triangulated category with δ-duality (δ=± 1). Then as suggested by the above example, the translated (or shifted) structure of triangulated category with (-δ)-duality is T(K, #, ω) (K, T∘#, (-δ)·ω). Also, if (K, #, ω) is a triangulated category with duality (i.e. δ=+1) then T^n (K, #, ω)=(K, T^n ∘#, (-1)^n(n+1)/2·ω) is a triangulated category with (-1)^n-duality, for all n∈ℤ. §.§ Witt groups of triangulated categories with duality In his 1937 paper <cit.>, Ernst Witt introduced a group structure and even a ring structure on the set of isometry classes of anisotropic quadratic forms, over an arbitrary field k. This object is now called the Witt group W(k) of k. Since then, Witt’s construction has been generalized and extended in many ways. In this paper we work in the setting of triangulated categories with duality developed by Balmer. Let (K, #, ω) be a triangulated category with δ-duality (for δ=±1) and 1/2∈ K. Let (P,ϕ) be a symmetric space. A pair (L,α) where L is an object of K and α: L→ P is a morphism, is called sublagrangian of (P,ϕ) if α^#ϕα=0. A triple (L,α, w) is called Lagrangian if the following triangle is exact: T^-1(L^#)LPL^#andif w is δ-symmetric, i.e. T^-1(w^#)=δ· w. Let (K, #, ω) be a triangulated category with δ-duality (for δ=±1). A symmetric space (P, ϕ) is neutral or metabolic if it possesses lagrangian. Let (K, #, ω) be a triangulated category with δ-duality (for δ=±1). We define the Witt Monoid of K to be the monoid of isometry classes of symmetric spaces endowed with the diagonal sum and we denote it by MW(K, #, ω). The set of isometry classes of neutral spaces forms submonoid NW(K, #, ω)⊂ MW(K, #, ω). The quotient monoid is a group, called the Witt Groups of (K, #, ω) and written as W(K, #, ω). Thus, W(K, #, ω) = MW(K, #, ω)/NW(K, #, ω) If (P,ϕ) is symmetric space, we write [P,ϕ] for its class in the corresponding Witt group. 
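For orientation, recall the classical Witt groups of a few fields in the sense of Witt's original construction mentioned above; when 2 is invertible, Balmer's zeroth triangular Witt group of the bounded derived category of finite-dimensional vector spaces recovers them. The isomorphisms below are standard facts, recalled here only for illustration and not taken from the text above:
W(\mathbb{C}) \cong \mathbb{Z}/2\mathbb{Z}\ \text{(detected by dimension mod 2)}, \qquad W(\mathbb{R}) \cong \mathbb{Z}\ \text{(detected by the signature)},
W(\mathbb{F}_q) \cong \begin{cases} \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}, & q \equiv 1 \pmod{4}, \\ \mathbb{Z}/4\mathbb{Z}, & q \equiv 3 \pmod{4}, \end{cases} \qquad q \ \text{odd}.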
We say that two symmetric spaces are Witt-equivalent if their classes in the Witt group are the same. We also define W^n (K, #, ω) := W(T^n (K, #, ω)) for all n∈ℤ. These are the shifted Witt groups of (K, #, ω). §.§ Functoriality Let (K_1, #_1, ω_1) and (K_2, #_2, ω_2) be triangulated categories with duality. Assume that #_1 and #_2 are both either exact or skew exact. A morphism of triangulated categories with duality refers to a covariant additive functor F: K_1→ K_2 that preserves exactness or skew-exactness, and satisfies the following conditions: F∘#_1 = #_2∘ F and F(ω_1)=ω_2. In this case, if (P,ϕ) is a symmetric space for #_1 then (F(P),F(ϕ)) is a symmetric space for #_2. If (L,α,w) is a lagrangian of the starting space, then (F(L),F(α),δ· F(w)) is a lagrangian of its image, where δ=± 1 comes from the δ-exactness of F. This shows that neutral forms are mapped to neutral forms. Hence F induces group homomorphisms W^n(F): W^n(K_1, #_1, ω_1) → W^n(K_2, #_2, ω_2) for any n∈ℤ. § DERIVED CATEGORY OF CONSTRUCTIBLE SHEAVES In this section we recall some well-known results for the full triangulated subcategory D^b_ctf(X_ét, Λ) of the derived category D(X_ét, Λ) of étale sheaves of Λ-modules on the small étale site X_ét. We describe the category D^b_ctf(X_ét, Λ) as a triangulated category with duality and discuss localization sequences. In the next section, applying the general machinery of Balmer, the constructible Witt theory of a scheme X with Λ-coefficients will be defined and studied via this triangulated category with duality. For the description of duality and localization we use the recent work of Cisinski and Déglise in <cit.>: the advantage of technical foundations in terms of étale motives is the greater generality of the results, which would otherwise require more restrictive hypotheses within the framework of <cit.>, <cit.> and <cit.>. For instance, instead of the restriction to finite coefficient rings, one can develop the constructible Witt theory for `good enough' coefficient rings as defined in <cit.>; this includes ℤ and any noetherian ring of positive characteristic. Another generality offered by the work of Cisinski and Déglise is the extension of the theory to more general base schemes, as recalled in Theorem <ref> in terms of the subcategory DM_h,lc(X,R) of locally constructible h-motives of the derived category DM_h(X,R) of h-motives. §.§ Constructible sheaves of Λ-modules We begin by recalling the category to be used for defining constructible Witt theory: the bounded derived category D^b_ctf(X_ét, Λ) of étale sheaves having finite Tor-dimension and constructible cohomology sheaves. For easy reference we recall the small étale site X_ét of a scheme X and the classically studied bounded derived category D^b_ctf(X_ét, Λ). As a category it is the category Ét/X of étale X-schemes, and coverings in the site X_ét are surjective families {X_i^'→ X^'} of morphisms in Ét/X. Thus, cat(X_ét)= Ét/X and cov(X_ét)= the collection of surjective families of morphisms in Ét/X. Sheaves on X_ét are called étale sheaves on X. Let X be a noetherian scheme and Λ a noetherian ring. A sheaf ℱ of Λ-modules on X_ét is called constructible if there exists a decomposition ⋃ _i=1 ^nX_i of X into finitely many locally closed subsets X_i such that each ℱ|_X_i is locally constant and the stalks of ℱ are finitely generated Λ-modules. Along with constructibility, the objects in the category D^b_ctf(X_ét, Λ) have an additional finiteness condition, namely that of having finite Tor-dimension.
Let ℱ^∙ be a bounded complex of sheaves of Λ-modules on X. We say that ℱ^∙ has finite Tor-dimension if there exist an integer n such that Tor_i(ℱ^∙, M)=0 for any i>n and any constant sheaf of Λ-modules M on X. Let X be a noetherian scheme and Λ be a noetherian ring. Denote by D(X_ét,Λ) the derived category of sheaves of Λ-modules over X_ét. We get the full subcategories D^∗(X_ét,Λ) by taking ∗=+,-,b. The category D^b_ctf(X_ét,Λ)⊂ D(X_ét,Λ) is the full subcategory consisting of bounded complexes of sheaves of Λ-modules with finite Tor-dimension and having constructible cohomology sheaves. The objects in the category D^b_ctf(X_ét,Λ) are complexes that are quasi-isomorphic in D(X_ét, Λ) to bounded complexes whose components are flat and constructible sheaves of Λ-modules <cit.>. §.§ Duality on constructible sheaves As mentioned in the beginning of this section we now use the six-functor formalism for triangulated subcategories DM_h,lc(X,Λ) of locally constructible h-motives of the derived category DM_h(X,Λ) of h-motives from <cit.> to the case of classically studied bounded derived categories D^b_ctf(X_ét,Λ) of étale sheaves having finite Tor-dimension and constructible cohomology. For basic definitions and related material about the derived category of étale motives the original paper <cit.> by Cisinski and Déglise can be consulted. In what follows recall that a scheme is said to be excellent if it has an open affine cover given by excellent rings: Examples of such rings include the ring ℤ of integers, fields, the ring ℤ_p of p-adic integers, finitely generated rings of the form R[x_1,⋯,x_r]/<f_1,⋯,f_n> over an excellent ring R, and localizations of excellent rings. Compared to <cit.> and <cit.> more general six-functor formalism for D^b_ctf(X_ét,Λ) follows from the two results <cit.> restated in the following. Let B be an excellent scheme of dimension ≤ 2 and Λ a noetherian ring of positive characteristic prime to the residue characteristics of B. Let ϕ:S→ B be a regular separated finite type B-scheme and f:X→ S a separated morphism of finite type. The categories D^b_ctf(X_ét,Λ)⊂ D^b(X_ét,Λ) are closed under the six-functors of Grothendieck in D^b(X_ét,Λ). For a ⊗-invertible object T∈ D^b_ctf(S_ét,Λ) the functor D_X(T)=Rℋom (-, f^!(T)): D^b_ctf(X_ét,Λ)^op→ D^b_ctf(X_ét,Λ) is a duality functor on D^b_ctf(X_ét,Λ). In particular, we have triangulated categories with duality (D^b_ctf(B_ét,Λ), Rℋom(-, Λ)) and (D^b_ctf(S_ét,Λ), Rℋom(-, Λ)). Under the assumptions in the theorem the triangulated subcategory DM_h,lc(X,Λ)⊂ DM_h(X,Λ) of locally constructible h-motives is closed under the six-functors of Grothendieck in DM_h(X,Λ) and for a locally constructible ⊗-invertible object U∈ DM_h(S,Λ), the functor D_X(U)=Rℋom (-, f^!(U)): DM_h,lc(X,Λ)^op→ DM_h,lc(X,Λ) is a duality functor on DM_h,lc(X,Λ). Since the equivalence of categories D^b_ctf(X_ét,Λ) DM_h,lc(X,Λ) in <cit.> is compatible with the six functor formalism in the two settings, the proof of the theorem is complete. Notation. With the notations of the above theorem the triangulated category D^b_ctf(X_ét,Λ) with the duality D_X(T) will be denoted by (D^b_ctf(X_ét,Λ), D_X(T)) or (D^b_ctf(X_ét,Λ), f^!T). Under the same assumptions on B, Λ and S as in the theorem above, let f: X→ S and g: Y→ S be separated S-schemes of finite type, and T be a ⊗-invertible object in D^b_ctf(S_ét,Λ). Let h:X→ Y be a morphism of S-schemes. 
If h is étale, then the pullback morphism h^*: D^b_ctf(Y_ét,Λ) → D^b_ctf(X_ét,Λ) is a morphism of triangulated categories with duality h^*: (D^b_ctf(Y_ét,Λ), g^!T) → (D^b_ctf(X_ét,Λ), f^!T). If the morphism h:X→ Y is proper, then the pushforward morphism h_*:D^b_ctf(X_ét,Λ) → D^b_ctf(Y_ét,Λ) is a morphism of triangulated categories with duality h_*: (D^b_ctf(X_ét,Λ), f^!T) → (D^b_ctf(Y_ét,Λ), g^!T). We explain the details for the pushforward h_* for proper morphisms of schemes and leave the part on h^* as an exercise. We show that in the square of triangulated categories formed by the horizontal duality functors D_X(T): D_ctf^b(X_ét,Λ)^op→ D_ctf^b(X_ét,Λ) and D_Y(T): D_ctf^b(Y_ét,Λ)^op→ D_ctf^b(Y_ét,Λ) and the vertical functors h_*, the two compositions are naturally isomorphic, using the relations among the functors in this square. We have f=g∘ h. Given an element F∈ D_ctf^b(X_ét,Λ) we have the following natural identifications h_*(D_X(T)(F)) = h_*(Rℋom (F, f^!(T))) ≃ h_*(Rℋom (F, h^!g^!(T))) ≃ Rℋom_Y (h_!F, g^!T) ≃ Rℋom_Y (h_*F, g^!T) = D_Y(T)(h_*(F)), where in the second to last identification we use the fact that h_!=h_* for the proper morphism h. §.§ Localization We know that for a closed immersion i:Z↪ X of schemes the induced exact functor i_*:Shv(Z_ét,Λ)→ Shv(X_ét,Λ) is fully faithful and its essential image is the subcategory of those sheaves whose support is contained in Z. The natural transformation Ri_*=i_*:D(Z_ét,Λ)→ D(X_ét,Λ) of the derived categories induces an equivalence of D(Z_ét,Λ) with D_Z(X_ét,Λ), the strictly full saturated subcategory of D(X_ét,Λ) consisting of complexes whose cohomology sheaves are supported in Z. Let j:U=X-Z↪ X be the complementary open immersion and j^*:Shv(X_ét,Λ)→ Shv(U_ét,Λ) the induced map of sheaves. Then we have an exact sequence of triangulated categories D(Z_ét,Λ) → D(X_ét,Λ) → D(U_ét,Λ), with the first functor induced by i_* and the second by j^*, in which D(Z_ét,Λ) can be identified with D_Z(X_ét,Λ). Let D^b_ctf,Z(X_ét,Λ)⊂ D_ctf^b(X_ét,Λ) be the strictly full saturated triangulated subcategory consisting of bounded complexes of finite Tor-dimension with constructible cohomology sheaves supported in Z. Under the assumptions in theorem <ref> for the scheme X and the ring Λ the six-functor formalism for étale motives in <cit.> implies that the natural equivalence of the categories D(Z_ét,Λ) with D_Z(X_ét,Λ) induces an equivalence of D_ctf^b(Z_ét,Λ) with D^b_ctf,Z(X_ét,Λ). With the same assumptions as in Theorem <ref> for the schemes B, S, X, the ring Λ and the map f:X→ S, let i:Z↪ X be a closed immersion and j:U=X-Z↪ X the complementary open immersion. The induced functors i_*:D_ctf^b(Z_ét,Λ)→ D_ctf^b(X_ét,Λ) and j^*:D_ctf^b(X_ét,Λ) → D_ctf^b(U_ét,Λ) are natural transformations of triangulated categories with duality (a.) i_*:(D_ctf^b(Z_ét,Λ), D_Z(T)) → (D_ctf^b(X_ét,Λ), D_X(T)) (b.) j^*:(D_ctf^b(X_ét,Λ), D_X(T)) → (D_ctf^b(U_ét,Λ), D_U(T)) for every ⊗-invertible object T∈ D^b_ctf(S_ét,Λ). This is a special case of Proposition <ref>. Under the equivalence of D_ctf^b(Z_ét,Λ) with D^b_ctf,Z(X_ét,Λ) we will consider D^b_ctf,Z(X_ét,Λ) as a triangulated category with duality and denote the duality by D_Z(T) for a ⊗-invertible object T∈ D^b_ctf(S_ét,Λ). The localization property of the categories D^b_ctf((-)_ét,Λ) is the following exact sequence of triangulated categories, with the notations of this subsection. Let B be an excellent scheme of dimension ≤ 2 and Λ a noetherian ring of positive characteristic prime to the residue characteristics of B.
Let ϕ:S→ B be a regular separated finite type B-scheme and f:X→ S a separated morphism of finite type. The exact sequence <ref> of triangulated categories induces a localization sequence of triangulated categories with duality (D^b_ctf,Z(X_ét,Λ), D_Z(T)) (D^b_ctf(X_ét,Λ), D_X(T)) D^b_ctf(U_ét,Λ), D_U(T)) for a ⊗-invertible object T∈ D^b_ctf(S_ét,Λ). Using Grothendieck's six-functor formalism for DM_h,lc((-)_ét,Λ) in <cit.> and the identification of D^b_ctf((-)_ét,Λ) with DM_h,lc((-)_ét,Λ) in <cit.> we observe that the morphisms of triangulated categories in <ref> define a localizing sequence. In view of the lemma <ref> the proof of the theorem is complete. § WITT THEORY OF CONSTRUCTIBLE SHEAVES In this section we now define the constructible Witt groups of étale sheaves of Λ-modules on a noetherian scheme X as Balmer's Witt groups of the triangulated category with duality D^b_ctf(X_ét,Λ) discussed in the section <ref>. We describe some of their basic properties. We will work under the assumptions as in Theorem <ref> for the schemes B, S, X, the ring Λ and the map f:X→ S. The category (D^b_ctf(X_ét,Λ),D_X(T)) is a triangulated category with duality for a ⊗-invertible object T∈ D^b_ctf(S_ét,Λ). The constructible Witt groups of sheaves of Λ-modules on X are Witt groups of this triangulated category with duality. We will denote these by W^n_c(X_ét, Λ, T). In notations we may suppress the mention of the ⊗-invertible object T and use the notation W^n_c(X_ét, Λ). §.§ Functoriality for constructible Witt groups We can directly use Proposition <ref> to get the functoriality of constructible Witt groups. Consider the same assumptions on B, X, Λ and S as in Proposition <ref> and T a ⊗-invertible object in D^b_ctf(S_ét,Λ). For an étale morphism h:X→ Y of S-schemes, we get a morphism of triangulated categories with duality h^*: (D^b_ctf(Y_ét,Λ), g^!T) → (D^b_ctf(X_ét,Λ), f^!T). Then h^∗ induces a group homomorphism of constructible Witt groups W^n(h^∗):W^n_c(Y_ét,Λ)→ W^n_c(X_ét,Λ) for all n∈ℤ. Also, when h: X→ Y is a proper morphism of S-schemes, we get a morphism of triangulated categories with duality h_∗: (D^b_ctf(X_ét,Λ), g^!T) → (D^b_ctf(Y_ét,Λ), f^!T). Then h_∗ induces a group homomorphism of constructible Witt groups W^n(h_∗):W^n_c(X_ét,Λ)→ W^n_c(Y_ét,Λ) for all n∈ℤ. §.§ Localization sequence for constructible Witt groups Now we describe the localization sequence for the constructible Witt theory. Recall that a localization of triangulated categories J K L is an exact sequence of triangulated categories, that is, L is a localization of K with respect to a saturated class of morphisms S, and the triangulated category J is the full subcategory of K consisting of objects X ∈ K for which q(X) = 0 in L. If (K, #,ω) is a triangulated category with duality and the class of morphisms S is compatible with duality on K i.e. #(S)=S, then J endowed with restriction of # and S^-1K endowed with the localization of # are triangulated categories with duality and the exact natural transformations j and q are morphisms of triangulated categories with duality. Such a localization sequence of triangulated categories with duality gives a 12-term exact sequence of corresponding Witt groups. The localization sequence for the constructible Witt theory is a consequence of the localization sequence of triangulated categories with duality in theorem <ref>. 
Given a closed immersion i:Z↪ X with the complementry open immersion j:U=X-Z ↪ X, theorem <ref> gives the localization of triangulated categories with duality (D^b_ctf(Z_ét,Λ), D_Z(T)) (D^b_ctf(X_ét,Λ), D_X(T)) D^b_ctf(U_ét,Λ), D_U(T)). The corresponding 12-term exact sequence of constructible Witt groups can be written as ⋯→ W_c^n-1(U_ét,Λ) W^n_c(Z_ét,Λ) W^n_c(X_ét,Λ) W^n_c(U_ét,Λ) W^n+1_c(Z_ét,Λ)→⋯ using the 4-periodicity of triangular Witt groups. §.§ Homotopy Invariance With the techniques used in this paper we are not in position to prove homotopy invariance of contructible Witt theory simply because duality is not compatible with the six-functor formalism for D^b_ctf((-)_ét,Λ) in the same generality. To be specific, under the running assumptions in this section, for a vector bundle p:V→ X, the pullback functor p^*:D^b_ctf(X_et,Λ) → D^b_ctf(V_et,Λ) is fully faithful since the unit of adjunction 1→ p_*p^* is an isomorphism but p_* is not compatible with dualities. It is instinctive to expect homotopy invariance for this theory, but then one should develop the theory in a more flexible setting for six-functor formalism. § CONSTRUCTIBLE WITT THEORY OF ℝ In this section we will identify the constructible Witt theory of étale sheaves of Λ-modules on ℝ, for a ring Λ of finite characteristic not equal to 2, as a ℤ/2ℤ-equivariant Witt theory of Λ. Also for a real projective variety f:X→ℝ we construct natural homomorphisms W^i(f_*): W^i_c(X, Λ)→ W^i_c(ℝ, Λ)=W^i_lf(Λ[ℤ/2ℤ]) which define an algebraic version of signature for X. Let X= k be the affine scheme defined by a field k. Let k^sep be a separable closure of k and let Gal(k^sep/k) be the Galois group of k^sep/k equipped with the canonical structure of a profinite group. For each k-scheme X^' we denote by X^'(k^sep) the set of k^sep-valued points on X^'/k, i.e. the set of k-morphisms k^sep→ X^'. A k^sep-valued point of X^' corresponds uniquely to a point x^'∈ X^' together with a k-homomorphism k(x^')→ k^sep. The following equivalence of the category of étale sheaves of Λ-modules on k with the category of Λ-modules with continuous Gal(k^sep/k)-action for the profinite topology on the Galois group is well-known. For the scheme k the functor defined by taking stalks at the geometric point of k given by a choice k⊂ k^sep of the separable closure ℱ↦ _k' ℱ( k^'), k⊂ k'⊂ k^sep and k' a finite extension of k is an equivalence between the category of sheaves of Λ-modules on ( k)_ét and the category of continuous Λ[G]-modules for a group ring Λ[G]. Thus, in the case of the field ℝ of real numbers we have an equivalence ϕ: Shv((ℝ)_ét, Λ)→ Mod(Λ[ℤ/2ℤ]) of the category of étale sheaves of Λ-modules on ℝ with the category of all ℤ/2ℤ-equivariant Λ-modules. See <cit.> for details. §.§ Flat and constructible sheaves on (ℝ)_ét Let ℱ be a flat and constructible sheaf of Λ-modules on (ℝ)_ét. As ℱ is a flat sheaf, the stalk of ℱ at the single geomtric point in ℝ given by ℂ is a flat Λ-module. Also ℱ being a constructible sheaf of Λ-modules, it is locally constant and its stalk ℱ_x is finitely generated Λ-module. Thus, for a flat and constructible sheaf ℱ on (ℝ)_ét the stalk ℱ_x is a flat and finitely generated Λ-module. In this paper the coefficient rings are assumed to be noetherian and therefore for a flat and constructible sheaf ℱ the stalk ℱ_x being flat and finitely generated is a finitely generated projective Λ-module with a G action. 
Under the assumption that the characteristic of Λ is different from 2 it so happens that the category of finitely generated projective Λ-modules with ℤ/2ℤ-action is equivalent to the category of finitely generated projective Λ[ℤ/2ℤ]-modules. For a lack of reference known to us we will recall this useful fact in terms of equivariant vector bundles on schemes. For an affine G-scheme X, under the assumption that |G| is invertible in Γ(X, O_X), every short exact sequence of equivariant vector bundles splits equivariantly. Given a short exact sequence 0→ℰℒℳ→ 0 of G-equivariant vector bundles on X, choose a (non-equivariant) splitting s:ℳ→ℒ and observe that the morphism s̅:ℳ→ℒ, a↦1/|G|∑_g∈ Gg· s(g^-1· a), a∈ℳ(U), U⊂ X open of vector bundles is a G-equivariant splitting of ϕ. For an equivariant affine G-scheme ( R,τ), let ρ__R_τ (or simply ρ__R) be the skew group-ring R_τ G viewed as the twisted regular representation of G over R. The G-action on ρ__R_τ is given by g'·(Σ_g∈ Gx_g· e_g)= Σ_g∈ Gτ_g'(x_g)· e_g'g, where {e_g:g∈ G} is the standard basis of ρ__R_τ For an affine G-scheme R, under the assumption that |G| is invertible in R, every G-equivariant finitely generated projective R-module is a summand in a direct sum of regular representations. Let τ: G→Aut_Sch/ℤ( R) denote the G-action on R. Given an equivariant finitely generated projective R-module P choose an epimorphism α: R^n→ P. The free rank 1 module R is a G-equivariant summand in ρ__R_τ via the inclusion R⟶ρ__R_τ, x↦∑_g∈ Gτ(x)e_g and hence R^n↪ R^n⊗_R ρ__R_τ = ρ__R_τ^n is a G-equivariant direct summand. The induced surjective map α:ρ__R_τ^n → P (x, ∑_g∈ Gx_g· e_g )↦∑_g∈ Gx_g·α(x) is G-equivariant and we have an exact sequence of G-equivariant finitely generated projective R-modules ker αρ__R_τ^n P. Choosing an equivariant splitting by Lemma <ref> we see that P is a direct summand of ρ_R_τ^n. For the trivial action of the group ℤ/2ℤ on Λ the category of ℤ/2ℤ-equivariant finitely generated projective Λ-modules is equivalent to the category of finitely generated projective Λ[ℤ/2ℤ]-modules. Thus, for a constructible and flat sheaf ℱ of Λ-modules on (ℝ)_ét, the stalk ℱ_x is a finitely generated projective Λ[ℤ/2ℤ]-module. Follows from the Lemma <ref> and discussion in the beginning of this subsection in view of the assumption that the characteristic of Λ is different from 2. §.§ Equivalence of D^b_ctf((ℝ)_ét,Λ) with D^b(Proj(Λ[G])) and Constructible Witt groups of ℝ Using the description of stalks of flat and constructible sheaves on (ℝ)_ét as finitely generated projective Λ[ℤ/2ℤ]-modules in Corollary <ref> we now describe the constructible Witt theory of Λ-modules on ℝ as a ℤ/2ℤ-equivariant Witt theory of Λ. The equivalence ϕ: Shv((ℝ)_ét, Λ)→ Mod(Λ[ℤ/2ℤ]) in Theorem <ref> is an exact equivalence of categories and induces an exact equivalence on the associated categories of chain complexes ϕ: Ch(Shv((ℝ)_ét, Λ))→ Ch(Mod(Λ[ℤ/2ℤ])). Let Φ: D((ℝ)_ét, Λ)→ D(Mod(Λ[ℤ/2ℤ])) be the induced functor on the derived categories. The category D^b_ctf((ℝ)_ét,Λ) of bounded complexes sheaves of Λ-modules having Tor-finite dimension and constructible cohomology sheaves can now be identified with the more familiar bounded derived category of D^b(Proj(Λ[ℤ/2ℤ])) of finitely generated projective Λ[ℤ/2ℤ]-modules giving us the the identification of the constructible Witt theory of ℝ with a Witt theory of finitely generated projective Λ[ℤ/2ℤ]-modules. Let Λ a ring of finite characteristic not equal to 2. 
Then the functor in (<ref>) induces an equivalence of triangulated categories with duality Φ: D^b_ctf((ℝ)_ét,Λ), ℋom(-,Λ))→ (D^b(Proj(Λ[ℤ/2ℤ])), Hom_Λ[ℤ/2ℤ](-, Λ)) and induces an isomorphism of constructible Witt theory of ℝ with an equivariant Witt theory of finitely generated projective Λ-modules for the action of the group ℤ/2ℤ (the absolute Galois group of ℝ). The category D^b_ctf((ℝ)_ét,Λ) ⊂ D((ℝ)_ét, Λ) is the full triangulated subcategory consisting of complexes that are quasi-isomorphic in D((ℝ)_ét, Λ) to bounded complexes whose components are flat and constructible sheaves of Λ-modules. By Corollary <ref> we have the restriction Φ:D^b_ctf((ℝ)_ét,Λ) → D^b(Proj(Λ[ℤ/2ℤ])) of Φ is functor into the bounded derived category D^b(Proj(Λ[ℤ/2ℤ])) of finitely generated projective Λ[ℤ/2ℤ]-modules. Recall that a quasi-inverse ψ: Mod(Λ[ℤ/2ℤ])→ Shv((ℝ)_ét, Λ) of the functor ϕ: Shv((ℝ)_ét, Λ)→ Mod(Λ[ℤ/2ℤ]) is defined by associating to a Λ[ℤ/2ℤ]-module M it's the ℤ/2ℤ-fixed points M^ℤ/2ℤ with the base scheme ℝ and the module M to the scheme ℂ. It is an exact functor and induces a quasi-inverse Ψ : D^b(Proj(Λ , ℤ/2ℤ)) → D^b_ctf((ℝ)_ét,Λ) of Φ. The diagram of triangulated categories D_ctf^b((ℝ)_ét,Λ)^op[rrr]^ ℋom(-, Λ) [d]^Φ D_ctf^b((ℝ)_ét,Λ)[d]^Φ D^b(Proj(Λ[ℤ/2ℤ]))^op[rrr]^ Hom_Λ[ℤ/2ℤ](-, Λ) D^b(Proj(Λ[ℤ/2ℤ])) commutes since the sheaf Λ defining the duality ℋom(-, Λ) is the constant sheaf. Therefore the equivalence Φ is an equivalence of triangulated categories with duality. This completes the proof. §.§ Signatures for proper real algebraic varieties For a Noetherian ring Λ of positive characteristic not equal to 2 the Theorem <ref> provides an equivalence D^b_ctf((ℝ)_ét, Λ) D^b(Proj(Λ [ℤ/2ℤ])) of the derived category of bounded complexes sheaves of Λ-modules having Tor-finite dimension and constructible cohomology sheaves with the bounded derived category of finitely generated projective Λ[ℤ/2ℤ]-modules. This equivalence allows us to identify the constructible Witt theory of ℝ with a ℤ/2ℤ-equivariant Witt theory of finitely generated projective Λ-modules, and the algebraic analog of signature defined in <cit.> is given by the following. Let X be a projective real algebraic variety with the structure morphism f:X→ℝ and Λ a ring of finite characteristic not equal to 2. Then we have the induced proper pushforward for constructible Witt theory W^i(f_*): W^i_c(X, Λ)→ W^i_c(ℝ, Λ)=W^i_lf(Λ[ℤ/2ℤ]) where W^i_lf(Λ, ℤ/2ℤ) denotes a ℤ/2ℤ-equivariant Witt theory of finitely generated projective Λ-modules. The proof follows from the Theorem <ref> and the Proposition <ref> for the duality defined by the ⊗-invertible object Λ.
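As a quick illustration of why the assumption that 2 is invertible makes the passage between ℤ/2ℤ-equivariant Λ-modules and Λ[ℤ/2ℤ]-modules so manageable, the following small sketch (ours, not from the paper) checks the ring-level fact that Λ[ℤ/2ℤ] splits as a product of two copies of Λ via the idempotents (1±s)/2; the choice Λ = 𝔽_3 is purely for illustration, and any coefficient ring in which 2 is invertible behaves the same way.

# Sanity check (not from the paper): in Lambda[Z/2Z] with 2 invertible in Lambda,
# e_+ = (1+s)/2 and e_- = (1-s)/2 are orthogonal idempotents summing to 1, so the
# group ring decomposes as a product of two copies of Lambda. We take Lambda = F_3
# and represent a + b*s as the pair (a, b) modulo 3, using s^2 = 1.

P = 3  # characteristic of Lambda, not equal to 2

def mul(x, y):
    # (a + b*s)(c + d*s) = (ac + bd) + (ad + bc)*s
    a, b = x
    c, d = y
    return ((a * c + b * d) % P, (a * d + b * c) % P)

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

inv2 = pow(2, -1, P)           # 1/2 in F_3
e_plus = (inv2, inv2)          # (1 + s)/2
e_minus = (inv2, (-inv2) % P)  # (1 - s)/2

assert mul(e_plus, e_plus) == e_plus      # idempotent
assert mul(e_minus, e_minus) == e_minus   # idempotent
assert mul(e_plus, e_minus) == (0, 0)     # orthogonal
assert add(e_plus, e_minus) == (1, 0)     # e_+ + e_- = 1
print("Lambda[Z/2Z] splits as Lambda x Lambda when 2 is invertible in Lambda.")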
http://arxiv.org/abs/2307.02232v1
20230705122254
Comparative Analysis of THz Signal Emission from SiO$_2$/CoFeB/Metal Heterostructures: Wideband and High-Frequency THz Signal Advantage of PtBi-based Emitter
[ "Tristan Joachim Winkel", "Tahereh Sadat Parvini", "Finn-Frederik Stiewe", "Jakob Walowski", "Farshad Moradi", "Markus Münzenberg" ]
physics.optics
[ "physics.optics" ]
tristan.winkel@uni-greifswald.de Institut für Physik, Universität Greifswald, Greifswald, Germany tahereh.parvini@uni-greifswald.de Institut für Physik, Universität Greifswald, Greifswald, Germany Institut für Physik, Universität Greifswald, Greifswald, Germany Institut für Physik, Universität Greifswald, Greifswald, Germany ICELab, Aarhus University, Denmark Institut für Physik, Universität Greifswald, Greifswald, Germany Spintronic THz emitters have attracted much attention due to their desirable properties, such as affordability, ultra-wideband capability, high efficiency, and tunable polarization. In this study, we investigate the characteristics of THz signals, including their frequency, bandwidth, and amplitude, emitted from a series of heterostructures with ferromagnetic (FM) and nonmagnetic (NM) materials. The FM layer consists of a wedge-shaped CoFeB layer with a thickness of 0 to 5 nm, while the NM materials include various metals such as Pt, Au, W, Ru, Pt_%92Bi_%8, and Ag_%90Bi_%10 alloys. Our experiments show that the emitter with Pt-NM layer has the highest amplitude of the emitted THz signal. However, the PtBi-based emitter exhibits a higher central THz peak and wider bandwidth, making it a promising candidate for broadband THz emitters. These results pave the way for further exploration of the specific compositions of Pt_1-xBi_x for THz emitter design, especially with the goal of generating higher frequency and wider bandwidth THz signals. These advances hold significant potential for applications in various fields such as high-resolution imaging, spectroscopy, communications, medical diagnostics, and more. Comparative Analysis of THz Signal Emission from SiO_2/CoFeB/Metal Heterostructures: Wideband and High-Frequency THz Signal Advantage of PtBi-based Emitter Markus Münzenberg August 1, 2023 =========================================================================================================================================================== § INTRODUCTION The Hall effect, discovered in 1879 <cit.>, is a crucial technique in modern measurement technology, allowing for the determination of magnetic field strength and charge carrier type, density, and mobility. The spin Hall effect (SHE), which is the quantum mechanical analogue of the Magnus effect, was predicted in 1971 <cit.> and later experimentally confirmed in the early 2000s <cit.>. The SHE arises from the interplay between the electron's spin and motion in the presence of spin-orbit coupling, and has significant implications for spintronics and related technologies such as the generation of terahertz (THz) radiation, high-speed data processing, and quantum information processing <cit.>. THz waves have potential applications in materials science, biology, and medicine, and further research could broaden their use in spectroscopy, imaging, and communication <cit.>. Typical THz emitters are made up of ferromagnetic (FM)/nonmagnetic (NM) heterostructures where the FM layer is pumped using a femtosecond laser to generate a spin-dependent excitation of electrons. The inverse spin Hall effect (ISHE) in the NM layer converts the longitudinal spin-polarized current (𝐉_s) into a transient transverse current (𝐉_c) according to 𝐉_c=θ_SH𝐉_s×𝐌/|𝐌|, resulting in the emission of terahertz radiation <cit.>. 
θ_SH is the spin Hall angle that characterizes the efficiency of the spin Hall effect and can be enhanced through various means such as selecting materials with large spin-orbit coupling, optimizing material properties including crystal structure, impurity level, and thickness, as well as controlling experimental conditions. By precisely determining the materials and thickness in THz emitters, it becomes possible to optimize the amplitude, frequency, and bandwidth of the THz field. In this study, we fabricated a series of FM/NM heterostructures to conduct a comparative study on the characteristics of the emitted THz signal. The FM layer consists of a wedge layer of Co_40Fe_40B_20 with a thickness ranging from 0 to 5 nm. Meanwhile, the NM layer comprised a diverse range of materials, including Pt (2, 3, and 4 nm), W (2 nm), Au (2 nm), Ru (4 nm), as well as the alloys Pt_%92Bi_%8 (2 nm) and Ag_%90Bi_%10 (2 nm). Our investigation focused on evaluating the amplitude, central frequency, and bandwidth of the THz signal emitted from all samples. Furthermore, we analyzed the determining factors that influenced our observations. § MATERIALS AND METHODS In this study, we adopted a conventional approach of employing the substrate/FM/NM layer configuration for the fabrication of THz emitters. Initially, a magnetic layer consisting of Co_40Fe_40B_20 (CoFeB) was deposited onto a fused silica substrate using magnetron-sputtering technique. To explore the influence of FM-layer thickness on the characteristics of the emitted THz signal, we designed the layer in a wedge shape, allowing for a variable thickness ranging from 0 to 5 nm. To investigate the contribution of the NM-layer material to the terahertz signal, we examined various pure heavy metals such as Pt, W, Au, Ru , as well as alloys Pt_%92Bi_%8 and Ag_%90Bi_%10 as NM-layers. Furthermore, We fabricated a series of heterostructures with varying thicknesses of pure Pt layers to study the impact of Pt layer thickness on the emitted THz signal. The NM-layers were deposited on the CoFeB using electron beam (e-beam) evaporation. The alloys are formed through a controlled combined deposition of both constituent materials. The ratio between the alloyed materials is determined by adjusting the deposition rates, monitored using quartz crystal monitoring. The required ratio of the deposition rates (v_a/v_b) can be calculated using the following formula v_a/v_b= M_a×ρ_b/M_b×ρ_a.r, where ρ represents the density, M denotes the molar mass, and r signifies the desired molar ratio of the alloyed materials. Due to the high sensitivity of spin currents to interface contaminations or oxidation, we implemented strict precautions during sample fabrication. This involved conducting the fabrication process and transferring the samples between chambers under high vacuum conditions, ensuring the preservation of interface integrity without any modifications. § RESULTS AND DISCUSSION Our previous publication <cit.> have provided a detailed description of our experimental setup for generating and detecting THz signals from spintronic emitters. Fig. <ref> provides a simple schematic of our THz emitters and the corresponding mechanism for THz generation. We generate THz radiation in heterostructures by employing femtosecond laser pulses with central wavelengths of ∼810 nm, pulse durations of 40 fs, and repetition rates of 80 MHz. The detection of THz signals is carried out using commercial low-temperature grown-GaAs (LT-GaAs) Auston switches with a bandwidth greater than 4 THz. 
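As a concrete reading of the co-deposition rate formula given in the previous section, the short sketch below evaluates the required rate ratio for a Pt-Bi co-deposition; the molar masses and densities are nominal handbook values inserted only for illustration, and are not parameters reported in this work.

# Illustrative use of v_a/v_b = (M_a * rho_b)/(M_b * rho_a) * r from the text.
# The material constants below are nominal handbook values (assumption, not taken
# from the paper); r is the desired molar ratio n_a : n_b.

def rate_ratio(M_a, rho_a, M_b, rho_b, r):
    """Return the required ratio of deposition rates v_a / v_b."""
    return (M_a * rho_b) / (M_b * rho_a) * r

# Example: a Pt(92%)Bi(8%) alloy, i.e. molar ratio r = 92/8, with a = Pt and b = Bi.
M_Pt, rho_Pt = 195.08, 21.45   # g/mol, g/cm^3 (nominal values)
M_Bi, rho_Bi = 208.98, 9.78    # g/mol, g/cm^3 (nominal values)

print(rate_ratio(M_Pt, rho_Pt, M_Bi, rho_Bi, r=92 / 8))   # roughly 4.9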
To obtain a temporal THz spectrum at each position of the sample, we utilized a two-dimensional scanning technique with motorized stages. A comprehensive description of the methodology can be found in <cit.>. Additionally, the samples were subjected to scanning using a magneto-optic Kerr effect setup to investigate the relationship between the position and the thickness of the CoFeB layer. The obtained data was fitted using an error function and combined with the data acquired from the THz setup. This integration allowed us to determine the THz emission corresponding to each sample as a function of the CoFeB layer thickness. Under the illumination of ultrafast femtosecond (fs) laser pulses, spin-polarized electrons (j_s) within the FM layer get excited and diffuse into the NM layer, forming a spin current. The inverse spin Hall effect (ISHE) comes into play, converting this spin current into a transient transverse charge current (j_c) within the NM layer, ultimately leading to the emission of terahertz (THz) radiation. In the plane-wave approximation, the amplitude of the electric field of the emitted THz signal reads <cit.> E_THz = (A F)/(d_NM+d_FM) · j_s^0 · t_FM/NM · λ_NM tanh(d_NM/λ_NM) · θ_SH · e · Z(ω), where the terms respectively represent pump-pulse absorption, spin-current generation, spin-to-charge current conversion, and charge-current-to-electric-field conversion. The parameters A, j_s^0, t_FM/NM, λ_NM, and θ_SH represent the absorbed fraction of the incident pump-pulse fluence (F), the generated spin-current density per pump-pulse excitation density, the interfacial spin-current transmission amplitude between the FM and NM layers, the spin-current relaxation length in the NM layer, and the spin Hall angle specific to the NM material, respectively. The frequency-dependent impedance of the emitter, denoted as Z(ω), is given by Z(ω) = Z_0/(n_1+n_2+Z_0 G(ω)), where Z_0≈377 Ω represents the free-space impedance, and n_1(ω) and n_2(ω) are the refractive indices of air and the substrate, respectively, and G(ω) represents the THz sheet conductance. Consequently, by meticulously controlling the parameters in Eq. <ref>, it is possible to achieve the desired signal. Subsequently, we proceed to investigate the role of these parameters in the emitted THz signal from our fabricated emitters. Dependence of THz Signal Amplitude, Central Frequency, and Bandwidth on Heterostructure Thickness and NM Materials: Fig. <ref> illustrates the emitted THz signal as a function of delay time and laser spot position for distinct samples. Accordingly, irrespective of the type of metal layers, no THz emission is detected at the thin end of the wedge, where the CoFeB thickness is 0 nm. As the thickness of the CoFeB layer increases, the amplitude of the emitted THz signal exhibits growth, peaking at around 2 nm of CoFeB thickness. It is noteworthy that beyond this critical point, further increases in CoFeB thickness do not have a significant impact on the amplitude of the THz signal. This behavior can be attributed to the progressive diffusion of an increasing number of electrons into the NM layer as the CoFeB layer thickness increases up to 2 nm. The influx of electrons enhances the induced spin current, thereby generating stronger THz radiation. However, beyond a 2 nm thickness of the CoFeB layer, the presence of heightened structural defects, electron scattering, and resistance within the magnetic layer hinders further amplification of the THz signal amplitude. As a result, the THz signal reaches a plateau beyond this critical thickness.
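The competition between the factors in the expression for E_THz above also anticipates the dependence on the NM thickness discussed next: the tanh term saturates while the thickness prefactor and the impedance keep decreasing. The rough numerical sketch below only exhibits this qualitative trend; all parameter values are placeholders and are not fitted to the measured data.

# Qualitative sketch (not a fit to the measured data): E_THz first rises and then falls
# with d_NM because tanh(d_NM/lambda) saturates while 1/(d_NM + d_FM) and the shunting
# term Z_0*G(omega) in the impedance keep growing in influence. Placeholder parameters.
import numpy as np

d_FM = 2e-9    # FM thickness (m), placeholder
lam = 1e-9     # spin-current relaxation length in the NM layer (m), placeholder
sigma = 4e6    # NM conductivity (S/m), placeholder
Z0, n1, n2 = 377.0, 1.0, 1.98

def E_thz(d_NM):
    Z = Z0 / (n1 + n2 + Z0 * sigma * d_NM)                       # impedance with G = sigma * d_NM
    return (1.0 / (d_NM + d_FM)) * lam * np.tanh(d_NM / lam) * Z  # remaining factors set to 1

for d in (1e-9, 2e-9, 3e-9, 4e-9, 6e-9):
    print(f"d_NM = {d * 1e9:.0f} nm  ->  relative E_THz = {E_thz(d):.3e}")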
Furthermore, the results indicate that the maximum THz amplitude is attained when utilizing a 2 nm Pt layer as the heavy metal component of the emitter. The decrease in THz amplitude with increasing Pt thickness can be attributed to the absorption and attenuation of THz waves by the Pt layer, which will be discussed in the subsequent section. To gain deeper insights into the fabricated THz emitters, we analyzed the correlation between the amplitude of the emitted THz field at 1 THz and the varying thicknesses of the CoFeB layer in stacks with specified NM layers, see Fig. <ref> (a) and (b). The THz emitters composed of pure Pt layers demonstrate the highest THz amplitude among the emitters. Specifically, the stack with a 2 nm Pt layer exhibits a THz signal amplitude that is twice as high as the stack with a 2 nm PtBi layer. However, the emitter with PtBi, as shown in Figs. <ref> (c) and (d), exhibits a wider bandwidth of approximately 0.35 THz compared to other stacks, along with a higher central frequency of the THz signal. Interestingly, it demonstrates a significant central frequency shift of approximately 0.3 THz when compared to the emitter with 2 nm Pt. These findings highlight the crucial role of the NM layer material in influencing THz characteristics and suggest the potential advantages of the PtBi stack for applications requiring broader bandwidth and higher frequencies, despite a lower THz amplitude. When evaluating the trade-off between THz signal strength and bandwidth, it is crucial to consider the specific requirements of the intended application, as different applications may prioritize either a higher THz signal strength or a broader bandwidth based on their unique needs and constraints. This observation suggests potential advantages in applications that benefit from higher frequency THz signals, including high-resolution imaging, spectroscopy, communications, medical diagnostics, and more. The values of the saturated THz amplitudes and THz peak position measured for different emitters are presented in Table <ref>. Exploring Conductance of NM Layers through Terahertz Time-Domain Spectroscopy (THz-TDS): The conductance of the NM layer, as indicated by Eq. <ref>, is a significant factor affecting the THz signal amplitude. To determine the conductance value of each NM layer, we direct the THz transient signal onto both the bare substrate and the substrate coated with a thin metal film. By measuring the THz transmission through the sample (T_s(ω)) relative to that through the bare substrate (T_sub(ω)), we can infer the conductance of the metal at THz frequencies using Tinkham's formula <cit.>: T_s(ω)/T_sub(ω) = E_s(ω)/E_sub(ω) = (1+n_sub)/(1+n_sub+Z_0 G(ω)), where E_s(ω) and E_sub(ω) are the Fourier transforms of the time-dependent THz electric field waveforms transmitted through the sample and substrate, respectively. The refractive index of the substrate is obtained using the formula n_sub = cΔϕ/(d_sub ω) + 1, where Δϕ is the phase difference of the pulsed THz radiation incident upon the substrate (E_i) and transmitted through the substrate (E_sub). For a SiO_2 substrate with a thickness of d_sub=500 μm, applying this method yields n_sub≈1.98. Fig. <ref>(a) shows the real and imaginary parts of the optical conductivity for different NM thin films, obtained through the formula G(ω) = σ(ω) · d_NM. The observed inequality |σ_Pt(2)|<|σ_Pt(3)|<|σ_Pt(4)| demonstrates a positive correlation between film thickness and conductivity.
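The sheet conductance entering this comparison follows from inverting Tinkham's formula quoted above; a minimal sketch of that inversion is given below, with a made-up transmission ratio standing in for the measured spectra.

# Sketch of the Tinkham inversion used above: from T_s/T_sub = (1 + n_sub)/(1 + n_sub + Z_0*G)
# one gets G(omega) = (1 + n_sub) * (1/t - 1) / Z_0, where t is the complex transmission
# ratio E_s/E_sub. The value of t below is a placeholder, not a measured quantity.
import numpy as np

Z0 = 377.0     # free-space impedance (Ohm)
n_sub = 1.98   # SiO2 substrate index obtained in the text

def sheet_conductance(t_ratio):
    """Sheet conductance G (Siemens) from the complex transmission ratio E_s/E_sub."""
    return (1.0 + n_sub) * (1.0 / t_ratio - 1.0) / Z0

t = 0.55 * np.exp(1j * 0.05)   # placeholder E_s/E_sub at one frequency
G = sheet_conductance(t)
print("G =", G, "S;  sigma =", G / 2e-9, "S/m for a 2 nm film (since G = sigma * d_NM)")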
Increasing the film thickness improves conductivity due to factors such as an increased number of charge carriers, reduced interfacial scattering, improved crystallinity, and decreased surface roughness. The Drude model <cit.> was utilized to fit the conductivity of the Pt layers, enabling the determination of plasma frequency (ω_p=√(Ne^2/mε_0), where N, e, m, ε_0 are the carrier electron density, electron charge, electron effective mass, and vacuum permittivity, respectively) and scattering time (τ). The findings demonstrate a clear trend of increasing ω_p with film thickness, indicating a proportional rise in carrier density (ω_p,Pt(2) = 0.19 × 10^16 Hz, ω_p,Pt(3) = 0.3 × 10^16 Hz, ω_p,Pt(4) = 0.36 × 10^16 Hz). Additionally, as the film thickness increases, the scattering time decreases (τ_Pt(2)=52fs, τ_Pt(3)=32fs, τ_Pt(4)=23fs), resulting in a decrease in electron mobility (μ∝τ). The impedance analysis in Fig. <ref>(b) confirms that thicker Pt layers exhibit lower impedance compared to the thinner layers, with a clear trend of |Z_Pt(2)|>|Z_Pt(3)|>|Z_Pt(4)|. This finding directly corresponds to a reduction in the emitted THz signal from thicker NM layer, as supported by Eq. <ref> and proved in Fig. <ref>. W exhibits greater impedance compared to other metals, but one contributing factor to its smaller THz amplitude is its relatively diminished spin Hall angle. Remarkably, PtBi (2nm) exhibits a real component of conductivity that is comparable to that of Pt (2nm). However, what sets it apart is its surpassing imaginary component, which not only exceeds that of Pt but also bears a negative value (Im[σ_PtBi]=-aω, where a is a constant). This can not be explained by simple Drude model <cit.> and suggests a distinctive behavior in PtBi that warrants deeper exploration and investigation. Experimental evidence has demonstrated that the spin-charge conversion efficiency of PtBi is twice that of pure Pt <cit.>. However, due to the influence of other factors such as reduced impedance or interfacial spin-current transmission, the THz signal amplitude of PtBi is comparatively smaller than that of pure Pt. Here, our main focus was to perform a comparative analysis of THz emission from emitters fabricated with different materials. The investigation of emitters with distinct compositions of Pt_1-xBi_x will be addressed in a separate study. § CONCLUSION To optimize spintronic THz emitters performance, we conducted a comparative study on SiO_2/CoFeB/NM heterostructures, where the CoFeB layer thickness varied from 0 to 5nm, and different heavy metals (Pt, W, Au) and alloys (Pt_%92Bi_%8 and Ag_%90Bi_%10) served as the NM layer. Our investigation revealed a critical threshold at 2nm thickness for the CoFeB layer, beyond which the emitted THz signal amplitude saturated. Furthermore, the heterostructure with 2nm CoFeB and 2nm Pt exhibited the highest THz signal amplitude. Surprisingly, despite the Pt_%92Bi_%8 (2nm) emitter exhibiting only half the THz amplitude compared to the Pt (2nm) emitter, it demonstrated a higher central THz frequency and the largest THz bandwidth among all the emitters investigated in our study. This study provide a solid foundation for future studies on different compositions of Pt_1-xBi_x alloy aimed at achieving even higher wideband and frequency capabilities in THz emitters. To gain a deeper understanding of the behavior exhibited by the different emitters, we conducted time-domain THz spectroscopy to analyze their conductivity and impedance characteristics. 
Our findings demonstrate that thicker NM layers exhibit higher electron concentration and lower mobility, resulting in lower impedance and subsequently lower THz amplitude. The anomalous behavior of PtBi, characterized by a negative imaginary part of conductivity according to our applied model, highlights the need for in-depth investigations and potential modifications of the model to better comprehend and explain the unique properties exhibited by this material. §.§ Acknowledgement This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 899559 (SpinAge). §.§ Availability of data The data that supports the findings of this study are available within the article.
http://arxiv.org/abs/2307.00834v1
20230703081859
Injectivity of Multi-window Gabor Phase Retrieval
[ "Palina Salanevich" ]
cs.IT
[ "cs.IT", "math.FA", "math.IT" ]
In many signal processing problems arising in practical applications, we wish to reconstruct an unknown signal from its phaseless measurements with respect to a frame. This inverse problem is known as the phase retrieval problem. For each particular application, the set of relevant measurement frames is determined by the problem at hand, which motivates the study of phase retrieval for structured, application-relevant frames. In this paper, we focus on one class of such frames that appear naturally in diffraction imaging, ptychography, and audio processing, namely, multi-window Gabor frames. We study the question of injectivity of the phase retrieval problem with these measurement frames in the finite-dimensional setup and propose an explicit construction of an infinite family of phase retrievable multi-window Gabor frames. We show that phase retrievability for the constructed frames can be achieved with a much smaller number of phaseless measurements compared to the previous results for this type of measurement frame. Additionally, we show that the number of phaseless measurements sufficient for reconstruction depends on the dimension of the signal space, and not on the ambient dimension of the problem. § INTRODUCTION Phase retrieval is the non-convex problem of signal reconstruction from the intensities of its (linear) measurements. It is motivated by a number of real-world applications within science and engineering. Among these applications are diffraction imaging <cit.> and ptychography <cit.>, where the phases of the frame coefficients are lost in the measurement process; as well as audio processing <cit.>, where phases may be too noisy to be used for reconstruction. In the finite-dimensional case, the phase retrieval problem is formulated as follows. Let Φ = {φ_j}_j = 1^N ⊂ℂ^M be a frame, that is, a (possibly over-complete) spanning set of ℂ^M. We consider the phaseless measurement map 𝒜_Φ:ℂ^M →ℝ^N defined by 𝒜_Φ(x) = {|⟨ x, φ_j⟩|^2}_j = 1^N. The aim of the phase retrieval problem is to recover an unknown vector x∈ℂ^M from its phaseless measurements b = 𝒜_Φ(x). Since 𝒜_Φ(x) = 𝒜_Φ(e^iθ x) for any θ∈ [0, 2π), the initial signal x can be reconstructed up to a global phase factor at best. To factor out this ambiguity, we identify each x∈ℂ^M with its up-to-a-global-phase equivalence class [x] = {e^iθ x,  θ∈ [0, 2π)} and consider the measurement map 𝒜_Φ to be defined on the set of equivalence classes ℂ^M /_∼. Not for every frame Φ is it possible to uniquely reconstruct a signal x from 𝒜_Φ(x). The frames with injective associated phaseless measurement maps are called phase retrievable. An important research direction in phase retrieval is to identify and describe classes of phase retrievable frames, see, e.g. <cit.>.
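As a concrete illustration of the measurement map 𝒜_Φ and of the global phase ambiguity just described, the following small sketch (ours, using a generic random frame rather than one of the structured frames studied below) can be kept in mind.

# Numerical illustration of A_Phi(x) = (|<x, phi_j>|^2)_j and of the global phase
# ambiguity A_Phi(x) = A_Phi(e^{i theta} x). The frame is a generic random frame,
# used only to illustrate the definitions.
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 32
Phi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # rows are frame vectors

def phaseless_measurements(x, Phi):
    return np.abs(Phi.conj() @ x) ** 2   # |<x, phi_j>|^2 for j = 1, ..., N

x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
theta = 0.7
b1 = phaseless_measurements(x, Phi)
b2 = phaseless_measurements(np.exp(1j * theta) * x, Phi)
print(np.allclose(b1, b2))   # True: x and e^{i theta} x give identical measurements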
At the same time, in practical applications, measurement frames are often required to have a prescribed structure that is determined by the (physical) model behind the problem. For instance, measurement frames arising in diffraction imaging <cit.>, ptychography <cit.>, and audio processing <cit.> have a common structure of a (multi-window) Gabor frame defined below. In this paper, we aim to address the following questions. How to construct phase retrievable multi-window Gabor frames of small cardinality? Our findings also provide a bound on the number of the phaseleless measurements with respect to a multi-window Gabor frame that is sufficient for reconstruction. Let G = {g_r}_r=1^R⊂ℂ^M be a set of windows and Λ⊂ℤ_M×ℤ_M. We define the multi-window Gabor frame as the set of vectors (G, Λ) = {π (λ)g_r}_λ∈Λ, r∈{1,… R}, where * π(k,ℓ) = M_ℓT_k is a time-frequency shift operator; * T_k x = (x(m-k))_m∈ℤ_M is a translation operator; * M_ℓ x = (e^2π i ℓ m/Mx(m))_m∈ℤ_M is a modulation operator. In the particular case when there is only one window G = {g}, the frame (g, Λ) is called a Gabor frame. For Gabor frames, injectivity and stability results have been established only in the case when Λ = ℤ_M×ℤ_M <cit.>. In particular, <cit.> provides a condition on the window g that is sufficient for phase retrievability of the full Gabor frame (g, ℤ_M×ℤ_M). Reducing the cardinality of Λ below M^2 is, however, a complicated task. Moreover, one can show that the phaseless measurement map 𝒜_(g, ℤ_M×ℤ_M) lacks injectivity in the case when the window g has short support or x is allowed to have many consecutive zeros <cit.>. A possible remedy for this problem is to simultaneously use several windows and consider phase retrieval with multi-window Gabor frames. In <cit.>, Han et.al. establish maximal span property for a full multi-window Gabor frame (G, ℤ_M×ℤ_M), under the condition that ambiguity functions of the windows in G do not vanish simultaneously. As maximal span property implies phase retrievability of a frame, their result generalizes the condition obtained for (single-window) full Gabor frames in <cit.>. In <cit.>, Li et.al. consider frames (G, T×ℤ_M) with | T| = M/L and R≥ L, for a separation parameter L. They prove necessary and sufficient conditions for injectivity of 𝒜_(G, Λ), depending on the support size of the window g. Note that in both <cit.>, phase retrievability of a multi-window Gabor frame (G, Λ) is established for | (G, Λ) | = R |Λ| = O(M^2). §.§ Main contribution In this paper, we manage to significantly reduce the number of measurements required to achieve injectivity of 𝒜_(G, Λ). Let C>3 be a constant. Phase retrieval can be done on ℂ^M from CM(1+3β(M,C)) multi-window Gabor frame phaseless measurements, where β(M,C) is a measure of pseudorandomness defined in (<ref>) below. It follows from <cit.> that β(M,C) ≲log(M), and thus phase retrieval can be done on ℂ^M from at most O(Mlog(M)) multi-window Gabor frame phaseless measurements, which is a significant improvement in comparison with O(M^2). Furthermore, we show that, with a similar construction of the window set, phase retrieval can be done from Cd(1+3β(d,C)) multi-window Gabor frame phaseless measurements on any d-dimensional subspace of ℂ^M. In contrast with <cit.> and <cit.>, where the proof of phase retrievability of (multi-window) Gabor frames relies on the properties of the ambiguity function of the window(s), we utilize the polarization idea of <cit.>. 
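Before turning to the construction, the following sketch spells out the time-frequency shift operators from the definition above and the assembly of a multi-window Gabor system in code; the function names are ours and the parameters are arbitrary.

# Minimal sketch of the objects defined above: the translation T_k, the modulation M_l,
# the time-frequency shift pi(k, l) = M_l T_k, and the system {pi(lambda) g_r}. Names ours.
import numpy as np

def translate(x, k):
    return np.roll(x, k)                                    # (T_k x)(m) = x(m - k)

def modulate(x, l):
    M = len(x)
    return np.exp(2j * np.pi * l * np.arange(M) / M) * x    # (M_l x)(m) = e^{2 pi i l m / M} x(m)

def tf_shift(x, k, l):
    return modulate(translate(x, k), l)                     # pi(k, l) = M_l T_k

def multi_window_gabor_frame(windows, Lambda):
    """Stack the vectors pi(k, l) g_r for all (k, l) in Lambda and all windows g_r."""
    return np.array([tf_shift(g, k, l) for g in windows for (k, l) in Lambda])

M = 16
rng = np.random.default_rng(1)
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)
Lambda = [(k, l) for k in range(0, M, 4) for l in range(M)]   # T x Z_M with |T| = 4
frame = multi_window_gabor_frame([g], Lambda)
print(frame.shape)   # (|Lambda|, M) = (64, 16)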
We construct the set of windows so that the phaseless measurements corresponding to the auxiliary windows can be used to compute (relative) phases of the measurements corresponding to the primary window. §.§ Notation and definitions The following notation is used throughout the paper. * 𝕊^M-1 = { x∈ℂ^M : ‖ x ‖_2 = 1} is the unit sphere in ℂ^M; * x⊙ y(m) = x(m)y(m) denotes the coordinatewise product of vectors x, y∈ℂ^M; * for a vector b∈ℂ^k, (b) = (b | T_1 b | … | T_{k-1} b ) denotes the circulant matrix whose columns are obtained by shifting the vector b; * for a subset A⊂{0,…, k-1}, 1_A denotes its characteristic function and 𝐏(A) = | A | / k denotes the density of A. Furthermore, the following definitions are used in the paper. We define the Fourier bias of a set A⊂{0,…, k-1} as ‖ A ‖_u = max_m≠ 0|ℱ(1_A)(m)|. The Fourier bias of a set is a non-negative quantity which is equal to zero only for A = {0,…, k-1} and A = ∅. It can get as large as the set density but is usually smaller <cit.>. Essentially, the Fourier bias of a set measures the maximal correlation of its indicator function with discrete harmonic functions. As for random sets this correlation is low with high probability, Fourier bias is used in additive combinatorics to measure pseudorandomness <cit.>. In our construction, we are interested in small cardinality sets that have small Fourier bias. In particular, the cardinality of the constructed phase retrievable multi-window Gabor frame depends on the following quantity β (M, C) = min_{P⊂ℤ_M, P≠∅}{ | P | : ‖ P ‖_u ≤ (C-3)/(C-1) · 𝐏(P) }. To construct the set of windows G for a phase retrievable frame (G, Λ), we employ some tools from algebraic graph theory. For a d-regular graph 𝒢 on n vertices, let d = λ_0 ≥λ_1≥⋯≥λ_{n-1} denote the eigenvalues of its adjacency matrix. We define the spectral gap of 𝒢 as (𝒢) = 1 - (1/d) max_{j≠ 0}|λ_j|. Clearly, a graph is disconnected if and only if its spectral gap is equal to 0. More generally, large (𝒢) ensures good connectivity properties of the graph 𝒢 <cit.>. The remaining part of this paper is organized as follows. In Section <ref> we describe the construction of the window set, and prove phase retrievability of the respective multi-window Gabor frame, under certain assumptions on the primary window. In Section <ref>, we generalize the results of Section <ref> to show that the sufficient number of measurements with respect to the constructed multi-window Gabor frame depends on the dimension of the signal space rather than on the ambient dimension of the problem. We conclude the paper with a brief discussion of the future research directions in Section <ref>. § PHASE RETRIEVABLE MULTI-WINDOW GABOR FRAMES In this paper, we propose a construction of the set of windows G, such that the corresponding multi-window Gabor frame has an injective associated phaseless measurement map 𝒜_(G, Λ). Our construction is inspired by the idea of the polarization algorithm <cit.>. Let us consider the set of windows G = {g}∪ G', where we distinguish a primary window g and call the rest of the windows in G' auxiliary. We construct auxiliary windows so that phaseless measurements of a signal with respect to (G', Λ) can be used to compute relative phases between (some of) the phaseless measurements with respect to (g, Λ). More precisely, G' = { g_qpt = g⊙ s_qpt}_{q ∈ Q, p∈ P, t∈{0, 1, 2}}, where s_qpt (m) = 1 + e^{2π i ( mp/M + t/3)}· g(m-q)/g(m). Let (G, Λ) be a multi-window Gabor frame with the set of windows G = {g}∪ G', where G' is defined as in (<ref>).
Then, for any (k,ℓ)∈Λ, q∈ Q, and p∈ P, ⟨ x, π(k,ℓ)g ⟩⟨ x, π(k+q, ℓ + p)g ⟩ = e^2π i k p/M/3∑_t = 0^2 e^2π i t/3|⟨ x, π(k, ℓ)g_qpt⟩|^2 First, let us observe that by definition of g_qpt, ⟨ x, π(k,ℓ)g_qpt⟩ = ∑_m∈ℤ_Mx(m)e^-2π i ℓ m/Mg(m-k)s_qpt(m-k) = ∑_m∈ℤ_Mx(m)e^-2π i ℓ m/Mg(m-k) + ∑_m∈ℤ_Mx(m)e^-2π i (ℓ (m+p)/M + t/3)e^2π i kp/Mg(m - (k+q) = ⟨ x, π(k, ℓ)g⟩ + e^-2π i t/3e^2π i kp/M⟨ x, π(k+q, ℓ+p)g⟩. By applying the polarization identity ab = 1/3∑_t = 0^2 e^2π i t/3| a + e^-2π i t/3b|^2,  a,b∈ℂ with a = ⟨ x, π(k, ℓ)g⟩ and b = e^2π i kp/M⟨ x, π(k+q, ℓ+p)g⟩, we obtain the desired equality. To show that the multi-window Gabor frame (G, Λ) constructed above is phase retrievable, we need additional assumptions on the primary window g and index sets P and Q. Window g∈ℂ^M is nowhere vanishing, such that the corresponding Gabor frame (g, Λ) is full-spark, that is, any M vectors in (g, Λ) are linearly independent. Note that the set of g∈𝕊^M-1 for which Assumption <ref> is satisfied is a full measure set in 𝕊^M-1 <cit.>. In particular, if g∼Unif.( 𝕊^M-1), then Assumption <ref> is satisfied with probability 1. A subset Q⊂ℤ_M satisfies ‖ Q‖ _u ≤ c 𝐏(Q), for some constant c∈(0,1). Note that for any subset Q⊂ℤ_M we have ‖ Q‖ _u ≤𝐏(Q), and equality holds only for sets Q with very specific structure (namely, for cosets of a proper subgroup of ℤ_M) <cit.>. Subsets that satisfy Assumption <ref> should have small Fourier bias. Such subsets are called linearly uniform or pseudo-random. In particular, if we generate a subset Q at random, by uniformly and independently selecting elements of ℤ_M with probability c^2log(M)/9M, then Assumption <ref> is satisfied with high probability <cit.>. We formulate our result as follows. Let g∈ℂ^M satisfy Assumption <ref> and Λ = T× F⊂ℤ_M×ℤ_M with |Λ| > CM, for some C>3. Suppose further that sets Q⊂ T-T and P⊂ F-F satisfy Assumption <ref> with c = C-3/C-1. Then (G, Λ) with G = {g}∪ G' defined as in (<ref>) is a phase retrievable frame. Let us consider a graph (Λ, E) with the set of vertices Λ and the set of edges E = {( (k, ℓ), (k', ℓ') ) k' - k∈ Q, ℓ' - ℓ∈ P}⊂Λ×Λ. Then, for any edge e = ((k, ℓ), (k', ℓ'))∈ E, such that |⟨ x, π(k, ℓ)g⟩|≠ 0 and |⟨ x, π(k', ℓ')g⟩|≠ 0, using Lemma <ref> we obtain that the relative phase ω_e = ⟨ x, π(k, ℓ)g ⟩/|⟨ x, π(k, ℓ)g ⟩|( ⟨ x, π(k', ℓ')g ⟩/|⟨ x, π(k', ℓ')g ⟩|)^-1 can be computed from phaseless measurements 𝒜_(G', Λ) as e^2π i k p/M/3|⟨ x, π(k, ℓ)g ⟩||⟨ x, π(k', ℓ')g ⟩|∑_t = 0^2 e^2π i t/3|⟨ x, π(k, ℓ)g_qpt⟩|^2, where p = ℓ' - ℓ and q = k' - k. We are going to use the obtained graph (Λ, E) with weighted edges to reconstruct (up to a global phase shift) the phases of (a subset of) the frame coefficients of x with respect to the Gabor frame (g, Λ). Note that for any (k,ℓ)∈Λ, such that |⟨ x, π(k, ℓ)g⟩| = 0, the relative phase ω_e is not defined for any e = ((k, ℓ), (k', ℓ'))∈ E, thus we delete these edges from the graph to obtain a modified graphs (Λ, E') with the weighted edges, where E' = E∖{(π, π')|⟨ x, π(π)g⟩| = 0 or |⟨ x, π(π')g⟩| = 0}. The graph (Λ, E') constructed above has a connected component of size at lest M. Let A be the adjacency matrix of the graph (Λ, E). By construction of E, A = (1_Q )⊗(1_P ), where ⊗ denotes the Kronecker product. Then, the eigenvalues of A are given by λ_jj'(A) = λ_j((1_Q )) λ_j'((1_P )) = ∑_m∈ℤ_M1_Q(m)e^-2π i j m/M∑_m'∈ℤ_M1_P(m')e^-2π i j' m'/M, as the eigenvalues of a circulant matrix (1_Q ) are given by the entries of the Fourier transform Mℱ(1_Q ). 
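Before continuing, here is a quick numerical check (ours, with an arbitrary random Q) of the circulant-matrix fact just used; note that numpy's unnormalized fft corresponds to Mℱ(1_Q) in the normalization above.

import numpy as np

# The circulant matrix circ(1_Q), whose columns are the cyclic shifts T_k 1_Q,
# is diagonalized by the Fourier modes, with eigenvalues given by the DFT of 1_Q.
M = 16
rng = np.random.default_rng(0)
indicator = (rng.random(M) < 0.3).astype(float)               # 1_Q for a random Q in Z_M
circ = np.column_stack([np.roll(indicator, k) for k in range(M)])
dft = np.fft.fft(indicator)                                   # equals M * F(1_Q) above
modes = np.exp(2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
print(np.allclose(circ @ modes, modes * dft))                 # True: columns of `modes` are eigenvectors

# Fourier bias ||Q||_u = max_{m != 0} |F(1_Q)(m)|; it never exceeds the density P(Q)
print(np.max(np.abs(dft[1:])) / M, indicator.sum() / M)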
Since |λ_j((1_Q ))| ≤∑_m∈𝐙_M|1_Q(m)e^-2π i j m/M| = | Q |, with equality when j = 0, it follows that λ_max((1_Q )) = λ_0((1_Q )) = | Q |. Similarly, λ_max((1_P )) = λ_0((1_P )) = | P |, and λ_max(A) = λ_00(A) = | Q || P |. Using this and the definition of the Fourier bias ‖·‖_u of a set, we get (Λ, E) = 1 - 1/| Q || P |max_(j,j') (0,0)|λ_jj'(A)| = 1 - 1/| Q || P |max_(j,j') (0,0)|λ_j((1_Q ))||λ_j'((1_P ))| = 1 - max{M/| Q |‖ Q ‖_u, M/| P |‖ P ‖_u }, that is, as both P and Q satisfy Assumption <ref> and |Λ| >CM, (Λ, E) = 1 - max{‖ Q ‖_u/𝐏(Q) , ‖ P ‖_u/𝐏(P)}≥2/C-1≥2M/|Λ| - M. The graph (Λ, E') is obtained from (Λ, E) by removing k = |{(λ, λ')|⟨ x, π(λ)g⟩| = 0 or |⟨ x, π(λ')g⟩| = 0}| edges. By Assumption <ref>, (g, Λ) is a full spark frame, thus |{π∈Λ|⟨ x, π(λ)g⟩| = 0 }|≤ M-1 for any x≠ 0, and k ≤ |P||Q|(M-1). Applying <cit.>, we obtain that (Λ, E') has a connected component of size at least ( 1 - 2M/|Λ|(Λ, E))|Λ| = M. Using Claim <ref>, let us fix (Λ', E”) to be a connected component of (Λ, E') with |Λ'|≥ M and E” = E'∩Λ' ×Λ'. By Assumption <ref>, (g, Λ) is a full spark frame, thus any signal x∈ℂ^M can be reconstructed from the set of its frame coefficients {⟨ x, π(λ)g ⟩ = ⟨ x, π(λ)g ⟩/|⟨ x, π(λ)g ⟩|√(|⟨ x, π(λ)g ⟩|^2)}_λ∈Λ'. For this reason, to uniquely recover x, it is enough to determine (up to a global phase shift) the phases of the frame coefficients ⟨ x, π(λ)g ⟩/|⟨ x, π(λ)g ⟩|, for all λ∈Λ'. To do so, we iteratively propagate relative phases ω_e, e∈ E” inside the connected component (Λ', E”) or apply angular synchronization algorithm <cit.>. The Main Theorem from Section <ref> can be deduced from Theorem <ref> by choosing g∼Unif.( 𝕊^M-1), Λ = T×ℤ_M with | T | = C, Q = T-T, and P being a minimizer in (<ref>). Note that the proof of Theorem <ref> does not only show that under Assumptions <ref> and <ref> the multi-window Gabor frame is phase retrievable, but also suggests a reconstruction algorithm that is similar to  <cit.>. The number of vectors in ({g}∪ G', Λ) is |Λ| (1 + 3| Q|| P |) = O(| Q|| P | M). To reduce the cardinality the frame we constructed, we would like to be able to construct small subsets P,Q⊂ℤ_M with small Fourier bias. In particular, for random subset P⊂ℤ_M of cardinality | P | = O(log M) it has been shown in <cit.> that ‖ P ‖_u < c𝐏(P) for some c∈ (0,1) with high probability. Using this observation, we deduce the following corollary also proven in <cit.>. Let g∼Unif.( 𝕊^M-1) and Λ = T×ℤ_M with | T | = C. Suppose further that Q = T-T and P is a random subset of ℤ_M, such that 1_P(m)∼i.i.d. Bernoulli(αlog(M)/M). Then, with high probability, (G, Λ) with G = {g}∪ G' defined as in (<ref>) is a phase retrievable frame. § MULTI-WINDOW GABOR PHASE RETRIEVAL UNDER LOWER-DIMENSIONAL PRIORS In this section, we generalize findings of Theorem <ref> to the case when there is some prior knowledge available on the signal of interest x. More precisely, we study how the number of phaseless multi-window Gabor measurements sufficient for reconstruction of x changes in the case when x is an element of an (unknown) lower-dimensional subspace of ℂ^M. Let g∈ℂ^M satisfy Assumption <ref> and Λ = T× F⊂ℤ_M×ℤ_M with |Λ| > Cd, for some C>3. Suppose further that sets Q⊂ T-T and P⊂ F-F satisfy Assumption <ref> with c = C-3/C-1. Then, for any W∈ℂ^d× M with d≤ M and (W) = d, the phaseless map 𝒜_(G, Λ) with G = {g}∪ G' defined as in (<ref>) is injective on {x∈ℂ^M x = Wh,  h∈ℂ^d}. First, note that for x= Wh, we have 𝒜_(G, Λ)(x) = 𝒜_Ψ(h), where Ψ = {W^*φφ∈ (G, Λ)}. 
As x is uniquely determined by h, it is enough to show that h can be uniquely (up to a global phase factor) recovered from its phaseless measurements 𝒜_Ψ(h). Let us write Ψ = Ψ_g∪Ψ_G', where Ψ_g = {W^*π(λ)g}_λ∈Λ and Ψ_G' = {W^*π(λ)g_qpt}_λ∈Λ, g_qpt∈ G'. Similarly to the proof of Theorem <ref>, we are going to use Ψ_G' to compute relative phases between the frame coefficients of h with respect to Ψ_g. Indeed, following the proof of Lemma <ref>, we observe that for λ = (k,ℓ) and λ' = (k+q, ℓ+p) W^*π(λ)g_qpt = W^*(π(λ)g + e^-2π i t/3e^2π i kp/Mπ(λ')g ) = W^*π(λ)g + e^-2π i t/3e^2π i kp/M W^*π(λ')g. Thus, Lemma <ref> can be used to show that phaseless measurements of h with respect to Ψ_G' allow us to compute the relative phases ω_e for all e = ( λ, λ') ∈ E with |⟨ h, W^*π(λ)g ⟩|≠ 0 and |⟨ h, W^*π(λ')g ⟩|≠ 0, where E is defined as in (<ref>). Ψ_g = {W^*π(λ)g}_λ∈Λ⊂ℂ^d is a full spark frame. By Assumption <ref>, (g, Λ) is a full spark frame, that is, for any distinct λ_1, …, λ_k∈Λ, ( π(λ_1)g, …, π(λ_k)g ) = min{k, M}. Since (W) = d, it follows that ( W^*π(λ_1)g, …, W^*π(λ_k)g ) = min{d, k, M}. Thus, the vectors W^*π(λ_1)g, …, W^*π(λ_k)g are linearly independent if k≤ d. Let us consider the graph (Λ, E). From Claim <ref>, it follows that |{λ∈Λ : |⟨ h, W^*π(λ)g ⟩| = 0}|≤ d-1, thus the number of edges e∈ E for which ω_e is not defined is at most | P|| Q | (d-1). Applying Claim <ref> with d in place of M and <cit.>, we derive that deleting these edges from the graph leads to a connected component (Λ', E”) of size |Λ'|≥ d. From Claim <ref> it follows that h can be recovered from its frame coefficients with respect to {W^*π(λ)g }_λ∈Λ'. The proof is then concluded by computing the phases of the frame coefficients {⟨ h, W^*π(λ)g⟩}_λ∈Λ' from the relative phases ω_e, e∈ E”, using phase propagation or angular synchronization. Note that, similarly to Corollary <ref>, by selecting the window g and the sets P and Q at random, one can derive that for any W∈ℂ^d× M the phaseless map 𝒜_(G, Λ) with | (G, Λ)| = O(dlog(d)) is injective on {x∈ℂ^M : x = Wh, h∈ℂ^d} with high probability. That is, in the case when x is known to be an element of a lower-dimensional subspace, the ambient dimension M can be replaced in the sufficient number of phaseless measurements with the subspace dimension d. § DISCUSSION In this paper, we showed how the polarization idea <cit.> can be used to construct phase retrievable multi-window Gabor frames of small cardinality. As the construction of such frames relies on small subsets of ℤ_M with small Fourier bias, an explicit construction of such subsets for every M can further reduce the number of phaseless measurements required for signal reconstruction. In Section <ref>, we discussed multi-window Gabor phase retrieval under the assumption that the set of signals we aim to recover lies in a lower-dimensional subspace of ℂ^M. Of course, this kind of prior is not general enough to be used in practice. A class of priors that is more interesting, from both the mathematical and the practical point of view, is that of generative priors, studied in <cit.>. There one assumes that x = W(h), h∈ℂ^d, for some non-linear generative map W given, for instance, by a neural network. Studying phase retrievability of multi-window Gabor frames and determining how the sufficient number of phaseless measurements changes under such generative priors is an important direction for further research. § ACKNOWLEDGEMENTS Palina Salanevich is supported by NWO Talent programme Veni ENW grant, file number VI.Veni.212.176.
http://arxiv.org/abs/2307.01926v1
20230704211924
Dynamical Computing on the Nanoscale: Superconducting Circuits for Thermodynamically-Efficient Classical Information Processing
[ "Christian Z. Pratt", "Kyle J. Ray", "James P. Crutchfield" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cs.ET", "nlin.CD" ]
2307.XXXXX czpratt@ucdavis.edu kjray@ucdavis.edu chaos@ucdavis.edu Complexity Sciences Center and Department of Physics and Astronomy, University of California, Davis, One Shields Avenue, Davis, CA 95616 Alternative computing paradigms open the door to exploiting recent innovations in computational hardware. Dynamical computing is one such paradigm that synthesizes momentum computing—an extremely energy-efficient design framework—with nanoscale thermodynamic computing. This synthesis can be implemented with Josephson junction technology, giving a testbed with tunable coupling to a thermal environment. Investigating the dynamics and thermodynamics of these superconducting quantum interference devices (SQUIDs), though, requires (i) constructing physically-realizable superconducting circuits, (ii) thoroughly understanding circuit energetics, and (iii) designing sufficiently complex circuits that support a suite of useful operations. First-principle circuit design leads to prohibitive algebraic complications that have to date precluded achieving these goals quickly and efficiently. We circumvent these complications by (i) specializing our class of circuits and operating regime, (ii) synthesizing existing methods to suit these specializations, and (iii) implementing solution-finding optimizations that facilitate physically interpreting circuit degrees of freedom and that respect physically-grounded constraints. Practically, this leads to efficient circuit prototyping and analysis and access to scalable circuit architectures. The analytical efficiency is demonstrated by directly reproducing the thermodynamic potential induced by the variable β rf (vβ-rf) SQUID. We then show how to inductively couple two SQUIDs, yielding a device capable of performing 2-bit computations. The methods detailed here provide a basis to construct universal logic gates and investigate their thermodynamic performance. Dynamical Computing on the Nanoscale: Superconducting Circuits for Thermodynamically-Efficient Classical Information Processing James P. Crutchfield August 1, 2023 ================================================================================================================================= § INTRODUCTION All computation is physical—to effect information processing, a sequence of stochastic transformations systematically manipulates a system's potential energy landscape <cit.>. Reliable computing, in particular, requires stable memory states physically supported by a system's information-bearing degrees of freedom <cit.>. Energy minima on the landscape provide this dynamical stability. Computation, then, consists of externally controlling the creation, destruction, and location of energy minima. This perspective allows quantitatively comparing the computational capabilities and thermodynamic performance of alternative computing paradigms <cit.>. Dynamical computing aims to consolidate momentum computing <cit.> and thermodynamic computing <cit.>. The result is a paradigm capable of carrying out highly energy-efficient computations, which can be practically implemented using superconducting circuit nanotechnology. Exploring a superconducting circuit's ability to perform computational operations involves understanding the device's energetics and subsequent dynamical equations of motion <cit.>. The following introduces a method to create physically-realizable dynamical computing devices at the nanoscale. Success in this, though, requires rapidly prototyping devices.
And this, in turn, demands a calculational framework that can quickly assess the performance of candidate circuits. The following synthesizes several previous approaches, specializing them to a class of circuits that are of practical interest. The result is a superconducting circuit formalism that generates an interpretable circuit Lagrangian and associated equations of motion given in terms of classical information-bearing degrees of freedom. In this, a circuit's potential energy surface is used to gauge its computational capabilities. The framework's success is demonstrated through the example of the variable β rf (vβ-rf) superconducting quantum interference device (SQUID) <cit.>. Inductively coupling two vβ-rf SQUIDs produces a device that performs 2-bit computations. § RELATED WORK While our development synthesizes methods in Refs. <cit.>, it specializes to a particular class of circuits and investigates the dynamical and thermodynamical behavior of the circuit's degrees of freedom in the classical domain. Its methodological foundations build on Refs. <cit.>, which introduced a network-theoretic approach to electrical circuit analysis and investigated circuits operating in the quantum regime. Reference <cit.> provided an elegant technique for multi-loop circuits to find irrotational degrees of freedom. However, it considered only the phase space of a quantum circuit's Hamiltonian. In this way, it departs from our goals. Moreover, to avoid cyclic coordinates in the equations of motion, Ref. <cit.> restricted each circuit loop to have only a single inductor. The following, in contrast, eschews this restriction. It instead develops optimal solutions for circuits containing more than one inductor by eliminating the extra degrees of freedom algebraically. Here, we use the resistive capacitive shunted junction (RCSJ) model for each Josephson junction (JJ). Due to this, the dissipative dynamics arising from finite-valued DC resistances must be accounted for. And, to do this, we rely on Ref. <cit.>, which provided a method that uses the Rayleigh dissipation function <cit.> to analyze a circuit's resistive shunts. Several alternative approaches are available to analyze circuit behaviors in the quantum regime. One common procedure employs number-phase quantization <cit.>, which does not involve a network-theoretic approach. Simulations of the quantum dynamics of similar circuits are detailed in Ref. <cit.>. This all noted, though the SQUIDs we employ are often the basis for quantum computing devices, we concentrate on their behavior in the classical nonlinear dynamical regime. Finally, a complementary approach to circuit analysis considers the charge in a loop, as opposed to flux variables <cit.>. However, previous and proposed experiments pertaining to thermodynamic and momentum computing <cit.> revealed that tuning external fluxes provides a convenient circuit control method. Consequently, this grounds the following in a flux-focused interpretation of circuit behavior. A generalized approach to the techniques implemented in Ref. <cit.> considers arbitrary circuit geometries and electromagnetic fields to construct a Hamiltonian <cit.>. That said, analytical complications that arise in this kind of first-principle method preclude rapidly characterizing alternative circuit designs. Our approach avoids these pitfalls. § SUPERCONDUCTING CIRCUIT ANALYSIS Following Ref. 
<cit.>, we define a branch to be a particular circuit element, whose time-dependent flux is defined by: ϕ_b = ϕ_b(t) ∫_-∞^tdt' v_b(t')  . This is related to the branch voltage v_b(t), the instantaneous voltage across the circuit element, and the reduced flux φ_b = 2πϕ_b/ ϕ_0, where ϕ_0 is the flux quantum. Before proceeding, several assumptions need to be addressed. To begin, all branches within a circuit correspond to either a Josephson junction (JJ) or an inductor. Corresponding variables are subscripted with a J or L, respectively. All JJs are described by the RCSJ model <cit.>, which is characterized by a critical current I_c <cit.>, capacitance C_J, and resistance R. Each inductive branch is modeled by an inductance L in parallel with a capacitance C_L satisfying the limit C_L/C_J ≈ 0. We adopt C_L as an auxiliary variable in a fashion similar to Ref. <cit.>, in that the limit is used at a particular step in the calculations, which is exemplified in Sections <ref> and <ref>. Suppose a circuit is constructed with n JJs and m inductors for a total of N = n + m branches. The branch flux vector (ϕ_J_1, …,ϕ_J_n,ϕ_L_1, …,ϕ_L_m)^𝖳 compactly represents all circuit branch fluxes. When computing the potential and equations of motion, we refer to the truncated branch flux vectors (ϕ_J_1, …, ϕ_J_n)^𝖳 and (ϕ_L_1, …, ϕ_L_m)^𝖳. The energy stored in the capacitive components is <cit.>: ℒ_T = 12^𝖳 , where the capacitance matrix is: diag (C_J_1,..., C_J_n, C_L_1,...,C_L_m)  . Since we assume that all branches are either inductors or JJs, the energy stored in the inductive elements can be calculated using only . The m × m inductance matrix denotes the circuit's linear inductances, with diagonal entries corresponding to self-inductances L_i and off-diagonal entries corresponding to the mutual inductive coupling -M_ij between L_i and L_j≠ i. The energy stored in these components is given by <cit.>: ℒ_L = 12^𝖳^-1 . Up to a constant, the JJ potential energy contribution is <cit.>: ℒ_J = - ∑_i=1^n E_icos( 2π/ϕ_0)  , where E_i = (ϕ_0/2π) I_c is the Josephson energy of the ith JJ in a circuit. Equations (<ref>) and (<ref>) together give the circuit's conservative potential energy ℒ_V ℒ_J + ℒ_L. Given a physical circuit consisting of inductors and JJs as described above, the circuit Lagrangian ℒℒ_T - ℒ_V is, up to a constant: ℒ = 12^𝖳 - 12^𝖳^-1 + ∑_i = 1^n E_icos( 2π/ϕ_0)  . The nonconservative dissipation from the finite JJ resistive shunts are taken into account by the Rayleigh dissipation function 𝒟, and further incorporated into the Euler-Lagrange equations of motion <cit.>, in terms of generalized coordinate q_i, as: ddt∂ℒ∂q_i - ∂ℒ∂ q_i = -∂𝒟∂q_i , with: 𝒟∑_i=1^n12R_i (ϕ_J_i)^2  . 𝒟 accounts for the dissipated power in each JJ branch due to its shunt resistance R_i in terms of its branch flux ϕ_J_i. Recalling that only JJ branches have DC resistance values, we rewrite Eq. (<ref>) as: 𝒟 = 12^𝖳^-1 , whereby, following the same logic as with ^-1, has dimensions of n × n. However, unlike , is manifestly diagonal. Despite the fact that Eq. (<ref>) marginally accommodates the circuit's topology, it does not account for fluxoid quantization conditions <cit.>. These require that the sum of the branch fluxes around any loop equals the external flux threading the loop. As a result, while there may appear to be N = n+m degrees of freedom in the Lagrangian, there are only N-F degrees of freedom in a circuit with F independent loops—i.e., loops that contain no other loops—threaded by external fluxes. 
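Before turning to the fluxoid-quantization constraints, here is a minimal computational sketch of the conservative potential ℒ_V = ℒ_L + ℒ_J introduced above (our own illustration; the variable names and toy parameter values are assumptions, not taken from the paper).

import numpy as np

PHI0 = 2.067833848e-15  # magnetic flux quantum [Wb]

def conservative_potential(phi_J, phi_L, E_J, L_inv):
    """Conservative potential L_V = L_L + L_J of an inductor/JJ network.

    phi_J : (n,) Josephson branch fluxes
    phi_L : (m,) inductive branch fluxes
    E_J   : (n,) Josephson energies E_i = (Phi0 / 2 pi) I_c,i
    L_inv : (m, m) inverse inductance matrix (self and mutual terms)
    """
    inductive = 0.5 * phi_L @ L_inv @ phi_L
    josephson = -np.sum(E_J * np.cos(2 * np.pi * phi_J / PHI0))
    return inductive + josephson

# toy numbers (illustrative only): two JJs, one inductor, no mutual coupling
E_J = np.array([1.0e-21, 1.2e-21])          # J
L_inv = np.array([[1.0 / 1.0e-9]])          # 1/H for a 1 nH inductor
print(conservative_potential(np.array([0.3, -0.1]) * PHI0,
                             np.array([0.05]) * PHI0, E_J, L_inv))

The fluxoid quantization conditions just described then constrain how many of these branch fluxes are actually independent.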
In view of this, the external flux vector (ϕ_x_1,...,ϕ_x_F)^𝖳 is defined to cast fluxoid quantization in matrix form <cit.>: = 𝐑 . The F × N matrix 𝐑 is constructed in such a way that its elements R_ij satisfy the following criteria: When denoting 𝖫_i to be the ith loop threaded by the external flux which may contain branch flux ϕ_j, then: R_ij +1 ϕ_j∈𝖫_i same orientation as , -1 ϕ_j∈𝖫_i opposite orientation as ,  and 0 ϕ_j ∉𝖫_i . Finally, the circuit's degrees of freedom are defined as (ϕ_1, …,ϕ_N-F)^𝖳. Generally, these are a to-be-determined linear combination of the branch fluxes represented by the (N-F) × N matrix : = 𝐌 . Furthermore, due to fluxoid quantization, no more than N-F degrees of freedom in the circuit are expected. The quantization conditions are included by utilizing the N × 1 augmented vector and the N × N augmented matrix 𝐌_+: [ ; ] and [ 𝐌; 𝐑 ] . Note that the branch flux vector and the augmented flux vector are directly related to each other through by: . With this, the circuit Lagrangian and associated equations of motion can be written in terms of by substituting =^-1 into Eq. (<ref>). Specifically, to find the circuit's Lagrangian in terms of , must be invertible. Provided that the columns of are chosen to be linearly independent of each other and of the columns of , nonsingularity of is guaranteed. However, ambiguity remains in defining 's elements. Following Ref. <cit.>, the degrees of freedom are deemed irrotational by ensuring they satisfy the following constraint: ^-1^𝖳 = 0 , which guarantees that the Lagrangian, when written in terms of , does not depend on Φ_x. Due to this, is referred to as the irrotational flux vector. In addition, Eq. (<ref>) allows the equations of motion to be of Langevin form, further enabling thermodynamical analyses of the circuit's information-bearing degrees of freedom. However, even after enforcing the irrotational constraint, there is still additional freedom in defining . To address this, we turn to the kinetic energy term: ℒ_T = 12_b^𝖳_b = _+^𝖳 (^-1)^𝖳𝐌_+^-1_+ = 12_+^𝖳_+  . With Eq. (<ref>) in mind, recall that the goal is to obtain an easily interpretable Lagrangian and corresponding equations of motion for a given circuit. A diagonal allows for a straightforward interpretation of ℒ_T as the kinetic energy of the Lagrangian in both the and the bases. In other words, the task is to find solutions of that yield a diagonal . Analyzing a number of cases established a set of calculational guidelines that result in a diagonal when solving for the components of through Eq. (<ref>). These aid in the task of finding optimal solutions in the continuous family of possible solutions: * The first n rows of can each contain up to n nonzero entries corresponding to the n JJ coefficients of , which will have the same magnitude. The other m inductive elements of , corresponding to the inductive coefficients in each of these rows, will either be zero or proportional to C_L/C_J; the latter subsequently vanishes when C_L/C_J → 0. Note that this limit is taken after a solution is found. * The last |m-F| rows of will each contain up to m nonzero entries corresponding to the m inductive flux coefficients of , which also have the same magnitude. All n JJ coefficients in each row will contain zero entries, and all nonzero inductive coefficients are unity herein. Importantly, linear independence between rows must be maintained when implementing these conditions. 
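The conditions above — the irrotational constraint, invertibility of the augmented matrix, and diagonality of the transformed capacitance matrix — are straightforward to check numerically for a candidate transformation. A generic helper follows (ours; the toy matrices are illustrative and do not correspond to a circuit from the paper).

import numpy as np

def check_transformation(M, R, C):
    """Check a candidate flux transformation against the three requirements above:
    (a) irrotational constraint  M C^{-1} R^T = 0,
    (b) invertibility of the augmented matrix M_+ = [[M], [R]],
    (c) diagonality of the transformed capacitance (M_+^{-1})^T C M_+^{-1}.
    """
    M_plus = np.vstack([M, R])
    irrotational = np.allclose(M @ np.linalg.inv(C) @ R.T, 0.0)
    invertible = abs(np.linalg.det(M_plus)) > 1e-12
    C_hat = np.linalg.inv(M_plus).T @ C @ np.linalg.inv(M_plus)
    diagonal = np.allclose(C_hat, np.diag(np.diag(C_hat)))
    return irrotational, invertible, diagonal

# toy check (numbers are ours): N = 2 branches, F = 1 loop
C = np.diag([1.0, 2.0])
R = np.array([[1.0, -1.0]])
M = np.array([[1.0, 2.0]])            # chosen so that M C^{-1} R^T = 0
print(check_transformation(M, R, C))  # (True, True, True)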
To briefly illustrate guideline (1), one possible realization is that in each of the n rows, every JJ coefficient takes on a nonzero value only once, while all other JJ coefficients are zero. If each nonzero value is unity, this is equivalent to there being no coordinate-space rotation between branch and irrotational flux coordinates. Guideline (2) stems from a mismatch between the number of loops and inductors. For example, setting |m-F| = 1—i.e., there is one loop which contains more than one inductor—requires setting all JJ coefficients to zero for one solution of Eq. (<ref>). This reflects the inability of an irrotational degree of freedom to describe the additional inductor's behavior in the circuit. Consequently, one cyclic coordinate appears in the circuit Lagrangian; this can be eliminated through determining its equation of motion and subsequently rewriting it in terms of noncyclic irrotational degrees of freedom. Sections <ref> and <ref> below demonstrate this procedure. Once the elements of are determined, the number of dynamical degrees of freedom are interpreted as the irrotational degrees of freedom that are not cyclic <cit.>. Numerically, there are N - F - |m - F|, as there will be N - F irrotational flux coordinates with |m - F| expected to be cyclic. For a multi-loop circuit (F > 1), a diagonal is found only when there are no more JJs than there are irrotational degrees of freedom. Equivalently, the number of inductors in a circuit containing both JJs and inductors must be greater than or equal to F, i.e. m ≥ F. These conditions can also be explained as the following: Each JJ must be physically represented by at least one dynamical degree of freedom, and there must be at least one inductor per independent circuit loop to capture the circuit flux behavior. Below, we illustrate these conditions by example. § EXAMPLE DEVICE DESIGNS The following demonstrates the circuit design method via two examples: A variable β rf SQUID and a circuit that implements 2-bit computations. §.§ Variable beta rf SQUID Consider analyzing the vβ-rf SQUID implemented by Ref. <cit.> and shown in Fig. <ref>. One motivation is to reproduce the two-dimensional potential created by the circuit via the introduced formalism. Notably, though, the result shows that the methodology is not only useful and calculationally efficient, but also reproduces the results of previous approaches. As such, the circuit analysis boils down to finding a coordinate transformation that leaves the equations of motion in the form of Langevin dynamics in terms of dynamical degrees of freedom. Note that: = (ϕ_J_1 ϕ_J_2 ϕ_L ϕ_l_1 ϕ_l_2)^𝖳 , = (ϕ_J_1 ϕ_J_2)^𝖳 , = ( ϕ_L ϕ_l_1 ϕ_l_2)^𝖳 ,  and = (_1 _2 _3 ϕ_x_1 ϕ_x_2)^𝖳 . With every branch orientation in Fig. <ref> pointing upwards, fluxoid quantization gives: 𝐑 = [ 1 0 -1 1 0; -1 1 0 -1 1 ] , where each row's entries correspond to the column orientation of { J_1, J_2, L, l_1, l_2}, respectively. This suggests that: ^-1 = diag(C_J_1^-1, C_J_2^-1, C_L^-1, C_l_1^-1, C_l_2^-1)  . To satisfy Eq. (<ref>), let: 𝐌^𝖳 = [ M_11 M_21 M_31; M_12 M_22 M_32; M_13 M_23 M_33; M_14 M_24 M_34; M_15 M_25 M_35; ] . Then, with the assumption that C_l C_l_1 = C_l_2 = C_L and C_J C_J_1 = C_J_2, each column of 𝐌^𝖳 satisfies: CM_i1 = M_i3 - M_i4 C(M_i2 - M_i1) = M_i4 - M_i5 , with C C_l / C_J and i = 1,2,3. To implement guideline (1) for the first n=2 rows of and guideline (2) for the last |m-F|=1 row of , we write a subset of the solution space of Eq. 
(<ref>) in the augmented matrix: = [ 𝐌; 𝐑 ] = [ 1/2 1/2 C/4 -C/4 -C/4; -1 1 0 C -C; 0 0 1 1 1; 1 0 -1 1 0; -1 1 0 -1 1 ] . Guidelines (1) and (2) are realized by taking C → 0. Consequently, we expect there to be |m-F|=1 cyclic irrotational degrees of freedom once the circuit Lagrangian ℒ is found. Next, inverting yields: ^-1 = [ 1 -1/2 0 0 0; 1 1/2 0 0 0; 2/3 0 1/3 -2/3 -1/3; -1/3 1/2 1/3 1/3 -1/3; -1/3 -1/2 1/3 1/3 2/3 ] , which, through Eq. (<ref>), aids in computing: = [ 2C_J 0 0 0 0; 0 C_J/2 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; ] in the expected form. As there are no mutual inductance couplings, the inductance matrix is: = [ L 0 0; 0 l_1 0; 0 0 l_2 ] . Recalling Eq. (<ref>), and writing the circuit Lagrangian from Eq. (<ref>) in terms of irrotational branch fluxes, produces: ℒ = C_J2( 2_1^ 2 + 12_2^ 2) -19L(2_1 + _3 - 21 - 2)^2 -19l_1( -_1 + 3/2_2 + _3 + 1 - 2)^2 -19l_2(-_1 - 3/2_2 + _3 + 1 + 22)^2 +E_2+1cos_1 cos_22 - E_2-1sin_1 sin_22 , where E_2±1 = E_J_2± E_J_1. The Lagrangian is independent of _3 which indicates that it is, as expected, a cyclic degree of freedom. It can be eliminated by computing its equation of motion, finding that _3 = _1 - 1 - 2/2, and substituting this into ℒ. We can now identify a map between the irrotational flux variables and the fluxes appearing in Ref. <cit.>: _1 = ϕ , _2 = ϕ_dc , 1 = ϕ_x - 12ϕ_xdc ,  and 2 = ϕ_xdc . Making these substitutions into Eq. (<ref>) yields a Lagrangian ℒ that matches that of Ref. <cit.> with the preceding variable substitutions: ℒ = ℒ_T - ℒ_vβ-rf = C_J2( 2ϕ^2 + 12ϕ_dc^2) - 12L(ϕ - ϕ_x)^2 - 12l(ϕ_dc - ϕ_xdc)^2 + E_2+1cosφcosφ_dc2 - E_2-1sinφsinφ_dc2 . §.§ Inductively Coupled vβ-rf SQUIDs Consider inductively coupling two vβ-rf SQUIDs through L_1 and L_2 via the mutual inductance M M_12 = M_21, shown in Fig. <ref>. This device enables 2-bit computations, which is physically realized by controlling the tunable circuit parameters ϕ_ix and ϕ_ixdc for i = 1,2. Let's derive the potential. The choice of flux quantization is represented in circuit network-theoretic terms through: = [ 1 0 0 0 -1 0 1 0 0 0; -1 1 0 0 0 0 -1 1 0 0; 0 0 1 0 0 -1 0 0 1 0; 0 0 -1 1 0 0 0 0 -1 1 ] . Using the irrotational constraint ^-1 = 0, we find that the elements of need to satisfy: CM_i1 = M_i5 - M_i7 C(M_i2 - M_i1) = M_i7 - M_i8 CM_i3 = M_i6 - M_i9 C(M_i4 - M_i3) = M_i9 - M_i10 . Taking a lesson from the single vβ-rf case and after taking C → 0, our choice of becomes: = [ 1/2 1/2 0 0 0 0 0 0 0 0; -1 1 0 0 0 0 0 0 0 0; 0 0 1/2 1/2 0 0 0 0 0 0; 0 0 -1 1 0 0 0 0 0 0; 0 0 0 0 1 0 1 1 0 0; 0 0 0 0 0 1 0 0 1 1; ] , whose inverse is: = [ 1 -1/2 0 0 0 0 0 0 0 0; 1 1/2 0 0 0 0 0 0 0 0; 0 0 1 -1/2 0 0 0 0 0 0; 0 0 1 1/2 0 0 0 0 0 0; 2/3 0 0 0 1/3 0 -2/3 -1/3 0 0; 0 0 2/3 0 0 1/3 0 0 -2/3 -1/3; -1/3 1/2 0 0 1/3 0 1/3 -1/3 0 0; -1/3 -1/2 0 0 1/3 0 1/3 2/3 0 0; 0 0 -1/3 1/2 0 1/3 0 0 1/3 -1/3; 0 0 -1/3 -1/2 0 1/3 0 0 1/3 2/3; ] . We then eliminate the cyclic degrees of freedom _5 and _6. Given our solution choice for , the map between our and Ref. <cit.>'s notation is: _i = ϕ_j  , _i+1 = ϕ_jdc , i = ϕ_jx - 12ϕ_jxdc ,  and i+1 = ϕ_jxdc . Here, the index i corresponds either to the ith irrotational degree of freedom or ith external flux. While the index j corresponds to the jth vβ-rf SQUID, for which i = 1,3 and j = 1,2, respectively. Next, the inductive contribution to the potential, when taking L L_1 = L_2 and l l_1 = l_2 = l_3 = l_4, is found by first writing: = [ L -M 0 0 0 0; -M L 0 0 0 0; 0 0 l 0 0 0; 0 0 0 l 0 0; 0 0 0 0 l 0; 0 0 0 0 0 l ] . 
Then, subsequently taking the inverse gives: ^-1 = [ 1/L_α μ/L_α 0 0 0 0; μ/L_α 1/L_α 0 0 0 0; 0 0 1/l 0 0 0; 0 0 0 1/l 0 0; 0 0 0 0 1/l 0; 0 0 0 0 0 1/l ] , where L_α = α L, α = 1 - μ^2, and μ = M / L. In Ref. <cit.>'s notation, the potential is: ℒ_V = -E_2+1cosφ_1 cosφ_1dc2 + E_2-1sinφ_1 sinφ_1dc2    -E_4+3cosφ_2 cosφ_2dc2 + E_4-3sinφ_2 sinφ_2dc2    + 12l (ϕ_1dc - ϕ_1xdc)^2 + 12l (ϕ_2dc - ϕ_2xdc)^2    + 12L_α(ϕ_1 - ϕ_1x)^2 + 12L_α(ϕ_2 - ϕ_2x)^2    + μL_α(ϕ_1 - ϕ_1x)(ϕ_2 - ϕ_2x)  . If we assume small coupling by keeping only linear terms in μ, then L_α^-1→ L^-1, resulting in Eq. (<ref>) simplifying to be the sum of two vβ-rf SQUIDs potential contributions and a mutual inductance coupling ℒ_M.I.: ℒ_V = ℒ_vβ-rf 1 + ℒ_vβ-rf 2 + ℒ_M.I. . Figure <ref> displays Eq. (<ref>)'s potential. There are four stable energy minima, that can be assigned to the computational memory states 00, 01, 10, and 11. Leveraging the meta-stable regions near each minima reliably stores information. By varying M's values ϕ_ix and ϕ_ixdc, we can process that information. And, then, in turn, controlling the dynamics of the Euler-Lagrange equation of motion implements various 2-bit logic gates. § CONCLUSION We introduced a superconducting circuit formalism that permits exploring the classical information processing of a proposed superconducting circuit through understanding the circuit's energetics and subsequent dynamics. The techniques reproduce potentials used in experimentally investigating information-bearing degrees of freedom <cit.>, as well as constructing a device that supports 2-bit computations. A sequel describes the information processing properties and performance in detail. This is the first communication of a series on physically-realizable dynamical computing. The present goal being to introduce the design formalism. In point of fact, the coupled vβ-rf SQUIDs shown in Fig. <ref> also support the information processing behavior exhibited by a Szilard engine <cit.>. Follow-on efforts explore the thermodynamic properties of these circuits, as well as how to implement 2-bit universal gates—e.g., NAND and NOR—and the universal reversible Fredkin gate using three coupled circuits. § ACKNOWLEDGMENTS The authors thank Scott Habermehl and Greg Wimsatt for helpful discussions, as well as the Telluride Science Research Center for its hospitality during visits and the participants of the Information Engines workshop there for their valuable feedback. J.P.C. acknowledges the kind hospitality of the Santa Fe Institute and California Institute of Technology. This material is based on work supported by, or in part by, the U.S. Army Research Laboratory and U.S. Army Research Office under Grant No. W911NF-21-1-0048.
http://arxiv.org/abs/2307.02856v1
20230706084831
Minimization of the buckling load of a clamped plate with perimeter constraint
[ "Michele Carriero", "Simone Cito", "Antonio Leaci" ]
math.AP
[ "math.AP", "math.OC", "49Q10" ]
We look for minimizers of the buckling load problem with perimeter constraint in any dimension. In dimension 2, we show that the minimizing plates are convex; in higher dimension, by passing through a weaker formulation of the problem, we show that any optimal set is open and connected. For higher eigenvalues, we prove that minimizers exist among convex sets with prescribed perimeter. § INTRODUCTION Let d∈ℕ, d≥ 2, and let Ω⊂ℝ^d be a bounded Lipschitz domain. We say that Λ(Ω) is an eigenvalue of the buckling load problem (briefly, a buckling eigenvalue) if there exists u∈ H^2_0(Ω)∖{0} such that -Δ^2 u=Λ(Ω)Δ u in Ω, u=∂ u/∂ν=0 on ∂Ω. The buckling eigenvalues form an increasing sequence 0<Λ_1(Ω)≤Λ_2(Ω)≤…↗+∞ and, for any h∈ℕ, they can be characterized variationally by the min-max formula involving the Rayleigh quotient Λ_h(Ω)=min_V⊂ H_0^2(Ω), dim V=hmax_u∈ V∖{0}∫_Ω(Δ u)^2 dx/∫_Ω |∇ u|^2 dx. In this paper we mainly focus on the first eigenvalue, i.e. Λ_1(Ω)=min_u∈ H_0^2(Ω)∖{0}∫_Ω(Δ u)^2 dx/∫_Ω |∇ u|^2 dx. The minimum above is achieved only on the solutions of Problem (<ref>). Our aim is to show an existence result for a shape optimization problem involving Λ_1(Ω), whose formulation is appropriate in view of the physical interpretation of the PDE. Indeed, in a 2-dimensional setting, Ω can be thought of as a thin elastic plate that is clamped along its boundary ∂Ω and subject to compressive forces (the so called “buckling forces”) across ∂Ω; these forces lie in the same plane as Ω, which may deflect out of its plane when the forces reach a certain magnitude. Λ_1(Ω) is called the “buckling load” of Ω and can be interpreted as the energy associated with the plate Ω in this phenomenon. There are some works in the literature treating this problem by constraining the volume of the admissible sets, and only little information about the minimizers is available. The first result that uses variational methods is <cit.>, where the authors prove that the problem in ℝ^2 admits a quasi open minimizer without prescribing any bounded design region; moreover, supposing that a minimizer Ω is a sufficiently smooth domain with sufficiently smooth eigenfunctions, the authors show that Ω has to be a disk of maximal area. Several years later, in <cit.> an existence result in dimensions 2 and 3 is proved with a different technique based on the eigenfunctions, but introducing a (big) bounded design region to assure extra compactness. Recently, <cit.> provided an existence result for minimizers in the whole of ℝ^d based on a mixed strategy: a concentration-compactness argument inspired by <cit.> to get a limit function u and a regularity argument for u to build the optimal open set using the superlevel sets of continuous functions; moreover, the author shows that the open minimizers are also connected, but nothing is proved about the regularity of the boundary.
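For orientation, the variational characterization recalled above already yields cheap upper bounds: inserting the radial trial function u(x)=(1-|x|^2)^2∈ H^2_0(B_1) into the Rayleigh quotient gives Λ_1(B_1)≤ 16 for the unit disk, to be compared with the classical value j_{1,1}^2≈ 14.68. A short symbolic computation (our own illustration, using sympy):

import sympy as sp

r = sp.symbols('r', positive=True)
u = (1 - r**2)**2                       # radial trial function with u(1) = u'(1) = 0
u_r = sp.diff(u, r)
lap_u = sp.diff(u, r, 2) + u_r / r      # Laplacian of a radial function in R^2

num = sp.integrate(lap_u**2 * 2 * sp.pi * r, (r, 0, 1))    # int_B (Delta u)^2 dx
den = sp.integrate(u_r**2 * 2 * sp.pi * r, (r, 0, 1))      # int_B |grad u|^2 dx
print(sp.simplify(num / den))           # 16, an upper bound for Lambda_1 of the unit disk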
On the other hand, it seems that there are no available results in literature for higher eigenvalues, even if the admissible shapes satisfy some topological constraint. Nevertheless, in view of the physical interpretation of the buckling eigenvalues, it seems reasonable to replace the volume constraint with the perimeter constraint; in this work, given p>0, we focus on the following problem min{Λ_1(Ω):Ω⊂^d open, |Ω|<∞, P(Ω)≤ p}. An appropriate interpretation in ^2 can be the following: given a deformable support (with prescribed length p) wherein the admissible plates can be clamped, we want to find if there exists a plate that minimizes the buckling load due to the buckling forces acting across the support itself; in other words, we look for the optimal plate that can be clamped in the given support. The main result of the paper is the following. Problem (<ref>) admits a solution. Every minimizer is an open connected set. In ^2 any optimal set is open, bounded and convex. We also prove the following existence result for higher eigenvalues in the framework of convex sets. Problem min{Λ_h(Ω):Ω⊂^d open and convex, ℋ^d-1(∂Ω)≤ p}, admits a solution. We finally point out that spectral shape optimization problems governed by higher order PDEs are, in general, harder to handle than the second order counterparts and non existence results often appear. To get acquainted on the non existence of optimal shapes for higher order problems, see several results in <cit.> (and the reference therein). We just mention, for instance, <cit.>, which gives an example of non existence of optimal shapes for a spectral functional neither with volume nor with perimeter constraint among convex sets. The paper is structured as follows. In Section <ref> we give some preliminary results and we fix the notation. In Section <ref> we treat separately the existence case in dimension 2 (it is almost straightforward, but we insert the proof for the reader's convenience). In Section <ref> we deal with the existence in any dimension in a weak framework and we show that the optimal shapes for the weak problem are in fact minimizers for the original problem, completing the proof of Theorem <ref>. In Section <ref> we prove Theorem <ref>. We finally set some open problems and perspectives in Section <ref>. § NOTATION AND PRELIMINARY RESULTS In this section we fix the notation and recall some results used throughout the paper. For x∈ℝ^d and r>0, B_r(x) will denote the open ball of radius r centered in x; when x is omitted, we consider the ball centered in the origin. For every measurable set E⊆ℝ^d, we will use the symbols χ_E for the characteristic function of E, E^c for its complement and tE for the rescaled set {tx:x∈ E}. As usual, |E| and ℋ^s(E) (s>0) stand respectively for the Lebesgue measure and the Hausdorff s-dimensional measure of E; if E is a piecewise regular hypersurface, ℋ^d-1(E) coincides with its area measure. We will denote by ℋ-dim(E) the Hausdorff dimension of the set E; for sufficiently regular sets, it coincides with the topological dimension of the set E, e.g. if E is an open set of ℝ^d, ℋ-dim(E)=d (for further details see Chapter 2, Section 8 in <cit.>). 
For every open set Ω⊂^d, we will denote by L^p(Ω) the usual Lebesgue space of (classes of) p-summable functions, by W^k,p(Ω) the Sobolev space of functions whose (weak) derivatives are p-summable up to order k and by H^k(Ω) the (Hilbert) space W^k,2(Ω); whenever f∈ L^p(K) or f∈ W^k,p(K) for any compact set K⊂Ω, we say that f∈ L^p_loc(Ω) or f∈ W_loc^k,p(Ω) respectively. For the convenience of the reader, whenever A,B are open sets, A⊆ B and u∈ W^k,p_0(A), we will denote still by u its zero extension to the whole B. Moreover, for any open set Ω and any test function u∈ H^2_0(Ω), we denote the Rayleigh quotient in (<ref>) by R_Ω(u). Let E⊆ℝ^d be measurable and let Ω⊆ℝ^d be open. We define the perimeter of E in Ω as P(E,Ω):=sup{∫_Ediv(φ) dx : φ∈ C^1_c(Ω;ℝ^d),φ_∞≤ 1} and we say that E is of finite perimeter in Ω if P(E,Ω)<+∞. If Ω=ℝ^d we simply say that E is of finite perimeter and denote its perimeter by P(E). Let us recall that, if E is sufficiently regular (e.g. if E is either a bounded Lipschitz domain or a convex set), it holds P(E,Ω)=ℋ^d-1(∂ E∩Ω). In order to minimize Problem (<ref>) and its weak version, Problem (<ref>), using the direct methods of the Calculus of Variation (or some concentration-compactness argument), we need lower semicontinuity of the buckling eigenvalues with respect to the some suitable topology on the class of sets of ℝ^d where the problem is set. As we will see, two good choices for our purposes are the Hausdorff topology of open sets (in the 2-dimensional setting, where the perimeter constraint and the monotonicity of the functional imply the convexity of the optimizers) and the L^1-topology (in higher dimension, where a weak formulation in the class of measurable sets is needed). We say that a sequence of measurable sets (Ω_n)_n converges in measure to a measurable set Ω if |Ω_nΔΩ|→ 0, namely if χ_Ω_n→χ_Ω in L^1(^d). We say that (Ω_n)_n locally converges in measure to Ω if χ_Ω_n→χ_Ω in L^1_loc(^d). This kind of convergence turns out to be suitable to our purposes to have compactness of minimizing sequences of sets of finite perimeter. Let A⊂ℝ^d be an open bounded set and let (E_n)_n be a sequence of subsets of A with finite perimeter such that sup_n P(E_n,A)<+∞. Then, there exists E⊆ A with finite perimeter in A such that, up to subsequences, χ_E_n→χ_E in L^1(^d) and P(E,A)≤lim inf_n→∞ P(E_n,A). In general, one of the main disadvantages of the convergence in measure is that no topological properties of converging sequences can be deduced for the limit set in this framework. To this aim, we introduce the Hausdorff convergences. Let A,B⊆ℝ^d be closed. We define the Hausdorff distance between A and B by d_H(A,B):=max{sup_x∈ Adist(x,B),sup_x∈ Bdist(x,A)}. The topology induced by this distance is called Hausdorff topology (or simply H-topology) on closed sets. The counterpart of the Hausdorff topology for open sets is defined below. Let A,B⊆ℝ^d be open. We define the Hausdorff-complementary distance between A and B by d_H^c(A,B):=d_H(A^c,B^c). The topology induced by this distance is called Hausdorff-complementary topology (or simply H^c-topology) on open sets. This topology guarantees the compactness of sequences of open convex sets under suitable hypotheses. The following proposition contains some results proved in <cit.>, Section 2.4, and shows us that Hausdorff convergences preserve convexity and assure continuity for Lebesgue measure and perimeters of convex sets. 
The following results hold for convex sets: (i) If A⊆ B, then ℋ^d-1(∂ A)≤ℋ^d-1(∂ B); (ii) If A_n, A are closed (respectively open) and convex and A_n→ A with respect to the H-topology (respectively H^c-topology), then χ_A_n→χ_A in L^1; moreover, if for every n∈ it holds ℋ-dim(A)=ℋ-dim(A_n), then ℋ^d-1(∂ A_n)→ℋ^d-1(∂ A). (iii) |A|≤ρℋ^d-1(∂ A), where ρ is the radius of the biggest ball contained in A. (iv) If a sequence (A_n)_n of closed convex sets H-converges to a closed set A, then A is a closed convex set; if a sequence (B_n)_n of open convex sets H^c-converges to an open set B, then B is an open convex set. (v) Let D⊂ℝ^d a fixed compact set. Then, the class of the closed convex sets contained in D is compact in the H-topology and the class of the open convex sets contained in D is compact in the H^c-topology We recall an important result due to F. John (see <cit.>), involving convex sets. Let K⊂^d a compact convex set with non-empty interior. Then, there exists an ellipsoid E⊂^d centered in x_0∈ E such that E⊆ K⊆ x_0+d(E-x_0) (where the ellipsoid x_0+d(E-x_0) is obtained by a dilation of E by a factor d and with center x_0). The following properties of the buckling eigenvalues will be useful throughout the paper. 1 (i) Let Ω_1,Ω_2⊂^d be open sets such that Ω_1⊂Ω_2; then, for any h∈, Λ_h(Ω_2)≤Λ_h(Ω_1). (ii) For every set of finite perimeter Ω⊂^d and t>0 Λ_h(tΩ)=t^-2Λ_h(Ω). The proof of item (i) is straightforward since H^2_0(Ω_1)⊆ H^2_0(Ω_2). Item (ii) can be proved via the natural change of variables tΩ∋ x↦ y=x/t∈Ω in the Rayleigh quotient and the one-to-one correspondence between u∈ H^2_0(tΩ) and u(t ·)∈ H^2_0(Ω). In view of the scaling properties of the perimeter and of the buckling eigenvalues, Problem (<ref>) is equivalent to the scale invariant problem min{P(Ω)^2/d-1Λ_1(Ω):Ω⊂^d open, |Ω|<∞,}. Indeed, for any t>0 one has P(tΩ)^2/d-1Λ_1(tΩ)=(t^d-1)^2/d-1 P(Ω)^2/d-1t^-2Λ_1(Ω)=P(Ω)^2/d-1Λ_1(Ω) Moreover, Problem (<ref>) is also equivalent to the penalized problem min{Λ_1(Ω)+β P(Ω):Ω⊂^d open, |Ω|<∞,} for some β>0. More precisely, if Ω̂ is a solution of Problem (<ref>), there exists β>0 such that Ω̂ is a solution of Problem (<ref>) and, viceversa, if Ω̂ is a solution of Problem (<ref>), then it solves Problem (<ref>) with bound on the perimeters given by p=P(Ω̂). The second implication is straightforward. To prove the equivalence, then, it is sufficient to consider a solution Ω̂ of Problem (<ref>), define the function on _+ F(t):=Λ_1(tΩ̂)+β P(tΩ̂) and show that it attains its minimum in t=1. By the scaling properties of the perimeter and of the eigenvalues we have F(t)=t^-2Λ_1(Ω̂)+β t^d-1P(Ω̂). To conclude, we choose β>0 in such a way that the derivative F'(t)=-2t^-3Λ_1(Ω̂)+β(d-1)t^d-2P(Ω̂) vanishes in t=1, i.e. β=2Λ_1(Ω̂)/(d-1)P(Ω̂). We close this section recalling two useful results involving some properties of the Sobolev spaces. Let 1≤ p<∞ and let f∈ W^1,p(^d). Then ∇ f=0 a.e. on {f=0}. Let m be a positive integer, let 1<p<∞ and let f∈ W^m,p(^d). Let Ω⊂^d be an open set. Then the following statements are equivalent: (a) D^α f=0 everywhere in Ω^c for all multiindices α such that 0≤|α|≤ m - 1; (b) f∈ W^m,p_0(Ω). § EXISTENCE OF OPTIMAL SHAPES FOR THE FIRST BUCKLING EIGENVALUE: THE PLANAR CASE We are able to prove the existence result in dimension two, where the perimeter constraint assures compactness. Problem (<ref>) admits a bounded, open, convex minimizer Ω⊂^2 with maximal boundary length. We first notice some simplifications that can be done. 
* Since Ω↦Λ_1(Ω) is invariant under translations of the connected components, we can suppose that they lie at zero distance each other; for the same reason, we can suppose that all the admissible shapes Ω are contained in the same bounded design region. Indeed, for any open set Ω⊂^2 the condition P(Ω)≤ p implies diam(Ω)<p/2 and then all the admissible domains can be translated in a bounded design region D⊂⊂^2. * For any admissible Ω, its convex hull Ω̃ is still an admissible set, since it is open and ℋ^1(∂Ω̃)≤ℋ^1(∂Ω). In view of the decreasing monotonicity of the map Ω↦Λ_1(Ω) with respect to the set inclusion, since Ω⊂Ω̃, we have Λ_1(Ω̃)≤Λ_1(Ω). Then, we can reduce ourselves to the class of open convex sets with boundary length less than or equal to p. * In view of the scaling property of Λ_1 and the monotonicity with respect to inclusions, we have that the admissible sets can be assumed to have exactly boundary length equal to p (the perimeter constraint is saturated). In other words, in ^2, without loss of generality we can study min{Λ_1(Ω):Ω⊂ D,Ω open and convex, ℋ^1(∂Ω)=p}. From now on, we denote 𝒜_p:={Ω⊂ D,Ω open and convex, ℋ^1(∂Ω)=p}. Let now (Ω_n)_n be a minimizing sequence for (<ref>) and, for any n∈, let u_n∈ H^2_0(Ω_n) an eigenfunction for Λ_1(Ω_n) with ∇ u_n_2=1. In view of the properties of the Hausdorff convergence (Proposition <ref>), there exist a subsequence (Ω_n_k)_k and an open convex set Ω⊂ D such that Ω_n_k→Ω in the sense of Hausdorff; notice that the convergence is also in measure and that Ω≠∅. Otherwise, in view of the convergence in measure, we would have |Ω_n_k|→ 0 and from the Payne inequality (see, for instance, inequality (3.26) in <cit.>), we would obtain Λ_1(Ω_n_k)≥λ_2(Ω_n_k)→+∞, where λ_2 is the second eigenvalue of the Dirichlet-Laplacian. This would contradict the minimality of the sequence (Ω_n)_n. Then, Ω is an open convex set of positive measure and, in addition, it holds ∂Ω_n_k→∂Ω in the sense of Hausdorff and ℋ^1(∂Ω)=lim_k→+∞ℋ^1(∂Ω_n_k)=p, so Ω is an admissible set for Problem (<ref>). Now, the corresponding subsequence of eigenfunctions (u_n_k)_k is bounded in H^2_0(D); indeed u_n_k_H^2_0(D)=u_n_k_H^2_0(Ω_n_k)=∫_Ω_n_k(Δ u_n_k)^2 dx=Λ_1(Ω_n_k)≤ C. Then, there exist a further subsequence (still denoted with the same index) and a function u∈ H^2_0(D) such that u_n_k⇀ u weakly in H^2_0(D); this implies that u_n_k→ u strongly in H^1_0(D). Moreover, since Ω_n_k converges to Ω in measure, we deduce that u∈ H^2_0(Ω) is an admissible test function for Λ_1(Ω). In view of the lower semicontinuity of the H^2_0-norm with respect to the weak convergence in H^2_0(D) it finally holds Λ_1(Ω)≤∫_Ω(Δ u)^2 dx≤lim inf_k→+∞∫_Ω_n_k(Δ u_n_k)^2 dx=lim inf_k→+∞Λ_1(Ω_n_k)=inf_Ω∈𝒜_pΛ_1(Ω), proving the thesis. § EXISTENCE OF OPTIMAL SHAPES FOR THE FIRST BUCKLING EIGENVALUE: THE CASE OF HIGHER DIMENSION In general, the perimeter constraint does not imply the boundedness and the convexity of optimal shapes in higher dimension (this is a peculiarity of the 2-dimensional case). To prove the existence of optimal shapes in higher dimension, we follow a different strategy. By following the approach of <cit.> (later used in <cit.>), we look for minimizers for Problem (<ref>) with neither topological constraint nor bounded design region via a concentration-compactness argument. The main difference is that in the previous works the authors dealt with a measure constraint, whereas we have to preserve a perimeter constraint. 
Proposition <ref> suggests a good tool to this aim, since the result guarantees the lower semicontinuity of the perimeter and the compactness with respect to the convergence in measure for a sequence of measurable sets; in view of this, the original framework of open sets does not seem to be the most appropriate to prove an existence result for Problem (<ref>). A good strategy in this sense is provided in <cit.>, where the authors use a suitable weak formulation of the Dirichlet-Laplacian eigenvalues in the framework of sets of finite perimeter. More precisely, in order to set the problem in the class of measurable sets instead of working only with open sets, for every set of finite perimeter Ω⊂^d they introduce the Sobolev-like spaces H̃_0^1(Ω):={u∈ H^1(^d):u=0 a.e. in Ω^c} and they prove the existence of optimal shapes for a weaker version of the functional with perimeter constraint. After proving the existence, they are able to come back to the original problem, showing that weak minimizers are, in fact, open sets. Let Ω⊂^d be a set of finite perimeter. We define the Sobolev-like space H̃_0^2(Ω)⊂ H^2(^d) as H̃_0^2(Ω):={u∈ H^2(^d):u=0 a.e. in Ω^c}. We define the h-th weak buckling eigenvalue by Λ̃_h(Ω)=inf_V⊂H̃_0^2(Ω), V=hmax_u∈ V∖{0}∫_Ω(Δ u)^2 dx/∫_Ω |∇ u|^2 dx. In particular, the first weak buckling eigenvalue of Ω is given by Λ̃_1(Ω)=inf_u∈H̃_0^2(Ω)∖{0}∫_Ω(Δ u)^2 dx/∫_Ω |∇ u|^2 dx, Once given the weaker version of the functional, we look for a right class of admissible sets; our choice is the following: 𝒜̃_p:={Ω⊂^d measurable, |Ω|<∞, P(Ω)≤ p}. The new problem to consider is thus min{Λ̃_1(Ω):Ω∈𝒜̃_p}. The choice of this weaker framework has been made in order to ensure the completeness of the class of admissible sets with respect to the local convergence in measure: in other words, a converging sequence of admissible sets converges (locally in measure) to an admissible set. Notice that, in the original statement (<ref>), this request fails: the limit set of a sequence of open sets converging in measure is not open, in general. For that reason, it has been necessary to choose also the functional in a weaker sense, keeping into account the new functional space for the test functions. Only the inclusion H̃_0^2(Ω)⊇ H_0^2(Ω) is valid in general, even for open sets. Nevertheless, if the set Ω is open and sufficiently regular, the equality H̃_0^2(Ω)=H_0^2(Ω) holds; the equality fails whenever Ω has inner cracks, e.g. if Ω⊂^3 is a ball with an equatorial cut that removes a maximal half-disk, namely Ω=B_1(0)∖{x_3=0, x_1≥ 0}. Then, in general, for any open set Ω it holds Λ̃_h(Ω)≤Λ_h(Ω). Notice that also in Problem (<ref>) we avoid the apriori prescription of a bounded design region where the admissible sets are contained. A similar assumption would lead straightforwardly to the compactness of a minimizing sequence of admissible sets, see <cit.>. We now state some useful properties of the weak eigenvalues. 1 (i) Let Ω_1,Ω_2⊂^d be sets of finite perimeter such that |Ω_1ΔΩ_2|=0; then, for any k∈, Λ̃_h(Ω_1)=Λ̃_h(Ω_2). (ii) Let Ω_1,Ω_2⊂^d be sets of finite perimeter such that |Ω_2∖Ω_1|=0 (i.e. Ω_1⊂Ω_2 up to a ℒ^d-negligible set); then, for any h∈, Λ̃_h(Ω_2)≤Λ̃_h(Ω_1). (iii) For every set of finite perimeter Ω⊂^d and t>0 Λ̃_h(tΩ)=t^-2Λ̃_h(Ω). Item (i) follows by observing that |Ω_1ΔΩ_2|=0 implies H̃^2_0(Ω_1)=H̃^2_0(Ω_2). Items (ii) and (iii) are proven in the same way as in the classical framework. 
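The scaling property (iii) is what drives the equivalence between the perimeter-constrained and the penalized formulations discussed next. As a quick numerical sanity check (ours, with placeholder values of Λ̃_1 and P chosen only for illustration):

import numpy as np

# With beta = 2*Lambda/((d-1)*P), the function F(t) = Lambda*t**(-2) + beta*P*t**(d-1)
# is minimized at t = 1, which is the computation behind the penalized formulation.
d, Lam, P = 3, 10.0, 5.0
beta = 2 * Lam / ((d - 1) * P)
t = np.linspace(0.2, 3.0, 2001)
F = Lam * t**(-2) + beta * P * t**(d - 1)
print(t[np.argmin(F)])     # approximately 1.0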
As we proved for Problem (<ref>), also Problem (<ref>) has two equivalent formulations; more precisely it is equivalent to the scale invariant problem min{P(Ω)^2/d-1Λ̃_1(Ω):Ω⊂^d measurable, |Ω|<∞,}. and to the penalized problem min{Λ̃_1(Ω)+β P(Ω):Ω⊂^d measurable, |Ω|<∞,} for some β>0. The proof is the same as in Remark <ref>. We point out that the infimum in (<ref>) is not attained, in general. For that reason, we introduce the term `ε-eigenfunction' to denote a test function u^ε∈H̃_0^2(Ω) that satisfies Λ̃_1(Ω)≤∫_Ω(Δ u^ε)^2 dx/∫_Ω|∇ u^ε|^2 dx<Λ̃_1(Ω)+ε. for some ε>0. The following result is a version of <cit.> adapted to our Sobolev-like spaces. Let (w_n)_n be a bounded sequence in H^1(^d) such that w_n_L^2(^d)=1 and w_n=0 a.e. in Ω_n^c with |Ω_n|≤ C. There exists a sequence of vectors (y_n)_n⊂^d such that the sequence (w_n(·+y_n))_n does not possess a weakly convergent subsequence in H^1(^d). We use the previous result to get the contradiction in the vanishing case in Theorem <ref> as follows: we find a sequence (w_n)_n in H^1(^d) satisfying (after a possible rescaling) w_n_L^2(^d)=1, w_n=0 a.e. in Ω_n^c with |Ω_n|≤ C and such that any possible translation of w_n weakly converges; in view of Lemma <ref> (w_n)_n can not be bounded in H^1(^d) and in particular gradients must be unbounded in L^2(^d). Now, to prove the existence of minimizers for Problem (<ref>), we follow a strategy based on the concentration-compactness principle by P.-L. Lions (see <cit.>) and inspired by <cit.> (a similar argument has been reprised in <cit.>). We only have to be careful with the choice of suitable ε-eigenfunctions and to preserve the perimeter constraint. For another result of spectral shape optimization under perimeter constraint by using a concentration-compactness argument see <cit.>, where the same technique applies in the minimization of the second Dirichlet-Laplace eigenvalue. The main result of this section is the following. Problem (<ref>) admits a measurable solution Ω̂⊂^d with P(Ω̂)=p. First of all, we show that every solution Ω̂⊂^d has maximal perimeter. Otherwise, if P(Ω̂)<p, the dilated set Ω̂':=(p/P(Ω̂))^1/d-1Ω̂ satisfies P(Ω̂')=p and, in view of the decreasing monotonicity and the scaling property of Λ̃_1, it holds Λ̃_1(Ω̂')<Λ̃_1(Ω̂), leading to a contradiction with the minimality of Ω̂. Now we prove the existence of a minimizer for Problem (<ref>). Let (Ω_n)_n⊂𝒜̃_p be a minimizing sequence for Problem (<ref>) and let u_n∈H̃_0^2(Ω_n) a corresponding sequence of normalized weak 1/n-eigenfunctions, namely we have ∫_^d|∇ u_n|^2 dx=1, Λ̃_1(Ω_n)≤∫_^d(Δ u_n)^2 dx<Λ̃_1(Ω_n)+1/n, inf_Ω∈𝒜̃_pΛ̃_1(Ω)=lim_n→+∞Λ̃_1(Ω_n)=lim_n→+∞∫_^d(Δ u_n)^2 dx. We infer u_n_L^2(^d) =u_n_L^2(Ω_n)≤|Ω_n|^2^*-2/2^*· 2u_n_L^2^*(Ω_n)≤ Cu_n_L^2^*(^d)≤ C'∇ u_n_L^2(^d) =C'∇ u_n_L^2(Ω_n)=C', where we used first the inclusion L^2^*(Ω_n)⊂ L^2(Ω_n) and then the Sobolev-Gagliardo-Nirenberg inequality since d>2; all the constants are independent of n since they depend only on the dimension d and on |Ω_n|, that is uniformly bounded. We remark that we do not use directly the Poincaré inequality to show the uniform bound on u_n_L^2(Ω_n) since Ω_n can possibly be unbounded in any direction and, even if Ω_n is bounded, the function u_n could be outside H^1_0(Ω_n). Moreover, an integration by parts yields 1=∫_^d|∇ u_n|^2 dx=-∫_^d u_nΔ u_n dx≤ C'(∫_^d(Δ u_n)^2 dx)^1/2, so ∫_^d(Δ u_n)^2 dx is also bounded from below away from zero; this implies that the infimum of Problem (<ref>) is strictly positive. 
Analogously, also u_n_L^2(^d) is larger than a positive constant and this avoids the degeneracy of the eigenfunctions u_n in L^2(^d). As already highlighted, due to the lack of a bounded design region D⊂⊂^d, we apply the concentration-compactness Lemma <cit.> to the sequence (|∇ u_n|^2)_n: There exists a subsequence (u_n_k)_k⊂ H^2(^d) such that one of the three following situations occurs. (i) Compactness. There exists a sequence of points (y_k)_k⊂^d such that ∀ε>0 ∃ R>0 s.t. ∫_B_R(y_k)|∇ u_n_k|^2 dx≥ 1-ε. (ii) Vanishing. For every R>0 lim_k→+∞(sup_y∈^d∫_B_R(y)|∇ u_n_k|^2 dx)=0. (iii) Dichotomy. There exists α∈]0,1[ such that, for every ε>0, there exist two bounded sequences (u^1_k)_k,(u^2_k)_k⊂ H^2(^d) such that lim_k→+∞dist(supp(u_k^1),supp(u_k^2))=+∞, lim_k→+∞∫_^d|∇ u^1_k|^2 dx=α, lim_k→+∞∫_^d|∇ u^2_k|^2 dx=1-α, lim_k→+∞∫_^d[|∇ u_n_k|^2-(|∇ u^1_k|^2+|∇ u^2_k|^2)] dx≤ε, lim inf_k→+∞∫_^d[(Δ u_n_k)^2-((Δ u^1_k)^2+(Δ u^2_k)^2)] dx≥ 0. Let us show that only compactness does occur. * Vanishing does not occur. Let us assume that vanishing occurs. Then every possible translation of any partial derivative ∂ u_n_k/∂ x_j weakly converges to 0 in L^2(^d). Indeed, in view of (<ref>), for any ϕ∈ C^∞_c(^d), with supp(ϕ) contained in some closed ball B_R(y), we have |∫_^d∂ u_n_k/∂ x_jϕ dx|≤(∫_B_R(y)|∇ u_n_k|^2 dx)^1/2(∫_B_R(y)ϕ^2 dx)^1/2→ 0. Now, since ∫_^d|∇ u_n_k|^2 dx=1, there exists (up to permutations) l∈{1,…,d} such that ∫_^d(∂ u_n_k/∂ x_l)^2dx≥1/d. Since u_n_k belongs to H^2(^d), by two integration by parts we get, for any i,h∈{1,…,d}, ∫_^d∂^2 u_n_k/∂^2 x_i∂^2 u_n_k/∂^2 x_h dx=∫_^d(∂^2 u_n_k/∂ x_i∂ x_h)^2dx. Then ∫_^d(Δ u_n_k)^2 dx =∑_i,h=1^d∫_^d(∂^2 u_n_k/∂ x_i∂ x_h)^2dx ≥∑_i=1^d∫_^d(∂^2 u_n_k/∂ x_i∂ x_l)^2dx=∫_^d|∇(∂ u_n_k/∂ x_l)|^2dx. We thus obtain, by recalling (<ref>) Λ̃_1(Ω_n_k)>∫_^d(Δ u_n_k)^2 dx-1/n_k≥1/d∫_^d|∇(∂ u_n_k/∂ x_l)|^2dx/∫_^d(∂ u_n_k/∂ x_l)^2dx-1/n_k. We now apply Lemma <ref> to the sequence (∂ u_n_k/∂ x_l)_k. Any translation of ∂ u_n_k/∂ x_l weakly converges to 0 in L^2(^d) and in H^1(^d) as well (every translation of u_n_k is bounded in H^2(^d), so every translation ∂ u_n_k/∂ x_l is weakly convergent in H^1(^d)). In view of Lemma <ref>, (∂ u_n_k/∂ x_l)_k can not be bounded in H^1(^d) and we get ∫_^d|∇(∂ u_n_k/∂ x_l)|^2dx→+∞, that is in contradiction with (<ref>), since (Λ̃_1(Ω_n_k))_k is a minimizing sequence for Problem (<ref>). * Dichotomy does not occur. Let us suppose that dichotomy occurs. Let α∈]0,1[ as in the statement of the dichotomy case (iii) and let ε>0. The sequences (u_k^1)_k,(u_k^2)_k∈ H^2(^d) can be chosen as follows, see <cit.> I.1 and <cit.>. Let ϕ∈ C^∞_c(^d;[0,1]) such that ϕ≡ 1 in B_1(0) and ϕ≡0 in ^d∖ B_2(0). Let (r_k)_k,(ρ_k)_k⊂_+ two diverging sequences and define for any x∈^d u_k^1(x):=ϕ(x/r_k)u_n_k(x), u_k^2(x):=(1-ϕ(x/ρ_k r_k))u_n_k(x). Notice that supp(u_k^1)⊆Ω_n_k∩ B_2r_k(0), supp(u_k^2)⊆Ω_n_k∖ B_ρ_k r_k(0) (so, that choice satisfies (<ref>)). In view of the previous choice, by using (<ref>), (<ref>), (<ref>) and the inequality a_1+a_2/b_1+b_2≥min{a_1/b_1,a_2/b_2} ∀ a_1,a_2,b_1,b_2>0, we have (possibly switching u_k^1 and u_k^2) inf_Ω∈𝒜̃_pΛ̃_1(Ω) =lim_k→+∞∫_^d(Δ u_n_k)^2 dx/∫_^d|∇ u_n_k|^2 dx≥lim sup_k→+∞∫_^d[(Δ u_k^1)^2+(Δ u_k^2)^2] dx/ε+∫_^d(|∇ u_k^1|^2+|∇ u_k^2|^2) dx ≥lim sup_k→+∞∫_^d(Δ u_k^1)^2 dx/ε+∫_^d|∇ u_k^1|^2 dx=lim sup_k→+∞∫_^d(Δ u_k^1)^2 dx/∫_^d|∇ u_k^1|^2 dx·∫_^d|∇ u_k^1|^2 dx/ε+∫_^d|∇ u_k^1|^2 dx =α/ε+αlim sup_k→+∞∫_^d(Δ u_k^1)^2 dx/∫_^d|∇ u_k^1|^2 dx≥α/ε+αlim sup_k→+∞Λ̃_1(Ω_n_k∩ B_2r_k(0)). 
Now, let us define t_k:=(P(Ω_n_k)/P(Ω_n_k∩ B_2r_k(0)))^1/d-1. Clearly, t_k≥ 1; by using the scaling property of the perimeter we have P(t_k(Ω_n_k∩ B_2r_k(0))) =t_k^d-1P(Ω_n_k∩ B_2r_k(0)) =P(Ω_n_k)/P(Ω_n_k∩ B_2r_k(0))P(Ω_n_k∩ B_2r_k(0))=P(Ω_n_k)≤ p, i.e. the dilated set t_k(Ω_n_k∩ B_2r_k(0)) is admissible for Problem (<ref>). Moreover, in view of the scaling property of the weak eigenvalues, it holds Λ̃_1(Ω_n_k∩ B_2r_k(0))=t_k^2Λ̃_1(t_k(Ω_n_k∩ B_2r_k(0))). By using this equality in (<ref>) we get inf_Ω∈𝒜̃_pΛ̃_1(Ω) ≥α/ε+αlim sup_k→+∞t^2_kΛ̃_1(t_k(Ω_n_k∩ B_2r_k(0))) ≥α/ε+αinf_Ω∈𝒜̃_pΛ̃_1(Ω)lim sup_k→+∞t^2_k. We claim that lim sup_k→+∞t^2_k=δ> 1. We argue by contradiction. Let us suppose that lim sup_k→+∞t^2_k=1 and let us denote by (t_k_j)_j a subsequence of (t_k)_k such that lim_j→+∞t^2_k_j=lim sup_k→+∞t^2_k=1. We thus have lim_j→+∞P(Ω_n_k_j)=lim_j→+∞P(Ω_n_k_j∩ B_2r_k_j(0)). This implies, in view of (<ref>), that lim_j→+∞P(Ω_n_k_j∖ B_ρ_kr_k_j(0))=0 and so |Ω_n_k_j∖ B_ρ_kr_k_j(0)|→ 0. But this is a contradiction, since it would imply Λ̃_1(Ω_n_k_j∖ B_ρ_kr_k_j(0))→+∞, that is impossible in view of the estimate lim sup_k→+∞Λ̃_1(Ω_n_k∖ B_ρ_kr_k(0))≤lim sup_k→+∞∫_^d(Δ u_k^2)^2 dx/∫_^d|∇ u_k^2|^2 dx≤C/1-α, where we applied the second limit in (<ref>) and the fact that ∫_^d(Δ u_k^2)^2 dx is uniformly bounded by a positive constant in view of (<ref>). We conclude that (<ref>) is true. By (<ref>) we obtain inf_Ω∈𝒜̃_pΛ̃_1(Ω)≥α/ε+αδinf_Ω∈𝒜̃_pΛ̃_1(Ω) and then, in view of the arbitrariness of ε>0, we infer inf_Ω∈𝒜̃_pΛ̃_1(Ω)≥δinf_Ω∈𝒜̃_pΛ̃_1(Ω)>inf_Ω∈𝒜̃_pΛ̃_1(Ω), a contradiction. Since neither vanishing nor dichotomy occurs, we conclude that compactness takes place. Then, there exists a sequence (y_k)_k⊂^d such that, up to subsequences, u_n_k(·+y_k)_H^2(^d)≤ C and there exists u∈ H^2(^d) u_n_k(·+y_k)⇀ u weakly in H^2(^d), ∇ u_L^2(^d)=1. The equality ∇ u_L^2(^d)=1 comes from (<ref>), the arbitrariness of ε>0 and the weak lower semicontinuity of the L^2-norm of the gradient: 1-ε≤∇ u_L^2(^d)≤lim inf_k→+∞∇ u_n_k(·+y_k)_L^2(^d)=lim inf_k→+∞∇ u_n_k_L^2(^d)=1, Then, since u_n_k(·+y_k)→ u strongly in L^2(^d) and lim_k→+∞∇ u_n_k(·+y_k)_L^2(^d)=1=∇ u_L^2(^d), we deduce that u_n_k(·+y_k)→ u strongly in H^1(^d). We now prove that there exists a measurable set Ω̂⊂^d such that |Ω̂|<∞, P(Ω̂)≤ p, u=0 a.e. in Ω̂^c, in order to use u∈H̃^2_0(Ω̂) as a test function for Λ̃_1(Ω̂). For any j∈, let us define the set Ω̂^(j) as the limit in measure of the sequence (Ω_n_k-y_k)_k in B_j(0). Notice that P(Ω̂^(j);B_j(0))≤lim inf_k→+∞P(Ω_n_k-y_k;B_j(0))≤ p. Let us define Ω̂:=⋃_j∈Ω̂^(j). We remark that the above union is increasing and that P(Ω̂;B_j(0))=P(Ω̂^(j);B_j(0)) ∀ j∈; then P(Ω̂)=lim_j→+∞P(Ω̂;B_j(0))=lim_j→+∞P(Ω̂^(j);B_j(0))≤ p and |Ω̂|<∞, i.e. Ω̂∈𝒜̃_p is an admissible set for Problem (<ref>). It remains to prove that u=0 a.e. in Ω̂^c. Let us fix ε>0. Since u_n_k(·+y_k)→ u strongly in L^2(^d) and, for any j∈, |(Ω_n_k-y_k∩ B_j(0))∖Ω̂^(j)|→0, then, for k∈ sufficiently large, it holds ∫_(Ω_n_k-y_k∩ B_j(0))∖Ω̂^(j)u^2_n_k(·+y_k) dx<ε. Therefore ∫_B_j(0)∖Ω̂u^2 dx =∫_B_j(0)∖Ω̂^(j)u^2 dx≤lim inf_k→+∞∫_B_j(0)∖Ω̂^(j)u^2_n_k(·+y_k) dx =lim inf_k→+∞∫_(Ω_n_k-y_k∩ B_j(0))∖Ω̂^(j)u^2_n_k(·+y_k) dx≤ε. We thus conclude that ∫_B_j(0)∖Ω̂u^2 dx=0 ∀ j∈ and then, by taking the supremum over j∈, ∫_^d∖Ω̂u^2 dx=0, which proves that u=0 a.e. in Ω̂^c. 
Then, since u_n_k(·+y_k)→ u strongly in H^1(^d) as proved above, recalling (<ref>) we finally have Λ̃_1(Ω̂)≤∫_^d(Δ u)^2 dx/∫_^d|∇ u|^2 dx≤lim inf_k→+∞∫_^d(Δ u_n_k)^2 dx/lim_k→+∞∫_^d|∇ u_n_k|^2 dx=lim inf_k→+∞∫_^d(Δ u_n_k)^2 dx/∫_^d|∇ u_n_k|^2 dx=inf_Ω∈𝒜̃_pΛ̃_1(Ω), concluding the theorem. The choice of the weaker framework of measurable sets leads us to choose a different approach to the connectedness of the admissible sets: indeed, given Ω∈𝒜̃_p, for every `cracked version' Ω' of Ω one has Λ̃_1(Ω')=Λ̃_1(Ω); in particular, the same equality holds if we consider cracks splitting the set in two connected components lying at zero distance, e.g. if Ω⊂^3 is a ball and Ω' is obtained by removing from Ω a maximal disk. In other words, it does not make sense to talk about connected components in the canonical sense, even for open sets. We point out that the only connected components that can be treated in a classical way are those lying at positive distance (since Λ̃_1(Ω) is not invariant under relative translations of connected components, unless they remain at positive distance). In view of that, we need to introduce the following definition. Let A,B⊆ℝ^d. We say that A and B are well separated if there exist two open sets E_A,E_B and two negligible sets N_A⊂ A, N_B⊂ B such that (A∖ N_A)⊆ E_A, (B∖ N_B)⊆ E_B, dist(E_A,E_B)>0. As we expected from the concentration-compactness argument in Theorem <ref>, it can not happen that the optimal set is split in two or more well separated set of positive measure (dichotomy does not occur). We now show that every optimal set for Problem (<ref>) is `connected' in a generalized sense. The proof follows a standard argument for counting the connected components in shape optimization, with the only difference that in our framework Λ̃_1 is an infimum and not a minimum, in general. Every solution Ω of Problem (<ref>) is `connected' in the sense of Definition <ref>, i.e. if Ω is union of well separated sets, only one has positive Lebesgue measure. Let us suppose that Ω=Ω_1∪Ω_2, where Ω_1 and Ω_2 are well separated sets of positive measure. Let ε>0 and let u^ε∈H̃^2_0(Ω) an ε-eigenfunction for Λ̃_1(Ω). Let us define u^ε_1:= u^ε in Ω_1 0 in Ω_1^c , u^ε_2:= u^ε in Ω_2 0 in Ω_2^c. Since Ω_1 and Ω_2 lie at positive distance, then u^ε_1∈H̃^2_0(Ω_1) and u^ε_2∈H̃^2_0(Ω_2) and so they can be used as test functions for Λ̃_1(Ω_1) and Λ̃_1(Ω_2) respectively. In view of the choice of u^ε, by (<ref>) we have Λ̃_1(Ω)+ε >∫_Ω_1(Δ u_1^ε)^2 dx+∫_Ω_2(Δ u_2^ε)^2 dx/∫_Ω_1|∇ u_1^ε|^2 dx+∫_Ω_2|∇ u_2^ε|^2 dx ≥min{∫_Ω_1(Δ u_1^ε)^2 dx/∫_Ω_1|∇ u_1^ε|^2 dx,∫_Ω_2(Δ u_2^ε)^2 dx/∫_Ω_2|∇ u_2^ε|^2 dx}≥min{Λ̃_1(Ω_1),Λ̃_1(Ω_2)} where we used inequality (<ref>). In view of the arbitrariness of ε>0, either Λ̃_1(Ω_1)≤Λ̃_1(Ω) or Λ̃_1(Ω_2)≤Λ̃_1(Ω). Let us suppose the first case; then, dilating Ω_1 by a factor t>1 in such a way that P(tΩ_1)=p, we get Λ̃_1(tΩ_2)<Λ̃_1(Ω) contradicting the minimality of Ω. The previous result about the generalized connectedness of the optimal measurable sets can be proven identically even if we replace the perimeter constraint with the measure constraint. Once we have assured the existence of optimal shapes in this weak setting, we would like to show that weak solutions are in fact open solutions. To this aim, we follow an approach proposed in <cit.>, introducing the following definition. Let Ω⊂^d a set of finite perimeter. We say that Ω is a perimeter supersolution if |Ω|<+∞ and, for every Ω̃⊃Ω of finite perimeter, we have P(Ω̃)≥ P(Ω). The following result is immediate. 
If Ω⊂^d is a solution for Problem (<ref>), then Ω is a perimeter supersolution. Let Ω̃⊂^d be a set of finite perimeter such that Ω̃⊃Ω. In view of the decreasing monotonicity of Λ̃_1(·), it holds Λ̃_1(Ω̃)≤Λ̃_1(Ω). On the other hand, by using the equivalent penalized version of Problem (<ref>), i.e. Problem (<ref>), in view of the optimality of Ω for some β>0 we obtain Λ̃_1(Ω)+β P(Ω)≤Λ̃_1(Ω̃)+β P(Ω̃)≤Λ̃_1(Ω)+β P(Ω̃), i.e. P(Ω)≤ P(Ω̃). Perimeter supersolutions enjoy good properties for our purposes; one of those is the following density estimate. Let Ω⊂^d a set of finite perimeter. We say that Ω satisfies an exterior density estimate if there exists a positive dimensional constant c=c(d) such that, for every x∈^d, one of the following situations occurs: (i) there exists r>0 such that B_r(x)⊂Ω a.e.; (ii) for every r>0, it holds |B_r(x)∩Ω^c|≥ c|B_r(x)|. The next results link the previous density estimate with the perimeter supersolutions, ensuring that there exist open optimal shapes for Problem (<ref>) and that such an open solution is, in fact, a solution for Problem (<ref>). The proof of the following proposition is omitted, as it can be found in <cit.>. Let Ω⊂^d be a perimeter supersolution. Then, Ω satisfies an exterior density estimate. In particular, if Ω⊂^d is a solution of (<ref>), then Ω satisfies an exterior density estimate. The following crucial result states that the measure theoretic interior Ω_1 of a perimeter supersolution Ω is an open set and that the test space H̃^2_0(Ω) is, in fact, the classical Sobolev space H^2_0(Ω_1). The proof is based on <cit.>, where the authors show the equality between the spaces H̃^1_0(Ω) and H^1_0(Ω_1). Let Ω⊂^d a set of finite perimeter satisfying an exterior density estimate. Then, the set of the points of density 1 for Ω Ω_1={x∈^d:∃ lim_r→ 0^+|Ω∩ B_r(x)|/|B_r(x)|=1} is open. In particular, for every perimeter supersolution Ω, Ω_1 is open. Moreover, it holds H̃^2_0(Ω)=H^2_0(Ω_1) and, in particular, Λ̃_1(Ω)=Λ̃_1(Ω_1)=Λ_1(Ω_1). The fact that Ω_1 is open follows from the exterior density estimate. To show the equality H̃^2_0(Ω)=H^2_0(Ω_1) it is sufficient to prove that H̃^2_0(Ω)⊆ H^2_0(Ω_1). Moreover, since |ΩΔΩ_1|=0, the equality H̃^2_0(Ω)=H̃^2_0(Ω_1) holds and so we prove the inclusion H̃^2_0(Ω_1)⊆ H^2_0(Ω_1). Let u∈H̃^2_0(Ω_1), so, in particular u∈ H^1(^d), u=0 a.e. in Ω^c_1 and thus u∈H̃^1_0(Ω_1). By <cit.>, since Ω is a perimeter supersolution, we have that H^1_0(Ω_1)=H̃^1_0(Ω_1) and so u∈ H^1_0(Ω_1). Then, by Proposition <ref>, we have u=0 everywhere in Ω_1^c and so, in view of Proposition <ref>, we get ∇ u=0 a.e. in Ω_1^c. Moreover, D_j u∈ H^1(^d) for any j=1,…, d, but this implies that D_j u ∈H̃^1_0(Ω_1)=H^1_0(Ω_1) and so that ∇ u=0 everywhere in Ω_1^c. By Proposition <ref> we get that u∈ H^2_0(Ω_1). Now we are ready to show that weak minimizers are equivalent to minimizing open sets for Problem (<ref>). Problem (<ref>) admits an open solution. In particular, every solution of Problem (<ref>) is equivalent to a solution of Problem (<ref>), in the sense that if Ω⊂^d is an open set solving Problem (<ref>), then it solves also Problem (<ref>) and, on the other hand, if Ω⊂^d is a set of finite perimeter solving Problem (<ref>), then Ω_1 is an open set solving Problem (<ref>). The existence of an open solution for Problem (<ref>) is assured by the fact that, for any solution Ω⊂^d of Problem (<ref>), the set Ω_1⊂^d is open (Proposition <ref>) and admissible for (<ref>). 
Indeed P(Ω_1)=P(Ω) since |ΩΔΩ_1|=0; moreover, H̃^2_0(Ω_1)=H̃^2_0(Ω), so Λ̃_1(Ω_1)=Λ̃_1(Ω)=inf_Ω∈𝒜̃_pΛ̃_1(Ω). Let us show now the equivalence between Problem (<ref>) and Problem (<ref>). Let Ω⊂^d be a minimizer for Problem (<ref>). If it was not a minimizer for Problem (<ref>), there would exist a solution A∈𝒜̃_p for Problem (<ref>) such that Λ̃_1(A)<Λ̃_1(Ω). On the other hand, since A is also a perimeter supersolution, A_1 is an open set admissible for Problem (<ref>) and so Λ̃_1(Ω)≤Λ_1(Ω)≤Λ_1(A_1)=Λ̃_1(A), a contradiction. On the other hand, if Ω∈𝒜̃_p is a solution for Problem (<ref>), then for any open set A∈𝒜_p one has Λ_1(Ω_1)=Λ̃_1(Ω)≤Λ̃_1(A)≤Λ_1(A), getting the minimality of Ω_1 for Problem (<ref>). It is a straightforward consequence of Proposition <ref>, Theorem <ref>, Proposition <ref> and Theorem <ref>. § EXISTENCE OF OPTIMAL SHAPES FOR THE HIGHER BUCKLING EIGENVALUES: THE CASE OF CONVEX SETS The existence of optimal shapes for higher eigenvalues needs a more careful investigation in the framework of sets of finite perimeter. It seems necessary an analysis of the boundedness of minimizers in order to apply an inductive concentration compactness argument as applied for instance in <cit.>. At the moment we are not able to get the required boundedness of the optimal sets since for these fourth order problems the common tools of surgery introduced in the H^1-setting fail (to get an overview for the Dirichlet-Laplace eigenvalues see, for instance, <cit.> for the problem with perimeter constraint or <cit.> for the problem with volume constraint). Nevertheless, Theorem <ref> ensures the existence of minimizers for higher eigenvalues in the framework of convex sets. The variational argument used to prove this result is based on a standard application of the direct methods which is inspired by previous works in which higher eigenvalues for the Laplace operator are minimized among convex sets (see, for instance, <cit.> for the case of Robin eigenvalues or <cit.> for the Dirichlet case). In order to apply the direct methods of the Calculus of variations we show that Λ_h is lower semicontinuous with respect to the Hausdorff convergence. Let (Ω_n)_n be a sequence of open convex sets converging to an open, non empty, convex set Ω in the Hausdorff topology and let Ω_n,Ω be contained in a compact set D⊂^d. Then, for every k∈, Λ_h(Ω)≤lim inf_n→+∞Λ_h(Ω_n). Without loss of generality, we can assume sup_n∈Λ_h(Ω_n)<+∞. Let V^n⊂ H^2_0(Ω_n) be an admissible h-dimensional space for the computation of Λ_h(Ω_n) such that Λ_h(Ω_n)=max_V^nR_Ω_n. Let {u_1^n,…,u_h^n}⊂ H^2_0(Ω_n) a H^1_0(Ω_n)-orthonormal basis for V^n; for every i=1,…,h it holds ∫_Ω_n(Δ u_i^n)^2 dx=R_Ω_n(u_i^n)≤max_V^nR_Ω_n=Λ_h(Ω_n)<C. Then, sup_nu_i^n_H^2_0(Ω_n)=sup_nu_i^n_H^2_0(D)<+∞ for every i=1,…,h. So, for every i=1,…,h, there exists u_i∈ H^2_0(D) such that u_i^n⇀ u_i in H^2_0(D). Moreover, u_i^n→ u_i in H^1(D) and Ω_n→Ω also in measure, so u_1,…,u_h∈ H^1(Ω). Notice that u_1,…,u_h are linearly independent in H^2_0(Ω), since Ω_n converges to Ω also in measure; hence, the linear space V:=span{u_1,…,u_h} is a competitor for the computation of Λ_h(Ω). Let w=∑α_iu_i realizing the maximum of the Rayleigh quotient R_Ω(·) on V and let w_n:=∑α_iu_i^n∈ V^n. Let us observe that the Dirichlet integral at the denominator converges and the numerator is lower semicontinuous and so the Rayleigh quotient is lower semicontinuous as well. 
Since w_n∈ V^n, we conclude that Λ_h(Ω) ≤max_V R_Ω=R_Ω(w)≤lim inf_n→+∞R_Ω_n(w_n)≤lim inf_n→+∞max_V^nR_Ω_n =lim inf_n→+∞Λ_h(Ω_n), obtaining the required lower semicontinuity of the buckling eigenvalues. Now we are able to prove Theorem <ref>. In order to apply the direct methods of the Calculus of Variations, we need a compactness property for a minimizing sequence (Ω_n)_n. To this aim, we just need a careful analysis about the non degeneracy and the uniform boundedness of the sequence (Ω_n)_n. Let (Ω_n)_n be a minimizing sequence of admissible open convex sets for Problem (<ref>) such that ℋ^d-1(∂Ω_n)=p. Let us show that, up to subsequences, Ω_n converges in the sense of Hausdorff (and then in measure) to a nonempty admissible open convex set Ω with ℋ^d-1(∂Ω)=p. Without loss of generality, up to translations and rotations, we can assume that diam(Ω_n)=ℋ^1(Ω_n∩{x_2=…=x_d=0}) and that min_i=2,…,d(max_Ω_nx_i-min_Ω_nx_i)=max_Ω_nx_d-min_Ω_nx_d i.e. the length of the one dimensional projection of Ω_n on the axis x_1 is equal to the diameter of Ω_n and the projection on the axis x_d has minimal length. We claim that sup_ndiam(Ω_n)<+∞ and that, up to subsequences, lim_n(max_Ω_nx_d-min_Ω_nx_d)>0. We prove (<ref>) arguing by contradiction. Let us suppose that the limit in (<ref>) is zero; in view of John's Ellipsoid Theorem <ref>, there exists an ellipsoid E_n such that, up to rotations and translations E_n⊆Ω_n⊆ d E_n. Then, since also the width of E_n vanishes, we have Λ_h(Ω_n)≥Λ_h(d E_n)=1/d^2Λ_h(E_n)≥1/d^2Λ_1(E_n)≥1/d^2λ_2(E_n)→+∞, against the minimality of Ω_n. Then (<ref>) holds. To prove that the diameters of the Ω_n sets are uniformly bounded, we argue again by contradiction. Let us suppose that the sequence of the diameters is unbounded. Since the sets Ω_n are convex and uniformly bounded in measure, the product ∏_j=1^d(max_Ω_nx_j-min_Ω_nx_j) has to be uniformly bounded. In view of our assumptions, as the diameter of Ω_n tends to infinity, necessarily the first term of the product diverges. We deduce that at least the smallest term among the remaining d-1 terms has to vanish. In other words, we have lim_n(max_Ω_nx_d-min_Ω_nx_d)=0, in contradiction with (<ref>). Then (Ω_n)_n is an equibounded sequence of convex sets. In view of Proposition <ref>(v), (Ω_n)_n converges (up to subsequences) in the sense of Hausdorff to a bounded convex set Ω; moreover, by Proposition <ref>(ii), the convergence is also in measure. In addition, thanks to (<ref>), the limit set Ω is not degenerate (i.e. it has positive measure) and ℋ^d-1(∂Ω)=lim_n→+∞ℋ^d-1(∂Ω_n)=p. In view of Proposition <ref> Ω is the required minimizer. We point out that in dimension d=2 Theorem <ref> proves the existence of open minimizers for Λ_h in the whole of ^d. Indeed, the problem min{Λ_h(Ω):Ω⊂^2 open, |Ω|<∞, P(Ω)≤ p} reduces to the minimization problem among convex sets with prescribed boundary length p. Moreover, if the perimeter constraint in Problem (<ref>) is replaced by a volume constraint, the existence of optimal shapes for problem min{Λ_h(Ω):Ω⊂^d open and convex, |Ω|≤ m}, can be proved by using the same arguments in this section. § FURTHER REMARKS AND OPEN PROBLEMS Once proved the existence of minimizers, some interesting questions arise about the regularity or the precise shape of the minimizers. For the first eigenvalue the two questions are related, as highlighted in <cit.> for the planar case with measure constraint: provided that an optimal shape is regular enough, it must coincide with the disk of given measure. 
In our framework we start from a better situation, since optimal planar sets are convex. In this case (and, more generally, in the case of Problem (<ref>)) it seems necessary at least to remove the possible corners to get more regularity of the boundary. Unfortunately, at the moment the cutting techniques that are known in the H^1-setting (see, for instance, <cit.> for the Dirichlet-Laplace eigenvalues, or <cit.> for the Robin-Laplace eigenvalues) do not seem to apply since they are based on surgery arguments that are not available in the H^2-setting. Another aspect which is worth to analyze is the regularity of the optimal sets in higher dimension. Due to the choice of the perimeter constraint, it would be interesting to understand if it was possible to see minimizers for Problem (<ref>) as quasi-minimizers of the perimeter in the sense of De Giorgi, as done in <cit.>, in order to obtain that optimal open sets have C^1,α boundary up to a singular set whose dimension is less then or equal to d-7. To this aim, it seems necessary to prove that optimal sets are bounded. Unfortunately, as highlighted at the beginning of Section <ref>, up to our knowledge, there are no available techniques to reach this goal at the moment. We conclude giving the following list of some open problems. Are optimal shapes for Problem (<ref>) smooth? Is it possible to remove the corners from the boundary of the convex minimizers in the planar case? Provided that an optimal shape for Problem (<ref>) is smooth enough, can we prove that it is a ball, at least in the planar case? Are optimal shapes for Problem (<ref>) bounded also in higher dimension? Do minimizers for Λ_h exist among open sets with prescribed perimeter in higher dimension? Are they bounded? *Declarations S.C. has been partially supported by the ACROSS project Cod. ARS01-00702. A.L. has been partially supported by the Italian M.U.R. PRIN: grant number 2017KC8WMB. The authors have no competing interests to declare that are relevant to the content of this article. adams1999functionbook title=Function spaces and potential theory, author=Adams, D.R. author=Hedberg, L. I., volume=314, year=1999, publisher=Springer Science & Business Media AFPbook author=Ambrosio, L., author=Fusco, N. author=Pallara, D., title=Functions of bounded variation and free discontinuity problems, series=Oxford Mathematical Monographs, publisher=The Clarendon Press, Oxford University Press, New York, date=2000, pages=xviii+434, AshBuc2003article title=On the isoperimetric inequality for the buckling of a clamped plate, author=Ashbaugh, S., author=Bucur, D., journal=Zeitschrift für angewandte Mathematik und Physik ZAMP, volume=54, number=5, pages=756–770, year=2003, publisher=Springer buconvexarticle title=Regularity of optimal convex shapes, author=Bucur, D., journal=Journal of Convex Analysis, volume=10, number=2, pages=501–516, year=2003, publisher=HELDERMANN VERLAG LANGER GRABEN 17, 32657 LEMGO, GERMANY bubbook title=Variational Methods in Shape Optimization Problems, author=Bucur, D., author=Buttazzo, G., year=2005, publisher=Birkhauser bbharticle, title=Minimization of λ_2(Ω) with a perimeter constraint, author=Bucur, D. author=Buttazzo, G. author=Henrot, A., journal=Indiana University mathematics journal, pages=2709–2728, year=2009, publisher=JSTOR bucvararticle title=Global minimizing domains for the first eigenvalue of an elliptic operator with non-constant coefficients, author=Bucur, D. 
author=Varchon, N., journal=Electronic Journal of Differential Equations, volume=36, year=2000, pages=1–10, publisher=Texas State University, Department of Mathematics citoarticle title=Existence and regularity of optimal convex shapes for functionals involving the robin eigenvalues, author=Cito, S., journal=J. Convex Anal, volume=26, pages=925–943, year=2019 devearticle title=Existence and regularity of minimizers for some spectral functionals with perimeter constraint, author=De Philippis, G. author=Velichkov, B., journal=Applied Mathematics & Optimization, volume=69, number=2, pages=199–231, year=2014, publisher=Springer evans2018measurebook title=Measure theory and fine properties of functions - Revised Edition, author=Evans, L. C. author=Gariepy, R. F., year=2015, publisher=CRC Press, Boca Raton, FL series = Textbooks in Mathematics, pages = xiv+299, ISBN = 978-1-4822-4238-6, GGSbook title=Polyharmonic boundary value problems: positivity preserving and nonlinear higher order elliptic equations in bounded domains, author=Gazzola, F., author=Grunau, H.-C., author=Sweers, G., year=2010, publisher=Springer Science & Business Media joh F. John: Extremum problems with inequalities as subsidiary conditions, in Studies and Essays Presented to R. Courant on his 60th Birthday, January 8, 1948, Interscience Publishers, Inc., New York, N. Y., pp. 187-204 (1948) lions84inproceedings title=The concentration-compactness principle in the Calculus of Variations. The locally compact case, part 1., author=Lions, P.-L., booktitle=Annales de l'Institut Henri Poincaré (C) Non Linear Analysis, volume=1, number=2, pages=109–145, year=1984, organization=Elsevier mazpraarticle title=Existence of minimizers for spectral problems, author=Mazzoleni, D. author=Pratelli, A., journal=Journal de Mathématiques Pures et Appliquées, volume=100, number=3, pages=433–453, year=2013, publisher=Elsevier sto16article title=Optimal shape of a domain which minimizes the first buckling eigenvalue, author=Stollenwerk, K., journal=Calculus of Variations and Partial Differential Equations, volume=55, number=1, pages=5, year=2016, publisher=Springer sto2021article title=On the optimal domain for minimizing the buckling load of a clamped plate, author=Stollenwerk, K., journal=arXiv preprint arXiv:2110.02545, year=2021
http://arxiv.org/abs/2307.01190v1
20230703175415
HNL see-saw: lower mixing limit and pseudodegenerate state
[ "Igor Krasnov" ]
hep-ph
[ "hep-ph" ]
[ Anna Skorobogatova August 1, 2023 ======================= § INTRODUCTION Heavy Neutral Leptons (HNL) are one of the primary candidates for the extension of the Standard Model (SM), originally suggested as a solution to the neutrino mass scale problem in a so-called seesaw mechanism <cit.> and since then found to be capable of contributing to the baryon asymmetry of the Universe <cit.> or playing the role of dark matter <cit.>. HNL are unchanged under SM gauge groups, serving as right-handed Majorana counterparts for neutrinos, which earns them their other name, sterile neutrinos. Our results can be applied to sterile neutrinos with masses ∼ eV that can explain reactor and gallium anomalies that continue to gather the interest of researchers <cit.>, but a comprehensive analysis of that area falls outside the scope of this work. We usually only consider scenarios where all sterile neutrinos have masses ≫ eV, and, therefore, prefer the usage of the term “heavy neutral leptons”. In this paper, we limit ourselves to seesaw type-I mechanism <cit.>. Its Lagrangian can be written as: ℒ=ℒ_SM + iN̅_Iγ^μ∂_μ N_I-(1/2 M_I N̅^c_I N_I + Ŷ_α IL̅_αH̃ N_I +h.c.), From the phenomenological point of view, HNL can be fully described by its mass M_I and mixing to the neutrino sector |U_α I|, where U = v/√(2)M_I^-1 Y, α∈{ e, μ, τ}. The original seesaw mechanism had an extremely large scale of HNL mass, close to Grand Unification scale M_I∼ 10^15 GeV, but it was shown that HNL with masses M_I ≲ 1 GeV but feeble mixing |U_α i| ≪ 1 can just as effectively serve that role (an overview of HNL searches is periodically done in the literature <cit.>). Mixing with the neutrino sector implies that, if kinematically allowed, HNL can be produced in weak interactions and later decay into SM particles. Therefore many collider and beam-dump experiments, such as E949<cit.>, NA62<cit.>, T2K<cit.>, TRIUMF<cit.>, PIENU<cit.> and others place limits on HNL mixing. Traditionally, the minimal seesaw mixing that is consistent with up-to-date active neutrino sector experimental data is taken to be a line |U_α i|^2 = m_ν/M_i, where active neutrino mass is approximated as m_ν≈√(Δ m_atm^2) <cit.>. Another approach is to consider specific scenarios where only certain elements of the mixing matrix are designated as important <cit.>. The goal of this article is to study the strict analytical minimal value for |U_α i|^2 and how it relates to existing and upcoming experimental bounds. We also find and study a specific limit, that, as we find, is usually realised near these experimental bounds, which we call the pseudodegenerate HNL case. In this article, we start with a general approach to HNL physics in section <ref>, continue with our study for the two HNL case in section <ref> and then move on to a more general three HNL case in section <ref>. We summarize our findings in the conclusion <ref> and provide some additional information on active neutrino mixing parameters and corresponding experimental data we use throughout the paper in an appendix <ref>. § STERILE SECTOR It is convenient to adopt Casas-Ibarra parametrization <cit.> of HNL mixing angle: U= v/√(2)M_I^-1 Y = iM_I^-1/2 R m_ν^1/2× diag{e^i α_1/2; e^i α_2/2; 1} U_PMNS^†. Here m_ν≡ diag{m_1, m_2, m_3 } and M_I ≡ diag{M_1, M_2, ... }. U_PMNS is Pontecorvo-Maki-Nakagawa-Sakata matrix and α_1, α_2 are active neutrino Majorana phases. Matrix R is a complex orthogonal matrix, R^T R =1. 
Active neutrinos gain their mass after the electroweak symmetry breaking via the seesaw mechanism (<ref>): m_ν = U M_I U^T. Because the HNL mass matrix has N eigenstates, mixing with HNL, in that case, provides N eigenvalues to the active neutrino mass matrix. Observed neutrino oscillation phenomena dictate that at least two active neutrinos have different nonzero masses, but don't fix neutrino mass scale <cit.>. Therefore, the minimal number of HNL that can explain all existing neutrino observations is two, which inevitably makes the lightest active neutrino massless. The case of three HNL is less restrictive on the active sector and is a bit more favored from the theoretical standpoint: for instance, many grand unification theories (such as SO(10) models, where HNL is also usually responsible for baryon asymmetry of the Universe  <cit.>) include the same number of right-handed neutrinos as they have left-handed neutrinos. It is possible to introduce four or more HNL into theory, but due to the overabundance of free parameters it is rarely done without the introduction of specific symmetries of some kind, and these cases fall outside the scope of this work. § 2 HNL CASE The neutrino mass hierarchy greatly affects two HNL case, as it determines which elements of U_PMNS matrix play a role in mixing with HNL. We provide more information on active neutrino sector parameters used in this work in the appendix <ref>. Matrix R takes slightly different forms depending on the hierarchy: m_1=0 ⇒ R= ( [ 0 c - ξ s; 0 s ξ c; ]); m_3=0 ⇒ R= ( [ c - ξ s 0; s ξ c 0; ]), where ξ = ± 1, c =cosω, s=sinω, and ω = x + i y ⊂ℂ is not restricted in any way. The resulting mixing matrix is: U = i ×( [ 1/√(M_1)Γ_11 1/√(M_1)Γ_12 1/√(M_1)Γ_13; 1/√(M_2)Γ_21 1/√(M_2)Γ_22 1/√(M_2)Γ_23 ]), where Γ_1i ≡ λ_1i c - λ_2i e^iψ s Γ_2i ≡ λ_1i s + λ_2i e^iψ c For normal hierarchy (m_1=0): λ_1i≡√(m_2) U^†_PMNS_2i; λ_2i≡√(m_3) U^†_PMNS_3i; e^i ψ= ± e^-iα_2/2 For inverted hierarchy (m_3=0): λ_1i≡√(m_1) U^†_PMNS_1i; λ_2i≡√(m_2) U^†_PMNS_2i; e^i ψ= ± e^iα_2-α_1/2 Phenomenologically, each HNL is usually described with its mass M_I and mixing to three neutrino flavors |U_Iα|^2. Matrix U can be multiplied by any complex phase without changing these expressions. In this way only one independent Majorana angle ψ remains, as described above, and multiplier i can be safely omitted. Two HNL case was extensively studied in literature <cit.>, and in this work, we only focus our attention on two aspects: the minimal HNL mixing with specified neutrino flavor and a specific case we call pseudodegenerate state. §.§ Lower limit for mixing with specific flavor We can choose parameters in such a way so that one of HNL doesn't mix with said flavor or even only one HNL would mix with a specified flavor at all. To get the minimal value above which must lay at least one mixing of HNL with a chosen flavor, we study the behaviour of the half-sum of mixing of two HNL (we start with the example of mixing with ν_e): U^2_e = 1/2 M_1 |Γ_11|^2 + 1/2 M_2 |Γ_21|^2. The extremum criteria are: ∂ U^2_e/∂ω = - 1/2 M_1Γ_11^*Γ_21 + 1/2 M_2Γ_21^*Γ_11 = 0 This equation can have three solutions: Γ_11 = 0 Γ_21 = 0 M_1 = M_2, |Γ_11| ≠ 0, |Γ_21| ≠ 0, Γ_11^*Γ_21 - Γ_11Γ_21^* = 0 Notice that |Γ_11^2+Γ_21^2| = |λ_11^2+λ_21^2e^2 i ψ| = |m_1 U^†^2_PMNS_11 e^iα_1 + m_2 U^†^2_PMNS_21 e^iα_2 + m_3 U^†^2_PMNS_31| doesn't depend on ω. 
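As a quick numerical sanity check of the construction above (a sketch with illustrative benchmark numbers, not a fit to data): for two HNL and normal hierarchy the Casas-Ibarra mixing matrix should close the see-saw relation, i.e. the singular values of the reconstructed light mass matrix must reproduce {0, √(Δ m^2_21), √(Δ m^2_31)}, and the combination |Γ_11^2+Γ_21^2| just introduced should come out independent of ω. The oscillation angles, phases, HNL masses and complex angle ω below are arbitrary illustrative values; with the 2×3 ordering of U used here the see-saw product is written as U^T M_I U.

```python
import numpy as np

# light masses in eV, normal hierarchy (m_1 = 0)
dm21, dm31 = 7.4e-5, 2.5e-3
m_light = np.array([0.0, np.sqrt(dm21), np.sqrt(dm31)])

# PDG-like PMNS matrix with illustrative angles (radians) and Dirac phase
th12, th13, th23, delta = 0.583, 0.150, 0.737, 3.40
s12, c12, s13, c13, s23, c23 = (np.sin(th12), np.cos(th12), np.sin(th13),
                                np.cos(th13), np.sin(th23), np.cos(th23))
U_pmns = np.array([
    [c12*c13, s12*c13, s13*np.exp(-1j*delta)],
    [-s12*c23 - c12*s23*s13*np.exp(1j*delta),  c12*c23 - s12*s23*s13*np.exp(1j*delta), s23*c13],
    [ s12*s23 - c12*c23*s13*np.exp(1j*delta), -c12*s23 - s12*c23*s13*np.exp(1j*delta), c23*c13]])

alpha2 = 1.2                                     # Majorana phase (illustrative)
A = np.diag([1.0, np.exp(1j*alpha2/2), 1.0]) @ U_pmns.conj().T
M_N = np.diag([1.0e9, 5.0e9])                    # HNL masses in eV (1 and 5 GeV)

def mixing(omega, xi=1.0):
    """2x3 Casas-Ibarra mixing matrix U_{I alpha} for normal hierarchy."""
    c, s = np.cos(omega), np.sin(omega)
    R = np.array([[0.0, c, -xi*s], [0.0, s, xi*c]])
    return 1j * np.diag(np.diag(M_N)**-0.5) @ R @ np.diag(np.sqrt(m_light)) @ A

U = mixing(0.7 + 1.5j)
m_rec = U.T @ M_N @ U                            # reconstructed light-neutrino mass matrix
print(np.sort(np.linalg.svd(m_rec, compute_uv=False)))   # ~ [0, sqrt(dm21), sqrt(dm31)]

m_ee = abs(m_light @ A[:, 0]**2)                 # |m_ee| built directly from A
for w in (0.7 + 1.5j, -1.1 + 0.4j):              # |Gamma_11^2 + Gamma_21^2| is omega-independent
    G = np.diag(np.diag(M_N)**0.5) @ mixing(w)   # rows are i * Gamma_{I alpha}
    print(abs(G[0, 0]**2 + G[1, 0]**2), m_ee)
```

Both choices of ω return the same number, equal to |m_ee| computed directly from the PMNS matrix and the Majorana phase.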
The expression |m_ee| = |m_1 U^†^2_PMNS_11 e^iα_1 + m_2 U^†^2_PMNS_21 e^iα_2 + m_3 U^†^2_PMNS_31| itself is also a known entity that appears as an effective neutrino mass in neutrinoless double beta decay searches <cit.>. When Γ_11=0 it automatically means that |Γ_21|^2 = |Γ_11^2 + Γ_21^2| = |m_ee|. When Γ_21=0 it automatically means that |Γ_11|^2 = |Γ_11^2 + Γ_21^2| = |m_ee|. Therefore, the first two cases right away give us the answer for the minimal value U^2_e_min, M_1 ≠ M_2 = |m_ee|/2M_max, where M_max=max{M_1, M_2}. We check that for degenerate masses the result is the same. If M_1 = M_2 ≡ M the mixing is: U^2_e = 1/2M(|Γ_11|^2 + |Γ_21|^2) = 1/2M((|λ_11|^2 + |λ_21|^2) cosh(2y) +2 [ λ_11^* λ_21 e^iψ] sinh(2 y)) We see that U_e^2 doesn't depend on x anymore and call the value that satisfies the ∂ U_e^2/∂ y=0 extremum condition “y_e”: (|λ_11|^2 + |λ_21|^2) sinh(2 y_e) + 2 [ λ_11^* λ_21 e^iψ] cosh(2 y_e) = 0 The mixing U_e^2 can therefore be written as: |U_e|^2 = 1/2M√((|λ_11|^2 + |λ_21|^2)^2 -4 ([ λ_11^* λ_21 e^iψ])^2 )cosh(2(y-y_e)) = |m_ee|/2M cosh(2(y-y_e)) The minimum is achieved at y=y_e, providing us with the same result: U^2_e_min = 1/2 M_max |m_1 U^†^2_PMNS_11 e^iα_1 + m_2 U^†^2_PMNS_21 e^iα_2 + m_3 U^†^2_PMNS_31| ≡|m_ee|/2M_max This expression holds true for both normal hierarchy (m_1=0) and inverted hierarchy (m_3=0). If the HNL masses are not degenerate, the minimal mixing is achieved when only one HNL mixes with the flavor (the heavier one, |U|^2_Ie = |m_ee|/M_I, U^2_ie = 0, i ≠ I). In the degenerate mass case, the mixing can be distributed between the HNL, so the best we can say is that at least one mixing is guaranteed to lie above the half-sum line. That allows us to draw the U_e_min^2(M_max) line, which effectively serves as a conservative seesaw lower limit. For the other two flavors, one finds similar expressions: U^2_μ_min = 1/2M_max |m_1 U^†^2_PMNS_12 e^iα_1 + m_2 U^†^2_PMNS_22 e^iα_2 + m_3 U^†^2_PMNS_32| ≡|m_μμ|/2M_max U^2_τ_min = 1/2M_max |m_1 U^†^2_PMNS_13 e^iα_1 + m_2 U^†^2_PMNS_23 e^iα_2 + m_3 U^†^2_PMNS_33| ≡|m_ττ|/2M_max The expressions |m_μμ| and |m_ττ|, much like |m_ee|, can appear in rare lepton number violating decays, such as K^- →π^+ μ^- μ^- or B^- → K^+ τ^- τ^- and other Δ L =2 processes <cit.>. These decays become possible only in the presence of a Majorana neutrino mass term and, therefore, don't have direct Standard Model analogues, but the process of their reconstruction, especially in the case of short-lived tau leptons in the final state, poses an experimental challenge. §.§ Pseudodegenerate state In the previous subsection, we studied the minimal value U_α_min^2 achievable for mixing with a specific flavor α. In this subsection, we study the relations between the mixings of different HNL with the same flavor. We also study the mixing of a specific HNL with different flavors in two distinct cases: close to the minimal mixing value |m_αα|/2M and for much greater values of mixing. It is convenient to introduce notional masses ℳ_i α = M_i |U_i α|^2. From one point of view, these are potentially experimentally restricted expressions that only depend on HNL mixing and mass. From another point of view, ℳ_i e = |Γ_i 1|^2, ℳ_i μ = |Γ_i 2|^2, ℳ_i τ = |Γ_i 3|^2 depend only on ω, ψ, δ and known active neutrino parameters.
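A rough numerical cross-check of this lower limit (again with illustrative inputs; the benchmark values and the optimizer are only a sketch): minimizing the half-sum U_e^2 over the complex angle ω should reproduce |m_ee|/2M_max, and at the minimum the notional masses ℳ_1e, ℳ_2e just introduced should end up distributed as ℳ_1e≈ 0, ℳ_2e≈ |m_ee|.

```python
import numpy as np
from scipy.optimize import minimize

dm21, dm31 = 7.4e-5, 2.5e-3
m2, m3 = np.sqrt(dm21), np.sqrt(dm31)
s12, c13, s13 = np.sin(0.583), np.cos(0.150), np.sin(0.150)
delta, alpha2 = 3.40, 1.2
M1, M2 = 1.0e9, 5.0e9                            # HNL masses in eV

lam11 = np.sqrt(m2) * s12 * c13                  # sqrt(m_2) (U_PMNS^dagger)_{21}
lam21 = np.sqrt(m3) * s13 * np.exp(1j*delta)     # sqrt(m_3) (U_PMNS^dagger)_{31}
ph = np.exp(-1j*alpha2/2)                        # e^{i psi}, plus sign, normal hierarchy

m_ee = abs(lam11**2 + (lam21*ph)**2)
target = m_ee / (2*max(M1, M2))                  # claimed minimum of U_e^2

def gammas(p):
    c, s = np.cos(p[0] + 1j*p[1]), np.sin(p[0] + 1j*p[1])
    return lam11*c - lam21*ph*s, lam11*s + lam21*ph*c    # Gamma_11, Gamma_21

def U2e(p):                                      # half-sum of the mixings with nu_e
    g11, g21 = gammas(p)
    return abs(g11)**2/(2*M1) + abs(g21)**2/(2*M2)

rng = np.random.default_rng(1)
runs = [minimize(lambda p: U2e(p)/target, rng.uniform(-2, 2, 2)) for _ in range(20)]
best = min(runs, key=lambda r: r.fun)
print(best.fun)                                  # -> ~1.0, i.e. min U_e^2 = |m_ee| / (2 M_max)

g11, g21 = gammas(best.x)
print(abs(g11)**2, abs(g21)**2, m_ee)            # notional masses: M_1e ~ 0, M_2e ~ |m_ee|
```

At the optimum the notional masses thus saturate the inequality ℳ_1e+ℳ_2e≥ |m_ee| that is derived next.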
These values are closely connected: ℳ_1 α + ℳ_2 α = |Γ_1 α|^2 + |Γ_2 α|^2 = (|λ_1 α|^2 + |λ_2 α|^2) cosh(2y) + 2 [ λ_1 α^* λ_2 α e^iψ] sinh(2 y) = |m_αα| cosh (2 (y - y_α)), ℳ_2 α - ℳ_1 α = |Γ_1 α|^2 - |Γ_2 α|^2 = (|λ_1 α|^2 - |λ_2 α|^2) cos(2x) - 2 [ λ_1 α^* λ_2 α e^iψ] sin(2 x) = |m_αα| cos (2 (x - x_α)), One can notice that for all x,y: |ℳ_2 α - ℳ_1 α| ≤ |m_αα| ℳ_1 α + ℳ_2 α ≥ |m_αα| One can see the available region of (ℳ_1 α, ℳ_2 α) plane limited by these inequations on the left panel of figure <ref>. That promotes us to study an important limit: |ℳ_2 α -ℳ_1 α| ≪ℳ_1 α + ℳ_2 α In this limit ℳ_1 α≈ℳ_2 α. As such, equation (<ref>) can be solved for variable y (as a function of, for example, ℳ_1 e): sinh(2 y) = - 2 ℳ_1 e 2 [ λ_11^* λ_21 e^iψ] ± (|λ_11|^2 + |λ_21|^2) √(4 ℳ_1 e^2 - |m_ee|^2)/|m_ee|^2 cosh(2 y) = 2 ℳ_1 e (|λ_11|^2 + |λ_21|^2) ± 2 [ λ_11^* λ_21 e^iψ] √(4 ℳ_1 e^2 - |m_ee|^2)/|m_ee|^2 Here, we can notice an important fact. Our seesaw limit (<ref>) in terms of notional mass simply turns into expression ℳ_i_maxα≥|m_αα|/2. We have drawn the current experimental limits, as well as the allowed regions of |m_ee| for both hierarchies (and m_lightest=0) on the right panel of figure <ref>. The mixing with electronic neutrino has stricter experimental bounds compared to mixing with two other flavors, so we present only it here. From this figure, one can conclude that current experimental limits lay at least a magnitude of order higher than the value of |m_ee|. Therefore, we can consider a bit stricter limit than (<ref>): ℳ_1 α + ℳ_2 α≫ |m_αα| ≥ |ℳ_2 α - ℳ_1 α| In this limit (<ref>) and (<ref>) take even simpler form: sinh(2 y) ≈ - 2 ℳ_1 e 2 [ λ_11^* λ_21 e^iψ] ± (|λ_11|^2 + |λ_21|^2)/|m_ee|^2≈∓cosh(2y) As a result, one can write: |U_i α|^2/|U_i tot|^2 = |λ_1α|^2 + |λ_2α|^2 ∓ 2 [ λ_1α^* λ_2α e^iψ]/∑_j=1^3 (|λ_1j|^2 + |λ_2j|^2 ∓ 2 [λ_1j^* λ_2j e^iψ]) where |U_i tot|^2 = |U_i e|^2 + |U_i μ|^2 + |U_i τ|^2. These expressions are the same for both HNL and don't depend on ω or even its notional mass, only on CP-violating phases δ and ψ. Notice, mixing |U_i α^2| can turn exactly to zero only when |m_αα|=0, which can't be achieved for m_lightest=0. That means that both HNL mix with each neutrino flavor, even if mixing with some flavors is greatly suppressed. We study the behaviour of |m_αα| for nonzero values of m_lightest more extensively in the three-HNL case. In limit (<ref>) expression (<ref>) take somewhat more cumbersome form: |U_i α|^2/|U_i tot|^2 = ((|λ_11|^2 + |λ_21|^2)(|λ_1α|^2 + |λ_2α|^2 ) - 4[ λ_11^* λ_21 e^iψ] [ λ_1α^* λ_2α e^iψ] ± 2 κ_i ([ λ_11^* λ_21 e^iψ] (|λ_1α|^2 + |λ_2α|^2 ) - (|λ_11|^2 + |λ_21|^2) [ λ_1α^* λ_2α e^iψ]) )× ((|λ_11|^2 + |λ_21|^2)∑_j=1^3(|λ_1j|^2 + |λ_2j|^2) - 4[λ_11^* λ_21 e^iψ] [∑_j=1^3(λ_1j^* λ_2j) e^iψ] ± 2 κ_i ([ λ_11^* λ_21 e^iψ] ∑_j=1^3(|λ_1j|^2 + |λ_2j|^2) - (|λ_11|^2 + |λ_21|^2) [∑_j=1^3(λ_1j^* λ_2j) e^iψ]) )^-1 where κ_i = √(1 - |m_ee|^2/4ℳ_i e^2), 0< κ < 1. Limit (<ref>) corresponds to κ→ 1, defining values ℳ_i e much greater than their minimal value. We have drawn the available ratios of mixing in figure <ref>. In that figure we use notion U_α^2/<U>^2 = U_1 α^2/U_1 tot^2 = U_2 α^2/U_2 tot^2. One can see that the allowed regions “shrink down” if one decreases the value of κ. Therefore, results obtained for limit (<ref>) can serve as a conservative boundary for limit (<ref>) as well. Note, unlike limit (<ref>) two HNL can have different values of κ, making the ratio U_e^2:U_μ^2:U_τ^2 differ between them. 
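As an illustration of the κ→ 1 pattern (a sketch using the same illustrative oscillation parameters as above; the sign written out is the one reached for a large positive imaginary part of ω), the closed-form flavor ratios can be compared directly with the ratios extracted from Γ_1α and Γ_2α:

```python
import numpy as np

dm21, dm31 = 7.4e-5, 2.5e-3
m2, m3 = np.sqrt(dm21), np.sqrt(dm31)
th12, th13, th23, delta, alpha2 = 0.583, 0.150, 0.737, 3.40, 1.2
s12, c12, s13, c13, s23, c23 = (np.sin(th12), np.cos(th12), np.sin(th13),
                                np.cos(th13), np.sin(th23), np.cos(th23))
U_pmns = np.array([
    [c12*c13, s12*c13, s13*np.exp(-1j*delta)],
    [-s12*c23 - c12*s23*s13*np.exp(1j*delta),  c12*c23 - s12*s23*s13*np.exp(1j*delta), s23*c13],
    [ s12*s23 - c12*c23*s13*np.exp(1j*delta), -c12*s23 - s12*c23*s13*np.exp(1j*delta), c23*c13]])

Udag = U_pmns.conj().T
lam1 = np.sqrt(m2) * Udag[1]                   # lambda_{1 alpha}, alpha = e, mu, tau
lam2 = np.sqrt(m3) * Udag[2]                   # lambda_{2 alpha}
ph = np.exp(-1j*alpha2/2)                      # e^{i psi}

# closed-form pseudodegenerate ratio (sign corresponding to Im(omega) > 0)
num = np.abs(lam1)**2 + np.abs(lam2)**2 + 2*np.imag(np.conj(lam1)*lam2*ph)
print(num / num.sum())

# direct evaluation for both HNLs at omega with a large imaginary part
w = 0.3 + 4.0j
c, s = np.cos(w), np.sin(w)
G1 = lam1*c - lam2*ph*s                        # Gamma_{1 alpha}
G2 = lam1*s + lam2*ph*c                        # Gamma_{2 alpha}
print(np.abs(G1)**2 / np.sum(np.abs(G1)**2))   # the three printed rows nearly coincide,
print(np.abs(G2)**2 / np.sum(np.abs(G2)**2))   # i.e. both HNLs share one flavor pattern
```

For this choice of ω the three printed rows agree to a fraction of a percent, illustrating why a single flavor pattern characterizes both HNL in this regime.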
Our results for κ = 1 mirror the results obtained in the much stricter “symmetric limit” <cit.>: M_1=M_2, |U_α1|^2=|U_α2|^2, both in pictures and in formulas. Symmetric limit is a specific case of our limit (<ref>) (ℳ_1 e=ℳ_2 e) that we call pseudodegenerate. We state that the ratio U_e:U_μ:U_τ is defined by the expression (<ref>) and is the same for both HNL in this limit. In other words, HNL doesn't need to be completely degenerate in their mass and mixing at the same time, instead for the mixing ratio to take these forms it is enough to have equal notional masses. That means that should we find any evidence of HNL mixing with one flavor, we would simultaneously get a hint of the preferable (for two HNL scenario) mixing range with the other two flavors as well as line |U_2 α|^2 = M_1 /M_2 |U_1 α|^2 as a function of M_2 along which we should search for evidence of the other HNL. On a side note, if one scans values for ℳ_1 e≪ℳ_2 e and ℳ_1 e≫ℳ_2 e one obtains not truncated “general case” areas figuring in literature <cit.>. Because notional masses are restricted by expression (<ref>), these “extra bits” inevitably are defined by the areas ℳ_1 e≲ |m_ee| and ℳ_2 e≲ |m_ee|, therefore being of little interest for HNL searches in the near future. We refer most of the further study of pseudodegenerate limit, such as the preferable areas for best fit δ value and so forth, to the already performed study of symmetrical limit <cit.>. Additionally, we have checked the dependence of minimal allowed values of |U_i tot^2| on the mass M_i in the presence of relevant experimental limits. Unremarkably, the experimental limits at the moment are too weak and far removed from the area of minimally allowed values. Therefore these lines of minimally allowed values simply take the shape |U_i tot^2|_min = m_j/M_i, where m_j = m_2 for normal hierarchy and m_j = m_1 for inverted hierarchy. To summarize our results for the case of two HNLs, we obtained that the minimal value of mixing with a given flavor α∈{ e, μ, τ} can be expressed as |m_αα|/2 M_HNL. Figure <ref> shows us that the minimally allowed values of mixing are, most likely, out of reach of experiments conducted in the near future. In turn, that means that should any evidence of HNL be found, two HNL scenario predicts with a high level of precision that ℳ_1 α = ℳ_2 α≫ |m_αα| and that ratio of mixing with different flavors is governed by expression (<ref>) and is the same for both HNLs. This would allow one to greatly restrict the mixing of the second HNL with said flavor and the mixing of the first HNL with other flavors. § 3 HNL CASE In this section, we show that similar results can be achieved for the three HNL case. We use the following parametrization of R: R= diag{± 1, ± 1, ± 1}×( [ c_2 c_1 c_2 s_1 s_2; -c_3 s_1 - s_3 s_2 c_1 c_3 c_1 - s_3 s_2 s_1 s_3 c_2; s_3 s_1 - c_3 s_2 c_1 -s_3 c_1 - c_3 s_2 s_1 c_3 c_2; ]), where c_i =cos z_i, s_i =sin z_i, i=1,3, and z_i ⊂ℂ are not restricted in any way. To further simplify our calculations, we are going to use the following variables much the same way we did it for the two HNLs case: A ≡ diag{e^i α_1/2; e^i α_2/2; 1}× U_PMNS^† λ_1i ≡ √(m_1) A_1i c_1 + √(m_2) A_2i s_1 λ_2i ≡ √(m_3) A_3i λ_3i ≡ - √(m_1) A_1i s_1 + √(m_2) A_2i c_1 Γ_1i ≡ λ_1i c_2 + λ_2i s_2 Γ_4i ≡ λ_2i c_2 - λ_1i s_2 Γ_2i ≡ λ_3i c_3 + Γ_4 s_3 Γ_3i ≡ Γ_4 c_3 - λ_3i s_3 One can note that λ_1i^2 + λ_3i^2 = m_1 A_1i^2 + m_2 A_2i^2 does not depend on z_1. 
In the same way Γ_1i^2 + Γ_4i^2 = λ_1i^2 + λ_2i^2 does not depend on z_2 and Γ_2i^2 + Γ_3i^2 = Γ_4i^2 + λ_3i^2 does not depend on z_3. The mixing matrix can be expressed as: U = i × diag{± 1, ± 1, ± 1}×( [ 1/√(M_1)Γ_11 1/√(M_1)Γ_12 1/√(M_1)Γ_13; 1/√(M_2)Γ_21 1/√(M_2)Γ_22 1/√(M_2)Γ_23; 1/√(M_3)Γ_31 1/√(M_3)Γ_32 1/√(M_3)Γ_33; ]) Note that, once again, we are interested only in the squared elements of mixing matrix U, therefore both the i and the signs matrix don't affect expressions |U_ij|^2. §.§ Lower limit for mixing with specific flavor First of all, in three HNL case we take the arithmetic mean of mixing of all three HNL with one flavor as the minimal value. The reasoning is the same as when we have taken half sum in two HNL case: we know with certainty that mixing of at least one HNL with that flavor is equal to or greater than that minimal value. U_α^2 ≡1/3(|U_1α|^2 + |U_2α|^2 + |U_3α|^2 ) = 1/3 M_1 |Γ_1α|^2 + 1/3 M_2 |Γ_2α|^2 + 1/3 M_3 |Γ_3α|^2 The z_3 extremum criterion is: ∂ U^2_α/∂ z_3 = 1/3 M_2Γ_2α^*Γ_3α - 1/3 M_3Γ_3α^*Γ_2α = 0 This equation can have three solutions: Γ_2α = 0 Γ_3α = 0 M_2 = M_3, |Γ_2α| ≠ 0, |Γ_3α| ≠ 0, Γ_2α^*Γ_3α - Γ_2αΓ_3α^* = 0 When Γ_2α=0 it automatically means that Γ_3α^2 = Γ_2α^2 + Γ_3α^2 = Γ_4α^2 + λ_3α^2. When Γ_3α=0 it automatically means that Γ_2α^2 = Γ_2α^2 + Γ_3α^2 = Γ_4α^2 + λ_3α^2. Therefore, in accordance with the definition (<ref>): U^2_α|_∂ U^2_α/∂ z_3=0, M_2≠ M_3 = 1/3 M_1 |Γ_1α|^2 + 1/3 M |Γ_4α^2 + λ_3α^2|, where M=max{M_2,M_3}. In other words, solutions for (<ref>) and (<ref>) are identical except for the factors 1/3M_2 and 1/3M_3 before the value |Γ_4α^2 + λ_3α^2|. If M_2 = M_3 ≡ M the eq. (<ref>) takes form: U_α^2 = 1/3 M_1 |Γ_1α|^2 + 1/3 M(|Γ_2α|^2 + |Γ_3α|^2) Simplifying expression |Γ_2α|^2 + |Γ_3α|^2 one can obtain: |Γ_2α|^2 + |Γ_3α|^2 = (|λ_3α|^2+|Γ_4α|^2) cosh(2 y_3) + 2 Im[λ_3αΓ_4α^*] sinh(2 y_3) Therefore, if M_2 = M_3, U_α^2 doesn't depend on x_3 anymore. The y_3 extremum condition is: (|λ_3α|^2+|Γ_4α|^2) sinh(2 y_3) + 2 [λ_3αΓ_4α^*] cosh(2 y_3) = 0 After simplifying the eq. (<ref>), one can notice that it can be reduced to the equation (<ref>). Taking into account (<ref>) and (<ref>), and solving them with respect to y_3, one obtains: (|Γ_2α|^2 + |Γ_3α|^2)|_∂ U^2_α/∂ z_3=0 = √((|λ_3α|^2+|Γ_4α|^2)^2 - 4([λ_3αΓ_4α^*])^2)≡ |Γ_4α^2 + λ_3α^2| Combining (<ref>) and (<ref>) one obtains the same result as in (<ref>). The next step is to find the z_2 extremum of eq. (<ref>): 1/M_1Γ_1α^*Γ_4α - 1/MΓ_1αΓ_4α e^- 2 i γ_2α = 0, where λ_3α^2 + Γ_4α^2 = |λ_3α^2 + Γ_4α^2| e^ 2 i γ_2α Similarly to eq. (<ref>) it has three solutions: Γ_1α = 0 Γ_4α = 0 M_1 = M, Γ_1α≡ |Γ_1α| e^iγ_1α, |Γ_1α| ≠ 0, |Γ_4α| ≠ 0, e^2 i(γ_1α-γ_2α) = 1 Taking into account that Γ_1α^2 + Γ_4α^2 = λ_1α^2 + λ_2α^2 we obtain from eq. (<ref>): Γ_1α = 0 ⇒ U^2_α|_∂ U^2_α/∂ z_3=0, Γ_1α = 0 = 1/3M |λ_1α^2 + λ_2α^2 + λ_3α^2| Γ_4α = 0 ⇒ U^2_α|_∂ U^2_α/∂ z_3=0, Γ_4α = 0 = 1/3M_1|λ_1α^2 + λ_2α^2 | + 1/3M |λ_3α|^2 Eq. (<ref>) doesn't depend on z_1. Minimizing eq. (<ref>) over z_1 we obtain solutions λ_1α = 0 and λ_3α=0 and a special case for M_1=M. Obviously, if M_1=M_max≥ M, then for the minimal U_α^2 we take λ_3α=0 and obtain U_α|_∂ U^2_α/∂ z_3=0, Γ_1α = 0, λ_3α = 0 = 1/3M_max |λ_1α^2 + λ_2α^2|. If M_1<M=M_max, then both solutions (1/3M_1| λ_2α |^2 + 1/3M |λ_3α|^2)_|_λ_1α=0 and (1/M_1|λ_1α^2 + λ_2α^2 |)_|_λ_3α=0 yield us expressions that are greater than or equal to the solution (<ref>). 
Taking into account that |λ_1α^2 + λ_2α^2 + λ_3α^2 | = |m_αα|, we obtain that U_α_min, M_1≠ M^2 = |m_αα|/3M_max. Let's check the final case M_1=M. The criterion for the minimum on z_1 for the expression (<ref>): 1/3Mλ_3α(c_2 Γ_1α^* - (s_2 Γ_4α + λ_1α)e^-2 i γ_2α)= 0 One can notice that Γ_4α s_2 + λ_1α = λ_2α s_2 c_2 - λ_1α s_2^2 + λ_1α = c_2 (λ_1α c_2 + λ_2α s_2) = c_2 Γ_1α. Therefore, remembering from eq. (<ref>) that e^2 iγ_2α = e^2 iγ_1α we obtain: 1/3Mλ_3α c_2 (Γ_1α^* -Γ_1α e^-2 iγ_1α)) ≡ 0 This expression is equivalent to zero and therefore the sole restriction is the eq. (<ref>): Γ_1α^* -Γ_1α e^-2 iγ_2α = 0 The expression for the minimal U^2_α, in this case, can be rewritten as: U^2_α= 1/3M(|Γ_1α^2| + |Γ_4α^2 + λ_3α^2| ) = 1/3M(Γ_1α^2 + Γ_4α^2 + λ_3α^2 ) e^-2 iγ_2α = 1/3M(λ_1α^2 + λ_2α^2 + λ_3α^2 ) e^-2 iγ_2α The expression remains real, and we can take the absolute value of both the left and the right parts of the equation above and once again obtain: U^2_e_min = 1/3M_max |m_1 A_11^2 + m_2 A_21^2 + m_3 A_31^2| ≡|m_ee|/3M_max U^2_μ_min = 1/3M_max |m_1 A_12^2 + m_2 A_22^2 + m_3 A_32^2| ≡|m_μμ|/3M_max U^2_τ_min = 1/3M_max |m_1 A_13^2 + m_2 A_23^2 + m_3 A_33^2| ≡|m_ττ|/3M_max Here M=max{M_1, M_2, M_3}. Absolutely the same trick is used to investigate the M_1=M case in (<ref>), giving us the same answer. Note that to obtain eq. (<ref>) we fixate the values of z_1, z_2, z_3. Therefore, choosing U^2_e=U^2_e_min we also affect the values of U^2_μ, U^2_τ. One has to check that the values of U^2_μ and U^2_τ obtained in this way lay in the experimentally unrestricted area. We address that point in the next section. §.§ Specifics of lower limit in 3 HNL case Taking in the result we obtained in eq. (<ref>), the value |m_αα| = |m_1 A_1α^2 + m_2 A_2α^2 + m_3 A_3α^2| becomes crucial to know the lower bound on see-saw allowed region. Remember, |m_ee| emerges in the neutrinoless double β decay. We show in Figures <ref>, <ref>, <ref> the allowed regions of |m_ee|, |m_μμ|, |m_ττ| as a function of lightest neutrino mass. Looking at these results one can see three types of behaviour: * “constant”, c_1 < |m_αα|< c_2. Limits are approximately unchanging for small values of m_lightest. * “cancelling”. Allows solution |m_αα|=0 while the upper limit is usually the transitional stage between constant and linear dependence on m_lightest. * “linear”, a× m_lightest < |m_αα|< m_lightest, where 0<a<1. This is a characteristic behaviour for big values of m_lightest, although m_ττ experiences cancelling for all values of mass above a certain threshold. In particular, for normal hierarchy |m_ee| experiences “cancelling” in region 2.3 meV< m_lightest <6.6 meV; |m_μμ| doesn't experience “cancelling” at all; |m_ττ| experiences “cancelling” for m_lightest >0.59 eV. For inverted hierarchy |m_ee| doesn't experience “cancelling”; |m_μμ| experiences “cancelling” in region 2.4 meV< m_lightest <71 meV; |m_ττ| experiences “cancelling” for m_lightest >90 meV. We have some constraints on neutrino masses from laboratory experiments such as KATRIN <cit.>, that place limits on the expression m_ν^2 = ∑_i |U_ie|^2 m_i^2 < 0.64 eV, or, in our terms, m_lightest < 0.8 eV (the result is the same for both hierarchies in the leading order of magnitude and is practically reduced to m_lightest≈√(m_ν^2)). The sum of masses of active neutrinos plays a significant role in cosmology and therefore can be constrained somewhat by CMB analysis. 
In particular, a more conservative Planck limit gives ∑ m_i < 0.26 eV <cit.>, which corresponds to m_lightest < 0.08 eV (the result is the same for both hierarchies in the leading order of magnitude). Other interpretations of Planck data can make this limit as low as ∑ m_i < 0.12 eV <cit.>, which corresponds to m_lightest < 0.03 eV for normal hierarchy and m_lightest < 0.017 eV for inverted hierarchy. Figures <ref>, <ref>, <ref> show that this limit disfavors the linear behaviour, but still allows cancelling to occur; only the |m_ττ| cancelling region for normal hierarchy becomes disfavored. One can see that, depending on the unknown value of m_lightest, our limits (<ref>), (<ref>), (<ref>) can take zero value. That result is most intriguing because it suggests that when such “cancelling” occurs, the see-saw mechanism allows the case where no HNL is mixing with a given flavor while mixing with the remaining flavors still provides the active neutrinos with three mass states. This assessment, though, doesn't consider HNL contributions to the effective neutrino masses m_ee, m_μμ, m_ττ and other non-leading order corrections to these expressions, which are extensively studied in the literature <cit.>. The study of the specifics of HNL physics in these “cancelling” areas falls outside of this work's scope but highlights the importance of the determination of the lightest active neutrino mass and of the active neutrino mass hierarchy for HNL physics. We have numerically checked the limits on mixing with a given flavor in “cancelling” regions in the presence of current experimental limits on mixing with other flavors to see if such cancelling might only be achieved in the already excluded areas. We have found that the mixing with one flavor in the “cancelling” area can be indistinguishable from zero (we have stopped the computation at |U_α|^2<10^-14) while the mixing with the other flavors takes allowed values, even if they can be close to the existing experimental limits. Therefore, “cancelling” areas are an interesting subject for a more in-depth study in the future. In figure <ref> we show the line |m_αα|/M_N for several benchmark values of m_lightest together with current experimental limits and the limit 0.05 eV/M_N usually adopted in the literature <cit.>. One can notice that these lines lie lower than previous estimates for the see-saw limit and may depend significantly on the value of m_lightest and on the hierarchy of active neutrinos. The determination of the hierarchy of active neutrino masses is expected to be achieved in the coming decade. These discoveries will be paramount for the determination of a lower boundary on the HNL mixing with active neutrinos in a see-saw model. If successful, the ongoing neutrinoless double beta decay experiments can help to determine the value of |m_ee| or at least significantly restrict it in case of no evidence. Because “cancelling” cannot occur simultaneously for the mixing with the electron neutrino and for the mixing with the muon neutrino at a given hierarchy and value of m_lightest, it is important to study the mixing with these two flavors in conjunction. Mixing with τ at present looks less promising: to rule out “cancelling” one needs a solid restriction on the lightest neutrino mass at the level m_lightest <59 meV for normal hierarchy and m_lightest <9 meV for inverted hierarchy. At present, only the more optimistic cosmological measurements can provide such limits, and they are somewhat model-dependent.
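The location of these “cancelling” windows can be reproduced with a few lines (a sketch; the oscillation parameters are fixed to illustrative best-fit-like values, whereas the figures above scan their allowed ranges): for fixed moduli t_i=m_i|U_ei|^2 the minimum of |m_ee| over the two free Majorana phases is max(0, 2 max_i t_i-∑_i t_i), so the cancellation window is simply the set of m_lightest for which the largest t_i does not exceed the sum of the other two.

```python
# Sketch: locate the "cancelling" window of |m_ee| for normal hierarchy by scanning
# m_lightest and minimising over the two free Majorana phases.
import numpy as np

dm21, dm31 = 7.4e-5, 2.5e-3                    # eV^2
s12sq, s13sq = 0.304, 0.0222                   # sin^2(theta_12), sin^2(theta_13)
c13sq = 1.0 - s13sq
Ue2 = np.array([(1 - s12sq)*c13sq, s12sq*c13sq, s13sq])   # |U_e1|^2, |U_e2|^2, |U_e3|^2

def mee_range(m1):
    m = np.array([m1, np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)])
    t = m * Ue2
    return max(0.0, 2*t.max() - t.sum()), t.sum()         # (min, max) over Majorana phases

grid = np.logspace(-4, -1, 400)                # m_lightest in eV
window = [m for m in grid if mee_range(m)[0] == 0.0]
print(f"|m_ee| can vanish for m_lightest in about [{min(window):.4f}, {max(window):.4f}] eV")
# -> roughly the few-meV window quoted above for the normal hierarchy
```

Replacing the |U_ei|^2 by |U_μ i|^2 or |U_τ i|^2 gives the corresponding windows for |m_μμ| and |m_ττ| by the same triangle-inequality argument.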
§.§ Pseudodegenerate state We can write down similar relations between notional masses ℳ_iα = M_i |U_iα|^2 and |Γ_1α|^2 , |Γ_2α|^2 , |Γ_3α|^2 to the ones we had in two-HNL case: ℳ_3α + ℳ_2α = |Γ_1α^2 - m_αα| cosh (2 (y_3 - ŷ_̂α̂)), ℳ_3α - ℳ_2α = |Γ_1α^2 - m_αα| cos (2 (x_3 - x̂_̂α̂)), |ℳ_1α - |m_αα|| ≤ |Γ_1α^2 - m_αα| ≤ ℳ_1α + |m_αα|, here x̂_̂ê, ŷ_̂ê don't depend on x_3, y_3. Mixing satisfies the following criteria: |ℳ_3 α - ℳ_2 α| - |m_αα| ≤ ℳ_1 α ≤ ℳ_2 α + ℳ_3 α + |m_αα| |ℳ_3 α - ℳ_1 α| - |m_αα| ≤ ℳ_2 α ≤ ℳ_1 α + ℳ_3 α + |m_αα| |ℳ_2 α - ℳ_1 α| - |m_αα| ≤ ℳ_3 α ≤ ℳ_1 α + ℳ_2 α + |m_αα| We present the available region of (ℳ_2 α, ℳ_3 α) plane limited by these inequations for a fixed value of ℳ_1 α in figure <ref>. Obviously, if ℳ_1 α < |m_αα| that would mean that ℳ_3 α≥ 0 for ℳ_2 α < ℳ_1 α + |m_αα| (as it can't take negative values). We are interested in the following limit: |ℳ_3 α- ℳ_2 α| ≪ℳ_2 α + ℳ_3 α, that allows us to say that ℳ_2 α≈ℳ_3 α≈1/2( ℳ_2 α + ℳ_3 α). The underlining idea is the same: pseudodegenerate state is achieved when in eqs. (<ref>), (<ref>) we have cosh (2 (y_3 - ŷ_̂α̂)) ≫cos (2 (x_3 - x̂_̂α̂)). It automatically holds true for cosh (2 (y_3 - ŷ_̂α̂)) ≫ 1 or cos (2 (x_3 - x̂_̂α̂)) ≪ 1. For three HNL case the “scale factor” in eqs. (<ref>), (<ref>) depends not only on |m_αα|, but also on ℳ_1 α. From the experimental point of view, we are once again mostly interested in area |m_αα| ≪ℳ_i α, which leads us to two distinct limits when pseudodegenerate state is achieved: ℳ_1 α≪ℳ_3 α + ℳ_2 α, |m_αα| ≪ℳ_2 α + ℳ_3 α ℳ_1 α≫ |ℳ_3 α - ℳ_2 α|, |m_αα| ≪ℳ_2 α + ℳ_3 α In these formulas, we haven't fixed the hierarchy of notional masses in any way. As we can see from eq. (<ref>), pseudodegenerate state is automatically achieved when one notional mass ℳ_i α is much less than the other two. When all three parameters are of the same magnitude ℳ_1 α∼ℳ_2 α∼ℳ_3 α we generally don't have pseudodegenerate state, but it can still be achieved by satisfying the criteria (<ref>). It makes possible to achieve the pseudodegenerate state for two smaller notional masses. Curiously, according to (<ref>), the bigger notional mass can't be greater than the sum of the other two, inevitably still placing all three notional masses at the same scale of magnitude in such a scenario. In the pseudodegenerate case, we can once again study the ratio |U_e|^2:|U_μ|^2:|U_τ|^2 that won't depend on HNL masses. It also no longer depends on z_3 and, simplifying the expressions, we can obtain similar formulas and graphs to the ones we have for two HNL case. In limit (<ref>) this ratio is the same for mixing with the second and the third HNL. The resulting expression is more complex in three HNL case compared to two HNL case, as it depends not only on the active sector's CP-violating phases δ, α_1α_2, but also on z_1, z_2: |U_i α|^2/|U_i tot|^2 = |λ_3α|^2 + |Γ_4α|^2 ∓ 2 [ λ_3α^* Γ_4α]/∑_j=1^3(|λ_3j|^2 + |Γ_4j|^2 ∓ 2 [λ_3j^* Γ_4j]) One can notice that |U_i α|^2/|U_i tot|^2 = 0 is only possible when |m_αα|=0, which, as we have studied, can be achieved for some values of the mass of the lightest active neutrino. Values of |Γ_1α|^2 are defined by the same parameters that we scan in eq. (<ref>). Therefore, if we fixate this ratio and the mass M_1, we obtain |U_1α|^2 = |Γ_1α|^2/M_1. We still have freedom of choice for z_3 parameters, so inequation (<ref>) doesn't restrict possible values for the masses M_2, M_3 in pseudodegenerate case. 
At the same time, |U_2α|^2 ≫ M_1/M_2 |U_1α|^2 and |U_3α|^2 ≫ M_1/M_3 |U_1α|^2 should still lie in regions that have not been restricted by existing experimental searches. This automatically holds in the case of the hierarchy M_1 ≪ M_2, M_3, |U_1α|^2 ∼ |U_2α|^2 ∼ |U_3α|^2, traditionally used in models where the lightest HNL plays the role of a dark matter particle, including the νMSM <cit.>. We present the resulting mixing in figure <ref>. Our results are consistent with a similar study <cit.>. Once again, we obtain our results in the less strict pseudodegenerate limit (<ref>). In terms of the available ratio |U_e|^2:|U_μ|^2:|U_τ|^2, one can see that for larger values of m_lightest almost all of the parameter space is available. For example, for m_lightest = 0.08 eV only a small area near |U_e|^2 ≈ 1 remains forbidden. Therefore, from the perspective of HNL searches, the question of the hierarchy and the value of m_lightest remains perhaps the most crucial factor, as they can greatly affect the available mixing ratio |U_e|^2:|U_μ|^2:|U_τ|^2. To summarize, we obtained results for the case of three HNLs similar to those we had for the case of two HNLs. The main difference for the lower boundary lies in its dependence on the unknown value of m_lightest. In special cases, the naive approach even allows the absence of mixing with a given flavor for all HNLs. Therefore, at present, no conservative lower bound can be placed on any of the mixings. This can change, however, with the determination of the active neutrino mass hierarchy: at that point, either the mixing with e or the mixing with μ would acquire a solid boundary |m_αα|/3 M_HNL, and reaching it would allow one to rule out the see-saw mechanism in the studied HNL mass region. This underscores the importance of simultaneous experimental studies of the mixing of HNLs with both of these flavors. At the same time, this boundary lies significantly lower than the existing experimental limits. The pseudodegenerate limit also exists in the three-HNL case. We find that it is usually reached when one HNL is effectively decoupled from the other two, but our results show that even so the resulting ratios of the mixing with different flavors differ greatly from the ones we have in the two-HNL case. Overall, the three-HNL case is significantly less restricted than the two-HNL case, but it can still provide viable hints for future HNL searches. § CONCLUSION In this work, we have studied the see-saw limits on the mixing of heavy neutral leptons with active neutrinos in the cases of two and three such HNLs. We obtained the minimal mixing with any given flavor for both of these cases and presented the allowed regions for the expressions m_ee, m_μμ, m_ττ, which are closely related to these limits. Coincidentally, the value |m_ee| turns out to be the effective neutrino mass that appears in neutrinoless double beta decay searches. In the three-HNL case we show that these expressions can become zero depending on the active neutrino parameters, calling for a more in-depth study of the next-order contributions that can affect the mixing. Moreover, we show that the currently adopted see-saw limits lie significantly higher than our most conservative estimates. Unfortunately, the three-HNL mixing limit predictions are similar to the two-HNL predictions but depend strongly on the value of m_lightest. As such, future experiments that can determine the active neutrino mass hierarchy and further limit the value of m_lightest will greatly reduce the uncertainty in several aspects of HNL searches.
We show that if we are to find an HNL signal in the near future, it would mean that HNL parameters are realised in a pseudodegenerate state in two HNL case. This state is realised for most of the usually studied variations of three HNL case as well when one HNL effectively decouples from the other two. For this state, the ratio of mixing with different flavors is greatly restricted, much more so for two HNL case than for three HNL. We show that this restriction criterion coincidences with the criterion already studied in the symmetrical limit <cit.>, favored from a theoretical point of view specific case of our pseudodegenerate limit. We proclaim that most of the results obtained in symmetrical limit can be extended to pseudodegenerate limit. We would like to express gratitude to Dmitry Gorbunov and Yury Kudenko for the valuable discussions and suggestions we had during this manuscript's preparation. This work was supported by the Russian Science Foundation RSF grant 21-12-00379. 10 Minkowski:1977sc P. Minkowski, μ→ eγ at a Rate of One Out of 10^9 Muon Decays?, https://doi.org/10.1016/0370-2693(77)90435-XPhys. Lett. 67B (1977) 421. Gell-Mann:1979vob M. Gell-Mann, P. Ramond and R. Slansky, Complex Spinors and Unified Theories, Conf. Proc. C 790927 (1979) 315 [https://arxiv.org/abs/1306.46691306.4669]. Mohapatra:1979ia R.N. Mohapatra and G. Senjanovic, Neutrino Mass and Spontaneous Parity Nonconservation, https://doi.org/10.1103/PhysRevLett.44.912Phys. Rev. Lett. 44 (1980) 912. Yanagida:1980xy T. Yanagida, Horizontal Symmetry and Masses of Neutrinos, https://doi.org/10.1143/PTP.64.1103Prog. Theor. Phys. 64 (1980) 1103. Schechter:1980gr J. Schechter and J.W.F. Valle, Neutrino Masses in SU(2) x U(1) Theories, https://doi.org/10.1103/PhysRevD.22.2227Phys. Rev. D 22 (1980) 2227. Fukugita:1986hr M. Fukugita and T. Yanagida, Baryogenesis Without Grand Unification, https://doi.org/10.1016/0370-2693(86)91126-3Phys. Lett. B 174 (1986) 45. Harvey:1990qw J.A. Harvey and M.S. Turner, Cosmological baryon and lepton number in the presence of electroweak fermion number violation, https://doi.org/10.1103/PhysRevD.42.3344Phys. Rev. D 42 (1990) 3344. Luty:1992un M.A. Luty, Baryogenesis via leptogenesis, https://doi.org/10.1103/PhysRevD.45.455Phys. Rev. D 45 (1992) 455. Plumacher:1996kc M. Plumacher, Baryogenesis and lepton number violation, https://doi.org/10.1007/s002880050418Z. Phys. C 74 (1997) 549 [https://arxiv.org/abs/hep-ph/9604229hep-ph/9604229]. Covi:1996wh L. Covi, E. Roulet and F. Vissani, CP violating decays in leptogenesis scenarios, https://doi.org/10.1016/0370-2693(96)00817-9Phys. Lett. B 384 (1996) 169 [https://arxiv.org/abs/hep-ph/9605319hep-ph/9605319]. Flanz:1996fb M. Flanz, E.A. Paschos, U. Sarkar and J. Weiss, Baryogenesis through mixing of heavy Majorana neutrinos, https://doi.org/10.1016/S0370-2693(96)01337-8Phys. Lett. B 389 (1996) 693 [https://arxiv.org/abs/hep-ph/9607310hep-ph/9607310]. Canetti:2012zc L. Canetti, M. Drewes and M. Shaposhnikov, Matter and Antimatter in the Universe, https://doi.org/10.1088/1367-2630/14/9/095012New J. Phys. 14 (2012) 095012 [https://arxiv.org/abs/1204.41861204.4186]. Dodelson:1993je S. Dodelson and L.M. Widrow, Sterile-neutrinos as dark matter, https://doi.org/10.1103/PhysRevLett.72.17Phys. Rev. Lett. 72 (1994) 17 [https://arxiv.org/abs/hep-ph/9303287hep-ph/9303287]. Gorbunov:2014ypa D. Gorbunov and I. Timiryasov, Testing νMSM with indirect searches, https://doi.org/10.1016/j.physletb.2015.02.060Phys. Lett. 
B 745 (2015) 29 [https://arxiv.org/abs/1412.77511412.7751]. Barinov:2021mjj V. Barinov and D. Gorbunov, BEST impact on sterile neutrino hypothesis, https://doi.org/10.1103/PhysRevD.105.L051703Phys. Rev. D 105 (2022) L051703 [https://arxiv.org/abs/2109.146542109.14654]. Serebrov:2023vfo A.P. Serebrov, R.M. Samoilov and O.M. Zherebtsov, The result of the Neutrino-4 experiment, sterile neutrinos, dark matter and the Standard Model, https://arxiv.org/abs/2306.099622306.09962. Abdullahi:2022jlv A.M. Abdullahi et al., The present and future status of heavy neutral leptons, https://doi.org/10.1088/1361-6471/ac98f9J. Phys. G 50 (2023) 020501 [https://arxiv.org/abs/2203.080392203.08039]. E949:2014gsn E949 collaboration, Search for heavy neutrinos in K^+→μ^+ν_H decays, https://doi.org/10.1103/PhysRevD.91.052001Phys. Rev. D 91 (2015) 052001 [https://arxiv.org/abs/1411.39631411.3963]. NA62:2020mcv NA62 collaboration, Search for heavy neutral lepton production in K^+ decays to positrons, https://doi.org/10.1016/j.physletb.2020.135599Phys. Lett. B 807 (2020) 135599 [https://arxiv.org/abs/2005.095752005.09575]. T2K:2019jwa T2K collaboration, Search for heavy neutrinos with the T2K near detector ND280, https://doi.org/10.1103/PhysRevD.100.052006Phys. Rev. D 100 (2019) 052006 [https://arxiv.org/abs/1902.075981902.07598]. Britton:1992xv D.I. Britton et al., Improved search for massive neutrinos in pi+ — e+ neutrino decay, https://doi.org/10.1103/PhysRevD.46.R885Phys. Rev. D 46 (1992) R885. PIENU:2017wbj PIENU collaboration, Improved search for heavy neutrinos in the decay π→ eν, https://doi.org/10.1103/PhysRevD.97.072012Phys. Rev. D 97 (2018) 072012 [https://arxiv.org/abs/1712.032751712.03275]. Gorbunov:2013dta D. Gorbunov and A. Panin, On the minimal active-sterile neutrino mixing in seesaw type I mechanism with sterile neutrinos at GeV scale, https://doi.org/10.1103/PhysRevD.89.017302Phys. Rev. D 89 (2014) 017302 [https://arxiv.org/abs/1312.28871312.2887]. Krasnov:2018odt I. Krasnov and T. Grigorin-Ryabov, Numerical estimate of minimal active-sterile neutrino mixing for sterile neutrinos at GeV scale, https://doi.org/10.1051/epjconf/201819103003EPJ Web Conf. 191 (2018) 03003 [https://arxiv.org/abs/1802.047281802.04728]. Casas:2001sr J.A. Casas and A. Ibarra, Oscillating neutrinos and μ→ e, γ, https://doi.org/10.1016/S0550-3213(01)00475-8Nucl. Phys. B 618 (2001) 171 [https://arxiv.org/abs/hep-ph/0103065hep-ph/0103065]. Workman:2022ynf Particle Data Group collaboration, Review of Particle Physics, https://doi.org/10.1093/ptep/ptac097PTEP 2022 (2022) 083C01. Atre:2009rg A. Atre, T. Han, S. Pascoli and B. Zhang, The Search for Heavy Majorana Neutrinos, https://doi.org/10.1088/1126-6708/2009/05/030JHEP 05 (2009) 030 [https://arxiv.org/abs/0901.35890901.3589]. Drewes:2018gkc M. Drewes, J. Hajer, J. Klaric and G. Lanfranchi, NA62 sensitivity to heavy neutral leptons in the low scale seesaw model, https://doi.org/10.1007/JHEP07(2018)105JHEP 07 (2018) 105 [https://arxiv.org/abs/1801.042071801.04207]. Drewes:2022akb M. Drewes, J. Klarić and J. López-Pavón, New benchmark models for heavy neutral lepton searches, https://doi.org/10.1140/epjc/s10052-022-11100-7Eur. Phys. J. C 82 (2022) 1176 [https://arxiv.org/abs/2207.027422207.02742]. KATRIN:2021uub KATRIN collaboration, Direct neutrino-mass measurement with sub-electronvolt sensitivity, https://doi.org/10.1038/s41567-021-01463-1Nature Phys. 18 (2022) 160 [https://arxiv.org/abs/2105.085332105.08533]. 
eBOSS:2020yzd eBOSS collaboration, Completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey: Cosmological implications from two decades of spectroscopic surveys at the Apache Point Observatory, https://doi.org/10.1103/PhysRevD.103.083533Phys. Rev. D 103 (2021) 083533 [https://arxiv.org/abs/2007.089912007.08991]. Planck:2018vyg Planck collaboration, Planck 2018 results. VI. Cosmological parameters, https://doi.org/10.1051/0004-6361/201833910Astron. Astrophys. 641 (2020) A6 [https://arxiv.org/abs/1807.062091807.06209]. Bezrukov:2005mx F.L. Bezrukov, nu MSM-predictions for neutrinoless double beta decay, https://doi.org/10.1103/PhysRevD.72.071303Phys. Rev. D 72 (2005) 071303 [https://arxiv.org/abs/hep-ph/0505247hep-ph/0505247]. Bolton:2022tds P.D. Bolton, F.F. Deppisch, M. Rai and Z. Zhang, Probing the Nature of Heavy Neutral Leptons in Direct Searches and Neutrinoless Double Beta Decay, https://arxiv.org/abs/2212.146902212.14690. Schubert:2022lcp J.L. Schubert and O. Ruchayskiy, Neutrinoless double-beta decay at colliders: interference between Majorana states, https://arxiv.org/abs/2210.112942210.11294. Akhmedov:1998qx E.K. Akhmedov, V.A. Rubakov and A.Y. Smirnov, Baryogenesis via neutrino oscillations, https://doi.org/10.1103/PhysRevLett.81.1359Phys. Rev. Lett. 81 (1998) 1359 [https://arxiv.org/abs/hep-ph/9803255hep-ph/9803255]. Asaka:2011pb T. Asaka, S. Eijima and H. Ishida, Mixing of Active and Sterile Neutrinos, https://doi.org/10.1007/JHEP04(2011)011JHEP 04 (2011) 011 [https://arxiv.org/abs/1101.13821101.1382]. Chrzaszcz:2019inj M. Chrzaszcz, M. Drewes, T.E. Gonzalo, J. Harz, S. Krishnamurthy and C. Weniger, A frequentist analysis of three right-handed neutrinos with GAMBIT, https://doi.org/10.1140/epjc/s10052-020-8073-9Eur. Phys. J. C 80 (2020) 569 [https://arxiv.org/abs/1908.023021908.02302]. § ACTIVE SECTOR PARAMETRIZATION Experiments provide us with differences in the squared mass of active neutrinos <cit.>: [ best fit 3σ range; Δ m_21^2 [ 10^-5 eV^2] = 7.39, 6.79 - 8.01; |Δ m_32^2| [ 10^-3 eV^2] = 2.449, 2.358 - 2.544; (2.509), (2.416 - 2.603) ] where, Δ m_21^2 = m_2^2 - m_1^2 and Δ m_32^2 = m_3^2 - m_2^2. It is usually defined that m_2>m_1 (just for convenience), but we don't know which of m_1, m_3 is smaller. This results in two viable scenarios for neutrino mass hierarchy: normal (m_1<m_2<m_3) and inverted (m_3<m_1<m_2). To date, there is no viable method to determine the mass of the lightest neutrino m_lightest, yet there is hope that neutrinoless double beta decay searches can put some limits on this parameter <cit.>. Cosmology provides some insights on the upper limit of the value of the sum of three active neutrino masses and, consequently, on the value of m_lightest, but can be a subject of theoretical discourse in various models of physics beyond the standard model. The role of m_lightest in active-sterile mixing is examined in more detail in section <ref>. For the normal hierarchy, we have: [ m_1 = m_lightest; m_2 = √(m_lightest^2 + Δm_21^2); m_3 = √(m_lightest^2 + Δm_21^2 + |Δm_32^2|), ] and for the inverted hierarchy: [ m_3 = m_lightest; m_1 = √(m_lightest^2 - Δm_21^2 + |Δm_32^2| ); m_2 = √(m_lightest^2 + |Δm_32^2| ). 
] Of utmost importance is the relation of flavor basis to mass basis, which is described using Pontecorvo-Maki-Nakagawa-Sakata matrix U_PMNS multiplied by Majorana phase matrix: ( [ ν_1; ν_2; ν_3; ]) = ([ e^i α_1/2 0 0; 0 e^i α_2/2 0; 0 0 1; ]) U_PMNS^†([ ν_e; ν_μ; ν_τ; ]), Here we use <cit.>: U_PMNS^†= ([ c_13 c_12 -c_23 s_12 - s_23 s_13 c_12 e^-i δ s_23 s_12 - c_23 s_13 c_12 e^-i δ; c_13 s_12 c_23 c_12 - s_23 s_13 s_12 e^-i δ -s_23 c_12 - c_23 s_13 s_12 e^-i δ; s_13e^i δ s_23 c_13 c_23 c_13; ]), where c_ij and s_ij stand for cosθ_ij and sinθ_ij, with i,j=1,2,3, i<j. The currently adopted values of these parameters <cit.>): [ sin^2 θ_12 = 0.310, 0.275 - 0.350; sin^2 θ_23 = 0.558, 0.427 - 0.609; (0.563), (0.430 - 0.612); sin^2 θ_13 = 0.02241, 0.02046 - 0.02440; (0.02261), (0.02066 -0.02461). ] At times, it is more convenient to use angles θ_ij themselves: [ θ_12 = 33.82^∘ ; θ_23 = 48.3^∘ (48.6^∘); θ_13 = 8.61^∘ (8.65^∘) ] There is still little evidence to prefer a certain value of δ, although there are some hints <cit.>. Majorana CP-violating phases α_1, α_2, while being vital part of most theories with heavy neutral leptons, so far have yet to be proven to exist, let alone be limited in any way. As a general rule, we treat all three phases as free parameters, δ∈ [0,2 π), α_1 ∈ [- π, π), α_2 ∈ [- π, π).
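For completeness, the parametrization above is straightforward to assemble numerically. The Python sketch below builds U_PMNS from the best-fit angles, attaches the Majorana phase matrix, and evaluates the flavor-diagonal combinations m_αα = ∑_i U_αi^2 m_i (with |m_ee| the effective mass relevant for neutrinoless double beta decay); this assumes the standard convention in which the Majorana phases enter as a right-multiplied diagonal matrix, and the exact sign and ordering conventions should be matched against the definitions above. The function name u_pmns is ours, and the normal hierarchy with m_lightest = 0 and vanishing phases is chosen purely for illustration.

import numpy as np

def u_pmns(th12, th23, th13, delta):
    """PDG parametrization of the PMNS matrix (Dirac phase only)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                         s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,  c12 * c23 - s12 * s23 * s13 * ep,  s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep,  c23 * c13],
    ])

deg = np.pi / 180.0
U = u_pmns(33.82 * deg, 48.3 * deg, 8.61 * deg, delta=0.0)
a1, a2 = 0.0, 0.0   # Majorana phases, free parameters in the text
U = U @ np.diag([np.exp(0.5j * a1), np.exp(0.5j * a2), 1.0])

# Normal hierarchy with m_lightest = 0 (masses in eV).
m = np.array([0.0, np.sqrt(7.39e-5), np.sqrt(7.39e-5 + 2.449e-3)])
m_eff = np.abs(U**2 @ m)   # (|m_ee|, |m_mumu|, |m_tautau|)
print({k: round(float(v) * 1e3, 2) for k, v in zip(["|m_ee|", "|m_mumu|", "|m_tautau|"], m_eff)})  # in meV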
http://arxiv.org/abs/2307.02528v1
20230705180000
Phenomenology of bond and flux orders in kagome metals
[ "Glenn Wagner", "Chunyu Guo", "Philip J. W. Moll", "Titus Neupert", "Mark H. Fischer" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Department of Physics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, Germany Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, Germany Department of Physics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Department of Physics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Despite much experimental and theoretical work, the nature of the charge order in the kagome metals belonging to the family of materials AV_3Sb_5 (A=Cs,Rb,K) remains controversial. A crucial ingredient for the identification of the ordering in these materials is their response to external perturbations, such as strain or magnetic fields. To this end, we provide a comprehensive symmetry classification of the possible charge orders in kagome materials with a 2×2 increase of the unit cell. Motivated by the experimental reports of time-reversal-symmetry breaking and rotational anisotropy, we consider the interdependence of flux and bond orders. Deriving the relevant Landau free energy for possible orders, we study the effect of symmetry-breaking perturbations such as strain and magnetic fields. Our results, thus, provide a roadmap for future tests of these intricate orders. Phenomenology of bond and flux orders in kagome metals Mark H. Fischer August 1, 2023 ====================================================== § INTRODUCTION Starting with the formation of crystalline materials, spontaneous symmetry breaking is a concept foundational to condensed matter physics and guides our categorical understanding of phases of matter <cit.>. Charge order can break spatial symmetries in the form of nematicity (rotational symmetry) or density waves (translational symmetry), superconductors break particle number conservation, magnets break spin-rotation and time-reversal symmetry (TRS). Particularly interesting situations arise, when the order comprises multiple degenerate components, in other words when the irreducible representation (irrep) corresponding to the order is multi-dimensional or when order parameters from different irreps are (accidentally) almost degenerate. On the one hand, this situation can lead to additional breaking of symmetries: in the case of a superconductor, a complex superposition of two order-parameter components leads to a chiral superconductor spontaneously breaking TRS <cit.>. On the other hand, multi-component orders can also help restore symmetries: A q⃗ = (π,0) charge order breaks both translation and rotation symmetry, while a superposition of (π,0) and (0,π) charge order describes checkerboard order and restores rotational symmetry <cit.>. A charge order of unconventional nature has recently been identified in a family of quasi-two-dimensional metals AV_3Sb_5 (A=Cs,Rb,K), whose main structural motif is a kagome lattice of vanadium atoms <cit.>. The charge-ordered phase, which sets in at about 70–100 K, has been studied with a variety of experimental techniques including angle-resolved photoemission spectroscopy <cit.>, scanning tunneling spectroscopy <cit.>, nuclear magnetic resonance <cit.>, X-ray scattering <cit.>, muon spin-relaxation measurements <cit.>, thermal <cit.> and electrical <cit.> transport, as well as magneto-optical Kerr effect <cit.>. Broad consensus exists that the charge order creates a 2×2 superstructure within the plane, whereas the out-of-plane ordering (such as 2×2×1, 2×2×2 or 2×2×4) is still under debate <cit.>. 
In addition, controversial results as to whether or not charge order spontaneously breaks time-reversal and/or rotational symmetry have been reported. Several phenomena usually associated with spontaneous TRS breaking (TRSB) can be induced by a (weak) magnetic field, such as a giant anomalous Hall effect <cit.> and non-vanishing Kerr rotations <cit.>. However, whether the system breaks time-reversal symmetry spontaneously, in other words at zero magnetic field, has been challenged for instance by the absence of a Kerr effect in this regime <cit.>. Furthermore, although there are some reports of TRSB at the charge-ordering temperature, there is a large increase in the respective signals around 30–50 K. Finally, while there are many reports of (rotational) anisotropy in these materials <cit.>, a recent study challenges these reports <cit.>. The conflicting experimental reports of anisotropy and spontaneous TRSB naturally raise the question whether the experiments themselve change the state probed and if so, how the different ordering possibilities can still be distinguished experimentally. Importantly, the interplay of different components of the order parameter is crucial, since charge-density-wave order on the kagome lattice with an ordering vector Q⃗ = M⃗= Γ M forms a three-dimensional irrep due to the three inequivalent M points. A single order-parameter component, corresponding to a single M-point ordering vector, breaks the rotational symmetry, whereas an equal superposition of all three could restore the rotational symmetry of the lattice. Furthermore, a complex order, arising due to the superposition of (almost degenerate) bond order and flux order would lead to TRSB. To understand the physics of charge density waves in the kagome systems better and discuss their coupling to external perturbations that could help identify them, such as magnetic fields or strain, the proper symmetry of the ordering possibilities need to be studied first. Such an analysis then allows for the derivation of an effective Landau description. While both electronic <cit.> and phonon <cit.> instabilities have been studied as mechanisms for the charge ordering in the literature, such an approach has the advantage of being agnostic to the microscopic mechanism of the ordering. Here, we present a comprehensive symmetry classification of all in-plane charge orders—including onsite charge modulations, bond orders, and flux/orbital current orders—on the kagome lattice with 2× 2 unit cell following the scheme introduced in Ref. <cit.>. This classification scheme provides a transparent way to develop an effective theory in the spirit of a Landau free energy for all charge orders, their coupling to each other and to magnetic fields and strain. This theory is applicable to, but not limited to, AV_3Sb_5. Since flux and bond orders both renormalize electron hopping integrals, we consider them intertwined. However, with the two generically transforming as different irreps, we treat them as different, yet coupled, order parameters, instead of one single complex order parameter. We therefore do not adopt the usual presumption that one order parameter dominates at and near the phase transition, but we study the interplay of one flux and one bond order parameter. This consideration results in a few scenarios, which can be sharply distinguished experimentally through their behavior in a magnetic field and by the presence or absence of anisotropy. § SUMMARY OF RESULTS We start by considering only in-plane ordering. 
Motivated by experiments that have established a 2×2 in-plane increase in the unit cell size, we consider all possible charge, bond, and flux orders on the kagome lattice arising from nearest-neighbor interactions. In Sec. <ref>, we use a group theory analysis to classify all these orders within the framework introduced by Venderbos <cit.>. The translational-symmetry-breaking orders can be classified in four different irreps of the (enlarged) symmetry group, labelled F_1,…,F_4. The irreps are three-dimensional and F_1,2 are even under C_2 while F_3,4 are odd. Bond order Δ⃗ can fall in any of the four translationally symmetry breaking irreps, while flux order Δ⃗' has to either fall in the F_2 or F_4 irrep. The observation of time-reversal-symmetry breaking shows that flux order is present (although whether or not a small magnetic field is necessary to establish the flux order is not clear). Bond and flux order naturally couple to one another and in Sec. <ref>, we therefore consider Landau theories that include both orders. The second and fourth order terms in the free energy are identical regardless of which orders are combined, since these terms are always C_2 symmetric. However, the third-order terms can differ, since they are not guaranteed to respect C_2. If the bond order is even under C_2, then a term of the form Δ^3 is allowed. In addition the bond and flux order can couple via ΔΔ'^2 (the C_2 eigenvalue of the flux is irrelevant here, since the flux order needs to appear squared for the free energy to respect time-reversal symmetry). To gain intuition about the Landau free energy, in Sec. <ref> we start by discussing the simpler case where only charge order is present. We discuss separately the cases where a third-order term is present or absent, since this significantly impacts the phase diagram. In Sec. <ref>, we then discuss the more complicated case with both charge and flux order present. We again derive the phase diagrams for the cases with and without the third-order term. In the phase diagrams, we use the relative critical temperature of the bond and flux orders as a tuning parameter. We obtain three types of phases: Either only Δ⃗ or only Δ⃗' is present, or both are present simultaneously. The latter two phases break time-reversal symmetry. These different phases may or may not spontaneously break the C_6 symmetry depending on the specific irreps of the respective orders. One way to distinguish the different possible order parameters is to consider the impact of symmetry-breaking perturbations. In Sec. <ref>, we therefore investigate the effects of strain and an out-of-plane magnetic field. The lowest-order coupling to such a magnetic field B takes the form BΔΔ'. Since the magnetic field is odd under in-plane mirror symmetries, this term is only allowed when the product ΔΔ' is odd under these mirrors. Up to this point, the Landau theory considered was very general and relied only on well-established experimental facts on the kagome metals. In Sec. <ref>, we review the experimental situation in more detail and suggest that the most likely order parameter combination to describe the experiments is a F_1 bond order with a F_2 flux order. Note, however, that while this conclusion relies on experimental input that is less well-established, we emphasize that the general discussion does not. Finally, in Sec. 
<ref> we propose several experiments including elastoresistance, STM and resonant ultrasound spectroscopy that in combination with the Landau analysis would allow to clearly establish the type of ordering in the kagome metals. § SYMMETRY ANALYSIS In the following, we consider a single kagome layer with point group C_6v [Note that the full three-dimensional point group is D_6h=C_6v⊗σ_h with σ_h denoting the mirror z↦ -z, but we restrict our considerations to C_6v for simplicity.]. Further, we study translational symmetry breaking arising from M-point ordering vectors in the kagome Brillouin zone. This ordering vector arises due to three van-Hove singularities (VHS) of the band structure of the kagome lattice at the three M points <cit.>: M⃗_1,3=π/a √(3)( ±√(3), 1), M⃗_2=2 π/a √(3)(0,-1), where a is the lattice constant. Close to the van Hove filling, the low-energy physics is dominated by scattering between these VHS, with momentum transfers corresponding to momentum differences between the M-points. These nesting vectors are also M-point vectors, since M⃗_2-M⃗_3 ≡M⃗_1 (up to a reciprocal lattice vector) and the order parameters consist of superpositions of waves with wavevectors M⃗_i. Therefore, the order parameter in the unit cell centered at R⃗ will be a linear superposition of the components of v⃗(R⃗)=([ cosM⃗_1 ·R⃗; cosM⃗_2 ·R⃗; cosM⃗_3 ·R⃗ ]) leading to an increase in the size of the unit cell by 2×2. The corresponding Bragg peaks are indeed seen experimentally in X-ray diffraction <cit.> and STM <cit.>. The possible bond and flux orders have been classified previously in Refs. Feng and Christensen2 regarding the point group D_6h. We here choose a different route by following the classification scheme introduced in Ref. Venderbos and restricting ourselves to the point group C_6v for simplicity. In this real-space scheme, the relevant symmetry group is enlarged to C_6v”', where the primes indicate that the point group of the kagome lattice, C_6v, contains three additional elements corresponding to translations t⃗_i (with i=1,2,3) that describe the translational symmetry breaking. The enlarged unit cell as well as the symmetry operations forming the group C_6v”' are shown in Fig. <ref>. The group C_6v”' has four one-dimensional and two two-dimensional irreps that are trivial under translation and are thus simply analogous to the irreps of C_6v. There are also four three-dimensional irreps F_i, which, in contrast, are non-trivial under translations. Their dimensionality directly follows from the fact that there are three M points in the Brillouin zone. Table <ref> presents the character table for the irreps of C_6v”'. There are three fundamental types of order, which, following the nomenclature adapted in Ref. Venderbos, we denote as site, bond, and flux order, with the latter two being real and imaginary renormalizations of the hopping integrals. We deduce which irreps of C_6v”' these orders on the kagome lattice decompose into, considering only nearest-neighbor order for the bond and flux order. To do so, we consider the permutation matrices 𝒫 describing the action of the symmetry operators on the sites, bonds and fluxes. These permutation matrices are representations of C_6v”' and, using the character Tab. <ref>, they can be decomposed into irreps, see appendix <ref>. Site order corresponds to a modulation of ⟨ a^†_iσ(R⃗) a_iσ(R⃗)⟩, the local electron density, where a^†_iσ(R⃗) creates an electron with spin σ=↑,↓ at a position 𝐑+δ⃗_i. 
Here, R⃗ points to the unit cell center and δ_i is the position of the sublattice site i=A,B,C with respect to R⃗. Denoting the order-parameter components on each sublattice by a three-dimensional vector s⃗_i, the site order takes the form ⟨ a^†_iσ(R⃗) a_iσ(R⃗)⟩=s⃗_i·v⃗(R⃗). With 12 sites in the 2×2–increased unit cell, the representation is 12 dimensional. The possible site orders then decompose into the following irreps 𝒫_s=A_1+E_2+F_1+F_3+F_4. A general bond order corresponds to modulations ⟨ a^†_Aσ(R⃗) a_Bσ(R⃗)⟩ =w⃗_1·v⃗(R⃗) ⟨ a^†_Aσ(R⃗) a_Cσ(R⃗)⟩ =w⃗_2·v⃗(R⃗) ⟨ a^†_Aσ(R⃗) a_Bσ(R⃗-t⃗_3)⟩ =w⃗_3·v⃗(R⃗) ⟨ a^†_Aσ(R⃗) a_Cσ(R⃗+t⃗_2)⟩ =w⃗_4·v⃗(R⃗) ⟨ a^†_Bσ(R⃗) a_Cσ(R⃗+t⃗_2)⟩ =w⃗_5·v⃗(R⃗) ⟨ a^†_Cσ(R⃗) a_Bσ(R⃗-t⃗_3)⟩ =w⃗_6·v⃗(R⃗). We refer to real components of these w⃗_i as bond order. There are 24 bonds within the 2× 2 unit cell and we find the bond order decomposition 𝒫_b=A_1+B_1+E_1+E_2+2F_1+F_2+2F_3+F_4. Finally, flux orders imply an imaginary component of the w⃗_i [An exception would be specific gauges for flux orders with exactly 0 or π flux per plaquette. However, here we are interested in phases with generic values of flux.]. Equivalently, one can think of the resulting flux threading through the plaquettes. There are 12 plaquettes within the 2× 2 enlarged unit cell with possible orders decomposing into the irreps 𝒫_ϕ=2A_2'+ B_2'+2 F_2'+F_4'. In addition to spatial symmetries, the flux order breaks TRS, which we denote by a prime. Figure <ref> shows examples of different types of translational symmetry breaking bond and flux orders for each of the four three-dimensional irreps. § LANDAU THEORY Having categorized the possible site, bond, and flux orders, we can construct the free energy within Landau theory for different order parameters and their combinations. Having constructing such free energies then allows us to map out possible phase diagrams of the kagome metals, which capture the interplay of these order parameters, before studying the effect of external perturbations. The different responses to external perturbations can provide distinguishing experimental signatures. Previous theoretical work has already studied several types of Landau free energies for the kagome metals. In particular, Ref. Christensen studied coupling between an M-point and an L-point order, and Ref. YangTheory studied the coupling between an imaginary M-point order and superconductivity. Finally, Refs. Denner,Grandi,Lin,Park,tazai2022chargeloop,Christensen2 studied coupling between real and imaginary M-point orders, which is the case we aim to study here. Such a combination of orders is motivated by the observation of TRSB in experiments, which indicates that flux order is present at least under certain circumstances. Furthermore, since flux and bond order are the imaginary and real components, respectively, of the same (nearest-neighbor) order parameter, it is natural to consider a theory including both. However, most of the previous literature considered the bond order parameter to be a complex number, hence mixing bond and flux order of our classification. While this might be physically motivated and suggests a proximity of one order to the other, the real and imaginary components generally transform as different irreps, which manifests itself in different critical temperatures in these combined theories. We, thus, follow a different approach and in the following use the results from the order-parameter classification of Sec. <ref> with the multiplication Tab. 
<ref> to write a family of free energies for two coupled order parameters transforming under two (different) three-dimensional irreps: a time-reversal symmetric F_i and a TRS breaking F_j'. In particular, with the free energy transforming as a scalar, only combinations of irreps, whose decomposition includes A_1, can appear. For this purpose, we first derive the Landau free energy ℱ[Δ⃗, Δ⃗'] up to fourth order in the order parameters Δ⃗ and Δ⃗', before studying their coupling to strain and (out-of-plane) magnetic fields. §.§ Homogeneous M-point free energy The quadratic terms in the free energy take the same form, irrespective of the irrep combination. We include the temperature dependence to the quadratic coefficients ℱ^(2)[Δ⃗, Δ⃗']= α(T-T_ c)(Δ_1^2+Δ_2^2+Δ_3^2) +α'(T-T_ c')(Δ_1'^2+Δ_2'^2+Δ_3'^2). The parameter T_ c (T_ c') is the temperature, at which the coefficient for the quadratic term in Δ⃗ (Δ⃗') changes sign. While this sign change signals that the solution with vanishing order parameter becomes unstable, this temperature does not necessarily coincide with the critical temperature at which the order parameter acquires a non-zero value. The third-order terms, as well as the coupling between the order parameters can shift the critical temperature away from T_ c (T_ c'). The third-order terms may or may not be allowed, depending on the transformation properties of the irreps, in particular, their transformation behavior under C_2. In Tab. <ref>, we list the allowed third-order terms for the different order-parameter combinations. When the terms are allowed, they take the form <cit.> ℱ^(3,0)[Δ⃗, Δ⃗']= β_1Δ_1Δ_2Δ_3 ℱ^(1,2)[Δ⃗, Δ⃗']= β_2(Δ_1Δ_2'Δ_3'+Δ_1'Δ_2Δ_3'+Δ_1'Δ_2'Δ_3). The term ℱ^(3,0) is allowed if A_1⊂ F_i⊗ F_i⊗ F_i, while the term ℱ^(1,2) is allowed if A_1⊂ F_i⊗ F_j'⊗ F_j'. Due to time-reversal symmetry, there is neither a linear nor a cubic term for Δ⃗'. Note that the third-order term in Eq. (<ref>) couples Δ⃗ and Δ⃗' in such a way that any finite Δ⃗' induces a finite Δ⃗ order, but not the other way around. In other words, while the TRS-preserving bond order may exist by itself, a finite Δ⃗' always induces bond order transforming as F_1 and F_2. In the following, we will see how the presence of these third-order terms significantly alters the phenomenology of the ordered phase as compared to the case without such terms. The fourth-order terms are again generically the same for all order-parameter combinations <cit.> ℱ^(4)= λ_1(Δ_1^2+Δ_2^2+Δ_3^2)^2 +λ_2(Δ_1'^2+Δ_2'^2+Δ_3'^2)^2 +λ_3(Δ_1^2+Δ_2^2+Δ_3^2)(Δ_1'^2+Δ_2'^2+Δ_3'^2) +λ_4(Δ_1^2Δ_2^2+Δ_1^2Δ_3^2+Δ_2^2Δ_3^2) +λ_5(Δ_1'^2Δ_2'^2+Δ_1'^2Δ_3'^2+Δ_2'^2Δ_3'^2) +λ_6(Δ_1^2Δ_2'^2+Δ_1^2Δ_3'^2+Δ_2^2Δ_3'^2 +Δ_1'^2Δ_2^2+Δ_1'^2Δ_3^2+Δ_2'^2Δ_3^2) +λ_7(Δ_1Δ_1'Δ_2Δ_2'+Δ_1Δ_1'Δ_3Δ_3'+Δ_2Δ_2'Δ_3Δ_3'). The inclusion of the fourth-order terms is necessary for the thermodynamic stability of the free energy. Furthermore, in the absence of third-order terms, the fourth-order terms determine the form of the symmetry-breaking combination, such as anisotropic or TRS-breaking, below T_ c. ℱ^(4) can also couple Δ⃗ and Δ⃗', such that, for example, the λ_3 term describes attraction (repulsion) between the order parameters for λ_3<0 (λ_3>0). Note, however, that due to the quadratic nature of this term, such an interaction between the order parameters only amounts to a change of the critical temperature of the secondary order parameter. §.§ Comment on three-dimensional ordering So far, our discussion has been based on a purely two-dimensional model. 
Despite the layered structure of the kagome metals, it is possible that three-dimensionality is important. Since some experiments report a 2×2×2 increase in the size of the unit cell <cit.> and DFT calculations report instabilities at the L-points <cit.>, we are driven to consider L-point charge ordering as well. There are again three inequivalent wavevectors, which in this case are L⃗_1,3=( ±π/a , π/a √(3),π/c), L⃗_2=(0,-2 π/a √(3),π/c), where c is the lattice constant in the vertical direction. Here, pure L-point third-order terms in the free energy will be absent, since the three momenta do not add up to zero. However, the second- and fourth-order terms will be present and unchanged with respect to the previous case, since only even powers of the order parameters appear and hence the L-point momenta in the z-direction add up to zero. In this sense, the pure L-point charge order has the same Landau theory as a pure M-point flux order <cit.>. In the case of the M-point order, it is TRS that forces the absence of a pure third-order term, whereas in the L-point case, it is z-momentum conservation. In general, if we combine an M-point charge order Δ⃗^M with L-point charge (or flux) order Δ⃗^L (or Δ⃗^' L), then all the allowed third-order terms take the schematic form Δ⃗^M(Δ⃗^L)^2, Δ⃗^M(Δ⃗^' L)^2. These terms are only present as long as the M-point order is even under C_2. Therefore, three-dimensional L-point bond or flux order always induces a subsidiary M-point charge order. § PURE BOND ORDER We start our discussion with pure bond order without any flux order. This case has already been extensively covered in Ref. Christensen and here we just summarize the most important results. The possible Landau theories only differ in the presence or absence of the third-order term, Eq. (<ref>). As can be seen from Tab. <ref>, F_1 and F_2 orders at the M point have a third-order term, while the F_3 and F_4 orders at the M point, as well as any L-point orders lack these terms. Note again that the absence of the third-order term for F_3 and F_4 is an immediate consequence of them being odd under C_2. §.§ Without third-order term In this case, we have a second-order phase transition, when the coefficient of the quadratic term switches sign, in other words exactly at T_ c. The specific form of the ordering is determined by the fourth-order terms. With only Δ⃗ present, only the λ_1 and λ_4 fourth-order terms are present. λ_1 alone does not break any degeneracy, irrespective of its value. λ_4>0 then leads to an anisotropic solution immediately below the charge-ordering temperature: Only one of the components Δ_i will be non-zero. On the contrary, λ_4<0 favors an isotropic solution. There are two degenerate isotropic solutions, since the free energy is independent of the sign of the individual order-parameter components: For the F_3 and F_4 irreps, the cases with all components the same sign (Δ_1=Δ_2=Δ_3) and one component with the opposite sign (Δ_1=-Δ_2=-Δ_3 or cyclic variations) are degenerate and related by C_2. §.§ With third-order term In the presence of a third-order term in the free energy, the phase transition changes to first order. Generically, the free energy takes the form ℱ=α (T-T_ c) Δ^2+bΔ^3+cΔ^4, which undergoes a first-order transition at T̃_ c = T_ c + b^2/(4α c) > T_ c, where the order parameter jumps to a finite value Δ_0=-b/2c. Further, the third-order term lifts the degeneracy between the isotropic solutions. 
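The shift of the transition temperature and the jump of the order parameter quoted above are easy to verify numerically. A minimal Python sketch, with arbitrary illustrative coefficients (not derived from any microscopic model), minimizes ℱ on a grid of Δ just above and just below the predicted T̃_c:

import numpy as np

alpha, Tc, b, c = 1.0, 1.0, -0.3, 0.5    # illustrative values (b < 0 so Delta0 > 0)
Tc_tilde = Tc + b**2 / (4 * alpha * c)   # predicted first-order transition temperature
Delta0 = -b / (2 * c)                    # predicted jump of the order parameter

def minimizing_delta(T, dgrid=np.linspace(0.0, 2.0, 20001)):
    F = alpha * (T - Tc) * dgrid**2 + b * dgrid**3 + c * dgrid**4
    return dgrid[np.argmin(F)]

for T in (Tc_tilde + 1e-3, Tc_tilde - 1e-3):
    print(f"T = {T:.4f}: minimizing Delta = {minimizing_delta(T):.3f}")
print(f"predicted Tc_tilde = {Tc_tilde:.4f}, Delta0 = {Delta0:.3f}")

Just above T̃_c the global minimum sits at Δ = 0, while just below it the minimizing Δ jumps to a value close to Δ_0, confirming the first-order character of the transition.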
In particular, for λ_4≤0 and β_1<0, the configuration with sign(Δ_1Δ_2Δ_3)>0, referred to as tri-hexagonal ordering for the case of F_1, is favored. When β_1>0, the configuration sign(Δ_1Δ_2Δ_3)<0, the so-called Star-of-David ordering for F_1, is favored. Finally, we can consider the case where λ_4>0 in the presence of a third-order term. In that case, the first-order transition into the |Δ_1|=|Δ_2|=|Δ_3| order is followed by a crossover to |Δ_2|=|Δ_3|≈0<|Δ_1| (or cyclic variations) at lower temperatures. § COUPLED BOND AND FLUX ORDER We now consider the case, where bond order and flux order coexist. Following our analysis in Sec. <ref>, we only have to consider flux orders belonging to the F_2' and F_4' irreps, while for bond order, all three-dimensional irreps have to be considered. Translated into our irrep classification, previous work has studied various combinations of bond and flux order parameters: In Ref. Denner, the authors study the single order parameter combination F_1 and F_4'; Ref. Park considers the combination F_1 and F_2'; Ref. Lin considers F_1 and F_4'; and Refs. Grandi and Christensen2 consider a variety of different order-parameter combinations. We construct the free energy ℱ_i j for coupling bond order F_i with flux order F_j'. In the absence of additional perturbations, there are two cases shown in Tab. <ref>: Firstly, we consider coupling bond order that is even under C_2 (from the F_1 or F_2 irreps) to any flux order (from the F_2' or F_4' irrep). Then, the free energy ℱ^(2)+ℱ^(3,0)+ℱ^(1,2)+ℱ^(4) includes all third-order terms. Secondly, we consider coupling bond order that is odd under C_2 (from the F_3 or F_4 irreps) to any flux order (from the F_2' or F_4' irrep). The corresponding free energy ℱ^(2)+ℱ^(4) consequently has no third-order terms. For both of these two cases, we next present a phase diagram, where we tune the relative strength of the Δ⃗ and Δ⃗' order by tuning their relative critical temperature T_c-T_c'. §.§ Without third-order term Figure <ref>a shows the phase diagram without any third-order terms. This is the case, for example, for ℱ_34. Since Δ⃗ and Δ⃗' are only coupled via the fourth-order term, we can have either order parameter existing alone. The phase diagram splits into three ordered regions that are entered through second-order transitions: one where Δ⃗ exists alone (T_c>T≳ T_c'), one where Δ⃗' exists alone (T_c'>T≳ T_c) and one where both coexist. These three regions are separated by second-order phase transitions. As in the case of pure bond order, the fourth-order term can introduce an anisotropy. The terms with coefficients λ_1, λ_2 and λ_3 do not break the degeneracy between isotropic and anisotropic solutions. λ_4<0 favours an isotropic solution for the bond order, while λ_4>0 favours an anisotropic solution. λ_5 plays the same role for the flux order. λ_6>0 may also introduce anisotropy, but this term is only active if both bond and flux order are present. TRS is broken any time Δ⃗' is non-zero. Note that without a third-order term, there is a symmetry under exchanging Δ⃗ and Δ⃗' (as well as exchanging the corresponding coefficients of the free energy). This explains the left-right symmetry in the phase diagram. §.§ With third-order term Figure <ref>b shows the phase diagram with third-order terms, such as the free energy ℱ_1 2. In this case, a finite Δ⃗' always induces a subsidiary Δ⃗. 
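Phase diagrams of this type can be generated by directly minimizing ℱ^(2)+ℱ^(3,0)+ℱ^(1,2)+ℱ^(4) over the six order-parameter components and classifying which of Δ⃗ and Δ⃗' are nonzero at the minimum. The Python sketch below is one minimal way to do this with random restarts; the Landau coefficients are illustrative placeholders (only λ_1, λ_2, λ_3 are kept nonzero) rather than values fitted to any material, and the function names are ours. Sweeping T and T_c - T_c' and recording the outcome reproduces the qualitative structure of the phase diagrams discussed here.

import numpy as np
from scipy.optimize import minimize

# Illustrative Landau coefficients (not fitted to any material).
alpha, alphap = 1.0, 1.0
beta1, beta2 = -0.2, -0.3          # third-order terms allowed for the F_1 + F_2' case
lam1, lam2, lam3 = 0.5, 0.5, 0.1   # quartic terms; the remaining lambdas are set to zero

def free_energy(x, T, Tc, Tcp):
    d, dp = x[:3], x[3:]
    f2 = alpha * (T - Tc) * d @ d + alphap * (T - Tcp) * dp @ dp
    f3 = beta1 * d.prod() + beta2 * (d[0]*dp[1]*dp[2] + dp[0]*d[1]*dp[2] + dp[0]*dp[1]*d[2])
    f4 = lam1 * (d @ d)**2 + lam2 * (dp @ dp)**2 + lam3 * (d @ d) * (dp @ dp)
    return f2 + f3 + f4

def ground_state(T, Tc=1.0, Tcp=0.8, tries=30, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(tries):
        res = minimize(free_energy, 0.5 * rng.standard_normal(6),
                       args=(T, Tc, Tcp), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best.x

x = ground_state(T=0.7)
print("Delta  =", np.round(x[:3], 3))
print("Delta' =", np.round(x[3:], 3))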
Unlike the case without third-order terms, where there are three regions and the phase-transition into the coexistence region happens at a singular point (T_ c = T_ c') and is always second order, here the phase diagram only has two regions: one, where Δ⃗ appears alone and TRS is preserved and one, where both order parameters coexist and TRS is broken. As in the pure bond-order case, the transition into the former is always first order with a (potential) additional second-order transition into the coexistence region. The direct transition into the coexistence region changes from first order for T_ c≈ T_ c' and Δ'∼Δ to second order for T_ c' ≫ T_ c and Δ'≫Δ. Within the ordered phase, there is then a crossover between these domains. More details on the transitions can be found in appendix <ref>. Further, the third-order term can induce anisotropy even when the fourth-order term prefers an isotropic solution (λ_n=0 for n>3). For Δ'=0, the solutions are isotropic, but when Δ'≠0, we can have an anisotropic solution with |Δ_i| ≠ |Δ_j| for some i,j, if the order parameter is large enough for the third-order term to be relevant. Finally, the third-order term now breaks the symmetry of exchanging Δ⃗ and Δ⃗', such that the phase diagram is no longer left-right symmetric. § SYMMETRY-BREAKING §.§ Coupling to strain Uniaxial strain has proven to be a very effective perturbation to probe correlated orders in two-dimensional and layered materials. Uniaxial strain was used in layered systems such as the cuprates to probe the charge density wave <cit.> or Sr_2RuO_4 <cit.>, where it lifts the degeneracy of critical temperatures for the superconducting and TRS-breaking states, while the ground state in twisted bilayer graphene changes drastically under strain <cit.>. Indeed, recent experiments indicate that coupling to strain significantly alters the transport properties of kagome metals <cit.> and we therefore include such coupling in our Landau theory. The strain matrix in terms of the displacement field 𝐮 is given by ϵ_αβ=1/2(∂_α u_β+∂_β u_α). The coupling to strain is independent of the order parameters we consider and has the form (see appendix <ref>) ℱ^(str)= μ_2[(ϵ_xx-ϵ_yy)(Δ_1^2-Δ_2^2/2-Δ_3^2/2) +ϵ_xy√(3)(Δ_2^2-Δ_3^2)] +μ_3[(ϵ_xx-ϵ_yy)(Δ_1'^2-Δ_2'^2/2-Δ_3'^2/2) +ϵ_xy√(3)(Δ_2'^2-Δ_3'^2)]. Being quadratic in the order parameters, strain shifts the critical temperatures of different components of the order parameters and as such, it is possible that strain induces an order even though the temperature is too high in the strainless state. For example, it is possible for the strain to increase the critical temperature of one of the components of the flux order and thereby break TRS. Finally, we note that having a first-order transition relies on the third-order term being relevant. The presence of strain favors some of the components of Δ⃗ over others and, therefore, weakens the effect of the third-order term Δ_1Δ_2Δ_3. Strain thus generally weakens the first-order nature of the transition into the ordered state. §.§ Anisotropy As anisotropy we denote the breaking of rotation symmetry in a system. In the context of crystalline systems, where rotation symmetry is already discrete, anisotropy then refers to a further reduction, such as the breaking of C_6 down to C_2. In the context of a charge-density-wave instability with multiple inequivalent wave vectors, an anisotropy can arise from different magnitudes of the individual order-parameter components. 
Note that in the following, we consider purely in-plane order, since for finite out-of-plane momentum (such as L-point ordering), the rotational symmetry can be trivially broken <cit.>. Strain explicitly breaks rotation symmetry and will thus introduce an anisotropy. The susceptibility towards an anisotropy in a small strain field can thus serve as an indicator for the anisotropy in the system. In particular, the susceptibility of a C_3-breaking order parameter to strain will diverge for an anisotropic ground state. We thus calculate {Δ_1^2-Δ_2^2/2-Δ_3^2/2,√(3)/2(Δ_2^2-Δ_3^2)} to assess whether rotational symmetry is broken and, similarly, for the corresponding order parameter for flux order. In appendix <ref>, we show how the susceptibility is related to this `order parameter' ∑_i,j[(Δ_i^2-Δ_j^2)^2+(Δ_i^'2-Δ_j^'2)^2]. Below, we discuss the conditions for an anisotropy in the solution to the Landau free energy in the cases without and with third-order terms. §.§.§ Without third-order term As shown in Sec. <ref>, pure bond order is anisotropic when the fourth-order term λ_4>0. Similarly, flux order is anisotrpic when λ_5>0. In addition, λ_6>0 leads to an anisotropic solution, if both bond and flux order are non-zero. Note that the non-zero component of the two orders is the same, meaning a solution of the form Δ_i≠0, Δ_i'≠0 is stable. §.§.§ With third-order term Anisotropy can arise from the third-order term in the free energy even when the fourth-order terms favor an isotropic solution. To see this, consider the terms β_1Δ_1Δ_2Δ_3+β_2(Δ_1Δ_2'Δ_3'+Δ_1'Δ_2Δ_3'+Δ_1'Δ_2'Δ_3) as perturbations with Δ^2=Δ_1^2+Δ_2^2+Δ_3^2 and Δ'^2=Δ_1'^2+Δ_2'^2+Δ_3'^2 being fixed by the second order and fourth order terms in the free energy. Let us assume β_1>0 and β_2<0 (the case β_1<0 and β_2>0 can be obtained by flipping the sign of Δ_i) . We can then compare the energies of two extreme cases: For an isotropic solution Δ_1=Δ_2=Δ_3=Δ/√(3) and Δ_1'=Δ_2'=Δ_3'=Δ'/√(3), the third order terms yield an energy |β_1|/3√(3)Δ^3-|β_2|/3√(3)ΔΔ'^2 while the anisotropic solution Δ_1=Δ and Δ_2'=Δ_3'=Δ'/√(2) has energy -|β_2|/2ΔΔ'^2. Therefore the isotropic solution will be favoured when (Δ'/Δ)^2>(3√(3)/2-1)|β_1|/|β_2|. Increasing the ratio Δ'/Δ corresponds to moving to the left in the phase diagram of Figure <ref>b. Similarly, the anisotropic solution gives way to an isotropic solution when Δ is large (see appendix <ref> for details). This leads to the wedge-shaped region in the phase diagram where the solution is anisotropic. §.§ Coupling to a z-axis magnetic field The lowest-order coupling to a magnetic field is linear in field and quadratic in the order parameters. However, only some of the order-parameter combinations couple to a magnetic field at this lowest order. The magnetic field breaks TRS and transforms under the A_2 irrep of C_6v”'. Therefore, for the order parameters to couple in this manner to the magnetic field, we require A_2⊂ F_i⊗ F_j'. If the coupling is allowed (see Tab. <ref>), it takes the form ℱ^(B)=μ_1B(Δ_1Δ_1'+Δ_2Δ_2'+Δ_3Δ_3'). We note that this is an unusual form of magnetic field coupling to (translational-symmetry-breaking) order parameters since the magnetic field couples linearly. This is only possible since we have both TRS-breaking and TRS-preserving orders with the same wavevectors. Importantly, this term fixes the relative sign between Δ⃗ and Δ⃗'. Without a magnetic field, there can be domains with opposite relative signs. 
Applying a magnetic field causes the domains to flip such that the relative sign is the same everywhere. We note in passing that this form of the coupling to the magnetic field would also be possible if Δ⃗ and Δ⃗' are both L-point orders, but not for a combination of an L-point and M-point order. §.§.§ Without third-order term Figure <ref>a shows the phase diagram in the presence of a magnetic field, when there is no third-order terms in the free energy. The only order-parameter combination that has the lowest-order coupling to a magnetic field, while lacking a third-order term, is F_3 with F_4', in other words the free energy ℱ_34. In this case, the magnetic field couples Δ⃗ and Δ⃗' and hence, the two orders always coexist. The regions Δ≫Δ', Δ∼Δ', and Δ≪Δ' are now separated by crossovers. In the region T_c∼ T_c', the critical temperature is enhanced as both order parameters condense at the same time. While TRS is trivially broken in the entire phase diagram, the magnetic field does not induce an anisotropy if not present already. Note that while in Ref. Tazai the effect of magnetic field in a Landau theory of the kagome metals was considered, only the case without third-order terms was treated. §.§.§ With third-order term Figure  <ref>b, finally, shows the phase diagram in the presence of a magnetic field, when there is a third-order term in the free energy. The only order parameter combination that has both the lowest-order coupling to a magnetic field and a third-order term is F_1 with F_2', in other words the free energy ℱ_1 2. The two order parameters are again coupled and always appear together. The regions Δ≫Δ', Δ∼Δ', and Δ≪Δ' are separated by crossovers. Due to the third-order term, an anisotropy can be induced, if Δ and Δ' both become large enough. In the region where T_c>T_c', adding the magnetic field increases the strength of the flux order, thereby enhancing the anisotropy via the third-order term. This is shown explicitly in appendix <ref>. Again, TRS is trivially broken in the entire phase diagram. § CONSTRAINTS FROM EXPERIMENTS ON AV_3SB_5 While our phenomenological description of charge density orders in kagome metals is valid for any such system, we comment in the following on the consequences of our discussion for the AV_3Sb_5 family. There are several experimental facts, which any theory of the charge-ordered (normal) state should reproduce: * A 2×2 increase in the in-plane unit cell at T_c as observed by X-ray diffraction <cit.> and STM <cit.>. * The transition at T_ c appears to be first order. First, the heat capacity displays a sharp peak at T_ c <cit.>, which is a general feature of a first-order transition. In addition, X-ray scattering <cit.> and NMR <cit.> show discontinuities at T_ c, further suggesting a first-order transition. Finally, transport <cit.> does not see an extended region of fluctuations but a rather abrupt change, especially for out-of-plane conductivity. * TRS is/can be broken below T'<T_ c, as seen in muon spin-relaxation <cit.> and Kerr rotation <cit.>. It is currently uncertain whether this is truly spontaneous TRSB or whether it is a giant response to a small applied magnetic field. In either case, it is clear that the coupling of the order to (out-of-plane) magnetic fields is important: A small field leads to a giant anomalous Hall response <cit.> and a large μSR response <cit.>. Finally, such a magnetic field (linearly) couples the chirality of the ordered state <cit.>. 
Experimental fact <ref> implies we should consider the translational-symmetry breaking orders of C_6v”', in other words the four three-dimensional irreps denoted as F_i, i=1,… 4. Experimental fact <ref> requires the presence of a third-order term in ℱ. Absent a third-order term, the phase-transitions in the Landau free energy are generically second order. Experimental fact <ref> suggests that the system is at least very tunable towards TRSB order, which should therefore lie close in energy to the ground state. This implies we should consider additional flux order, in other words F_2' and F_4'. In addition, experimental fact <ref> requires a term that (linearly) couples the magnetic field to the order parameters. In our classification, the only order-parameter combination that has third-order terms as well as (linear) coupling to a magnetic field is F_1 with F_2'. There is evidence from experiments and density functional theory that the charge order transforms as the F_1 irrep <cit.>, which supports the conclusion reached above using different means. The nature of the flux order has not been determined yet by other means. § PROPOSALS FOR FUTURE EXPERIMENTS ON AV_3SB_5 §.§ Transport With the order parameter combination F_1 and F_2' we are able to explain the striking experimental results seen in transport in Ref. <cit.>. The experiment reported isotropic transport in the absence of strain, however application of an out-of-plane magnetic field leads to anisotropic transport. If we are in the regime where T_ c≫ T_ c', then only Δ is induced at T_ c and this order is isotropic. However, a magnetic field will induce Δ' at T_ c as well (due to the μ_1 term in the free energy). The β_2 third-order term coupling Δ and Δ' can then lead to anisotropy as outlined in Sec. <ref>. One natural future direction to explore is how closely the transport anisotropy is related to the charge order. It has been shown that with sufficient Nb and Ta doping on the V-site, the chemical pressure can suppress the charge density wave transition down to lower or even zero temperature <cit.>. The report of an isotropic superconducting gap suggests the absence of anisotropy without applying an external field <cit.>, but the field- or strain-induced anisotropy is yet to be explored. Therefore, we propose a complete mapping of anisotropy across the doping phase diagram in order to explore how the field- and strain- dependence of the electronic anisotropy evolves when the charge order is suppressed or absent. Meanwhile, previous experimental results showed that when a magnetic field is applied, the direction of transport anisotropy seems to be pinned to a particular direction most likely due to the small uniaxial strain <cit.>. It is worth checking whether this pinning can be altered by a strong current pulse which may overcome the barrier of the pinning energy. Moreover, the angular dependence of magnetoresistance (MR) seems to suggest a two-fold symmetry which indicates a breaking of the six-fold in plane rotation symmetry <cit.>. Yet, this technique is strongly limited by the misalignment of the magnetic field direction due to the huge magnetoresistance spike with in-plane field <cit.>. Therefore, measuring the angular dependence of the magnetoresistance with a spherical rotation of the magnetic field direction would reveal the true in-plane component of MR with and without an out-of-plane field component. Furthermore, we are working with the assumption that there is no anisotropy in the strain-free case. 
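The field-induced flux order invoked in this argument can be estimated at lowest order: for T_c' < T < T_c, minimizing the quadratic term of Δ⃗' together with the μ_1 coupling, and neglecting the β_2 and quartic couplings of Δ⃗', gives Δ_i' ≈ -μ_1 B Δ_i / [2α'(T-T_c')]. A small field therefore induces a flux component linear in B that inherits the pattern of the bond order; once Δ⃗' is sizable, the β_2 term can generate the anisotropy discussed in Sec. <ref>. A short Python sketch of this estimate, with purely illustrative placeholder coefficients:

import numpy as np

# Lowest-order estimate of the field-induced flux order for Tc' < T < Tc:
#   Delta_i' ~= -mu1 * B * Delta_i / (2 * alpha' * (T - Tc')),
# obtained by minimizing alpha'(T - Tc') Delta_i'^2 + mu1 * B * Delta_i * Delta_i'.
alphap, Tcp, mu1 = 1.0, 0.5, 0.2            # illustrative placeholder coefficients
T, Delta = 0.8, np.array([1.0, 1.0, 1.0])   # isotropic bond order, T above Tc'

for B in (0.0, 0.05, 0.1):
    Delta_p = -mu1 * B * Delta / (2 * alphap * (T - Tcp))
    print(f"B = {B:.2f}: Delta'/Delta = {Delta_p[0] / Delta[0]:+.3f}")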
Elastoresistance measurements such as those in Ref. Nie are interesting in order to quantify the dependence on strain. Most of the theoretical treatment of our work assumes a two-dimensional nature of the charge ordering. One important open question is how much of the physics of the charge density order is two-dimensional. In particular, a possible explanation of the data in Ref. guo2023correlated observing isotropic transport is that while the transport in a given layer is anisotropic, the transport averages over different layers such that one obtains isotropic transport. In order to rule out this scenario, one could probe the transport in a two-dimensional film of the material. Indeed, there has been recent progress in manufacturing thin films of the kagome metals via exfoliation <cit.>. It would be exciting to reach the monolayer limit and measure transport anisotropy in that case. §.§ STM Another approach to detect whether the isotropic transport behaviour in the low strain devices of Ref. guo2023correlated arises due to averaging over many domains/layers would be to perform local measurements of the anisotropy, for example via STM, on ultra-low strain devices. Reference guo2023correlated reports anisotropic transport once a magnetic field is switched on. It would be interesting to perform an STM study (or indeed an X-ray scattering experiment) when the sample is in a magnetic field, in order to establish the source of this anisotropy. However, one should caution that STM is a surface probe and the physics probed at the surface may not necessarily be representative of the physics in the bulk. §.§ TRSB Currently the results on TRSB in Kagome metals are still controversial, in particular, there is no consensus on whether there is spontaneous TRSB in zero magnetic field or whether the TRSB is induced by the small training fields used in experiments. In terms of the probes used to investigate (spontaneous) TRSB in the Kagome metals, there have been both Kerr effect and muon spin rotation experiments. Another probe that could be used to detect the circulating currents predicted in these materials is neutron scattering. This technique has been successfully used to detect the loop currents in the pseudogap phase of the cuprates <cit.>. Our Landau theory as well as recent experiments <cit.> show that kagome metals can be very sensitive to strain. While these experiments showed that there was no observable dependence of the charge-ordering temperature on the strain, it has yet to be established whether the TRSB temperature depends on strain. §.§ Stiffness measurements Transport experiments have revealed that the anisotropy in strain-free samples increases when a magnetic field is applied and this should yield observable signatures in measurements of elements of the stiffness matrix, as we elaborate on below. In addition to the coupling between the strain and the order parameters, there is also a contribution to the free energy coming from the elastic energy itself. Using the Voigt notation for the strain tensor, i.e., introducing the six-dimensional vector ϵ=(ϵ_xx,ϵ_yy,ϵ_zz,2ϵ_yz,2ϵ_xz,2ϵ_xy), we can write the elastic energy as ℱ^(ϵ)=1/2∑_αβc_αβϵ_αϵ_β, where c_αβ is the stiffness matrix. For a system with D_6h symmetry, there are five independent components of the stiffness matrix <cit.>: c_11,c_12,c_13,c_33,c_44. In general, the components of the stiffness matrix can be discontinuous at a second-order phase transition, which can be measured in experiments. 
The discontinuities are given by the formula <cit.> Δ c_αβ=∑_γδ∂^2ℱ^(str)/∂ϵ_α∂ D_γ∂^2ℱ^(str)/∂ϵ_β∂ D_δ(∂^2ℱ/∂ D_γ∂ D_δ)^-1, where D⃗=(Δ⃗,Δ⃗') is the six-dimensional vector combining the order parameters. As shown in Ref. Grandi, at the onset of isotropic charge ordering, there will be discontinuities Δ c_11, Δ c_12,Δ c_13,Δ c_33. On the other hand, at the onset of anisotropic order, there will be further independent components of the stiffness matrix, which are discontinuous, namely Δ c_22≠Δ c_11 and Δ c_13≠Δ c_23. Therefore, one interesting experiment would be to measure Δ c_22-Δ c_11 and Δ c_13-Δ c_23 in a pristine (strain-free) sample as a function of an applied magnetic field. Resonant ultrasound spectroscopy is a method used to measure the elements of the elastic stiffness matrix <cit.>. It would be interesting to perform these experiments as a function of the applied magnetic field, though resonant ultrasound spectroscopy in a magnetic field is challenging and it is possible that pulse-echo sound velocity measurements are more realistic. § CONCLUSION Kagome metals are known to undergo charge ordering with a 2× 2 increase in the size of the unit cell. Using a group theory analysis, we write down all possible site, bond and flux ordering that is consistent with the system symmetries. The observation of TRSB and coupling to a magnetic field further motivates studying TRSB flux order. Flux and bond order are natural partners since they are the imaginary and real part of the same underlying order parameter and so in general, we expect these two orders to be coupled. This leads us to study all possibilities for the coupling of flux and bond order. We use a Landau analysis to study the interplay of the two orders. The different types of flux and bond order lead to differences in the third order term in the Landau theory as well as differences in the coupling to the magnetic field. We construct the phase diagrams for different types of coupled flux and bond order. Depending on the type of order, the two order parameters corresponding to flux and bond order may appear separately or always in unison. By synthesizing various experimental results and comparing to the Landau theory phase diagrams, we are able to deduce that the most likely candidate is a tri-hexagonal or Star of David bond order with a subsidiary C_2-preserving flux order. § ACKNOWLEDGEMENTS We thank F. Grandi and M. Christensen for comments on a previous version of the manuscript. We also thank R. Fernandes for helpful discussions. This project was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant nos. ERC-StG-Neupert-757867-PARATOP (GW, TN) and 715730 (CG, PJWM). TN acknowledges funding from the Swiss National Science Foundation (Project 200021E 198011) as part of the QUAST FOR 5249-449872909 (Project P3). unsrtnat — Supplementary Material — Phenomenology of bond and flux orders in kagome metals Glenn Wagner, Chunyu Guo, Philip J. W. Moll, Titus Neupert, Mark H. Fischer § GROUP THEORY DETAILS For the site order, we compute the permutation matrix 𝒫_s(g) that describes how the sites transform into each other under the operation g∈ C_6v”'. Since there are 12 sites in the 2×2 enlarged unit cell, the set of permutation matrices describe a 12-dimensional reducible representation of C_6v”'. 
In order to decompose this representation into irreps, we compute the characters χ(g)=Tr(𝒫_s(g)), in other words, we count the number of sites that map to themselves under the operations of C_6v”'. We find We can then compute the multiplicity n_R of the irrep R in the decomposition of this reducible represention via n_R=1/|C_6v”'|∑_g∈ C_6v”'χ_R(g)χ(g), where the characters χ_R(g) are listed in the character table (Tab. <ref>). This leads to the decomposition of site order 𝒫_s=A_1+E_2+F_1+F_3+F_4. The bond and flux order can be treated similarly. § MULTIPLICATION TABLE OF IRREPS OF C_6V”' We want to decompose the product of two irreps R_i and R_j into a sum of irreps R_i⊗ R_j=⊕_kn_kR_k, where n_k=1/|C_6v”'|∑_g∈ C_6v”'χ_k(g)χ_i(g)χ_j(g). We list the results in Tab. <ref>. § COUPLING TO STRAIN The crucial symmetry to determine the coupling to strain is C_3. The components of the strain tensor transform in the E irrep, which transforms non-trivially under C_3, and so in order to obtain a scalar (i.e. a term that we can add to the free energy), we need to construct a term out of the order parameter that also transforms under the E irrep. The basis functions of the E irrep are conventionally labelled as p_±=p_x± ip_y which pick up phases of ω=e^2π i/3 and ω^* respectively under C_3. Looking at the transformation properties of the order parameter under C_3, we find that the quadratic terms that transform correctly are p_+ =Δ_1^2+ωΔ_2^2+ω^2Δ_3^2, p_- =Δ_1^2+ω^2 Δ_2^2+ωΔ_3^2, where ω=e^2π i/3. Then we write these as p_±=(p_x± ip_y) with p_x = Δ_1^2- Δ_2^2/2- Δ_3^2/2, p_y =√(3)/2(Δ_2^2- Δ_3^2). One can check that the doublet {p_x,p_y} transforms in the same way under C_3 as {ϵ_p_x,ϵ_p_y}={(ϵ_xx-ϵ_yy)/2 ,ϵ_xy}, allowing us to construct the C_3-symmetric term to be added to the free energy: ℱ^(str)=2μ_2(ϵ_p_xp_x+ϵ_p_yp_y) =μ_2[(ϵ_xx-ϵ_yy)(Δ_1^2-Δ_2^2/2-Δ_3^2/2)+ϵ_xy√(3)(Δ_2^2-Δ_3^2)]. The analogous term for flux order automatically respects time-reversal symmetry since it is quadratic in the order parameter and hence flux order couples to strain in the same manner (with a different coupling coefficient). The coupling to strain gives us a way to quantify the anisotropy in the system. For an isotropic phase, the order parameters p_x and p_y will be zero. If the solution is anisotropic, there will be degenerate solutions with different values of p_x and p_y and application of strain will pick out one of these degenerate solutions leading to divergent susceptibility. Let us consider the response to strain in the system at finite temperature T. The expectation value of the symmetry-breaking order parameter is ⟨ p_x⟩_ℱ(ϵ_p_x),T=⟨ p_xe^-βℱ^(str)⟩_0,T=⟨ p_xe^-2βμ_2ϵ_p_xp_x⟩_0,T, where ⟨…⟩_0,T denotes the finite-temperature average with respect to the zero-strain free energy ℱ(ϵ_p_x=0) and β=1/(k_BT). The susceptibility is then χ_p_x=lim_ϵ_p_x→0∂⟨ p_x⟩_ℱ(ϵ_p_x),T/∂ϵ_p_x=-2βμ_2⟨ p_x^2⟩_0,T, and similarly χ_p_y=-2βμ_2⟨ p_y^2⟩_0,T. The susceptibilities diverge when T→0 if p_x^2 or p_y^2 acquire a finite expectation value in the ground state. This is the signature of spontaneous symmetry breaking. This motivates us to introduce the anisotropic order parameter 1/βTrχ= 1/β(χ_p_x+χ_p_y)=-2μ_2(p_x^2+p_y^2)=-4βμ_2∑_i,j(Δ_i^2-Δ_j^2)^2 and an analogous order parameter can be written down for the flux order. § FIRST-ORDER TRANSITION When T_c∼ T_c', there is a first-order transition into a phase with both charge and flux order. 
Let us consider the free energy ℱ =α(T-T_c)Δ^2+α(T-T_c')Δ'^2+bΔΔ'^2+c(Δ^2+Δ'^2)^2=Aψ^2+Bψ^3+Cψ^4 where we used the parametrization Δ+iΔ'=ψ e^iϕ and A= α(T-T_c)cos^2ϕ+α(T-T_c')sin^2ϕ=α[T-T_ccos^2ϕ-T_c'sin^2ϕ] B= bcosϕsin^2ϕ C= c. The minimum of the free energy is at a non-zero value of the order parameter when A<B^2/4C i.e. when T<T_1=T_ccos^2ϕ+T_c'sin^2ϕ+b^2/4α ccos^2ϕsin^4ϕ=T_c+(T_c'-T_c)[sin^2ϕ+η(sin^4ϕ-sin^6ϕ)], where η=b^2/[4α c(T_c'-T_c)]. We will have a first-order transition, when there is a non-zero solution at T>T_c,T_c'. Consider first the case where T_c'>T_c. Maximizing the term in square brackets leads to max_ϕ(T_1)=T_c+(T_c'-T_c)(η+√(η(3+η)))(6+η+√(η(3+η)))/27η, when sin^2ϕ=(η+√(η(3+η)))/(3η)≤1. We have max_ϕ(T_1)>T_c' when η>1. Similarly, when T_c>T_c', we can write T_1=T_c'+(T_c-T_c')[cos^2ϕ-ηcos^2ϕ(1-cos^2ϕ)^2]. Maximizing with respect to ϕ yields max_ϕ(T_1)=T_c'+(T_c-T_c')2(2η+√(η(3+η)))(3-η+√(η(3+η)))/27η, and max_ϕ(T_1)>T_c when η<-4. So we have a first-order transition in a range where the critical temperatures of the two orders are similar: b^2/4α c>(T_c'-T_c)>-b^2/16α c. § ANISOTROPY FROM THIRD-ORDER TERM Anisotropy can arise from the third-order term in the free energy even when the fourth-order terms favor an isotropic solution. To see this, consider the terms ℱ^(3)=β_1Δ_1Δ_2Δ_3+β_2(Δ_1Δ_2'Δ_3'+Δ_1'Δ_2Δ_3'+Δ_1'Δ_2'Δ_3) as perturbations with Δ^2=Δ_1^2+Δ_2^2+Δ_3^2 and Δ'^2=Δ_1'^2+Δ_2'^2+Δ_3'^2 being fixed by the second-order and fourth-order terms in the free energy. The results derived below will therefore be exact in the limit β_1→0, β_2→0, though the numerics demonstrate that the results are similar for finite β_1,β_2. We study the case β_1β_2<0. The signs of β_1 and β_2 can always be flipped by changing the sign of Δ_i, so without loss of generality, we assume β_1>0 and β_2<0. We now study the competition between four different solutions of the free energy, which will be the ground states as we traverse the phase diagram in Fig. <ref> from left to right. * Solution 1: Δ_1=Δ_2=Δ_3=Δ/√(3) and Δ_1'=Δ_2'=Δ_3'=Δ'/√(3) leads to ℱ^(3)=|β_1|/3√(3)Δ^3-|β_2|/√(3)ΔΔ'^2 * Solution 2: Δ_3=Δ and Δ_1'=Δ_2'=Δ'/√(2) leads to ℱ^(3)=-|β_2|/2ΔΔ'^2 * Solution 3: Δ_1=-Δ_2=Δ_3=Δ/√(3) and Δ_1'=Δ_2'=Δ'/√(2) leads to ℱ^(3)=-|β_1|/3√(3)Δ^3-|β_2|/2√(3)ΔΔ'^2 * Solution 4: Δ_1=-Δ_2=Δ_3=Δ/√(3) and Δ_i'=0 leads to ℱ^(3)=-|β_1|/3√(3)Δ^3 Let us further assume second-order terms ℱ^(2)=α(T-T_c)Δ^2+α(T-T_c')Δ'^2 and fourth-order terms ℱ^(4)=λ(Δ^4+Δ'^4) such that Δ^2=α(T_c-T)/2λ, Δ'^2=α(T_c'-T)/2λ. Comparing the energies of solution 1 and solution 2, we find that the transition occurs at T^*=T_c-ζ T_c'/1-ζ, where ζ=3(1-√(3)/2)|β_2|/|β_1|. We plot this transition temperature as a blue line in Fig. <ref>. Solutions 1 and 4 are isotropic, while solutions 2 and 3 are anisotropic. § ANISOTROPY IN THE PRESENCE OF A MAGNETIC FIELD The magnetic field adds a term to the free energy ℱ^(B)=μ_1B(Δ_1Δ_1'+Δ_2Δ_2'+Δ_3Δ_3'). The effect of the magnetic field is to induce Δ' as soon as Δ is present. Let us assume without loss of generality B<0 (we can always switch the sign of Δ_i' to adapt to the sign of B). Of the four solutions, the only solution that can take advantage of ℱ^(B) and lower its energy is solution 1, which has a contribution -BΔΔ'. None of the other solutions have changes in energy to this order in the perturbation theory. Therefore solution 1 becomes more stable and the boundary between solution 1 and solution 2 shifts to the right, this is shown in Fig. <ref>. 
The result of this effect is that the magnetic field can increase the anisotropy of the order parameters, as shown in Figs. <ref> and <ref>. In particular, since the phase boundary between solution 1 and solution 2 shifts to the right, the value of Δ is larger when the components become anisotropic, and this results in a larger anisotropy. We note that if we are in the part of the phase diagram where T_c<T_c', then the magnetic field has the opposite effect and suppresses the anisotropy (Fig. <ref>).
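The competition between the four solutions discussed above can also be checked by minimizing the truncated free energy numerically over the six order-parameter components. The sketch below is not part of the original analysis: it only assumes the terms ℱ^(2), ℱ^(3), ℱ^(4), and ℱ^(B) written above, and every coefficient value (α, λ, β_1, β_2, μ_1, B, T_c, T_c') is an arbitrary placeholder chosen to lie in the regime β_1>0, β_2<0. The printed quantity ∑_i,j(Δ_i^2-Δ_j^2)^2 is used as a proxy for the anisotropy.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder Landau coefficients (illustrative values only).
alpha, lam = 1.0, 1.0        # second- and fourth-order coefficients
beta1, beta2 = 0.3, -0.2     # third-order couplings, beta1 > 0, beta2 < 0
mu1, B = 1.0, 0.0            # linear coupling to the out-of-plane field B
Tc, Tcp = 1.0, 0.8           # bare critical temperatures of Delta and Delta'

def free_energy(x, T):
    d, dp = x[:3], x[3:]                       # (Delta_i), (Delta'_i)
    D2, Dp2 = np.sum(d**2), np.sum(dp**2)
    f2 = alpha * (T - Tc) * D2 + alpha * (T - Tcp) * Dp2
    f3 = beta1 * d[0] * d[1] * d[2] + beta2 * (
        d[0] * dp[1] * dp[2] + dp[0] * d[1] * dp[2] + dp[0] * dp[1] * d[2])
    f4 = lam * (D2**2 + Dp2**2)
    fB = mu1 * B * np.dot(d, dp)               # magnetic-field term F^(B)
    return f2 + f3 + f4 + fB

def ground_state(T, n_tries=50, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_tries):                   # random restarts against local minima
        res = minimize(free_energy, rng.normal(scale=0.5, size=6),
                       args=(T,), method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    return best

for T in (0.9, 0.6, 0.3):
    gs = ground_state(T)
    d, dp = gs.x[:3], gs.x[3:]
    aniso = sum((d[i]**2 - d[j]**2)**2 for i in range(3) for j in range(3))
    print(f"T={T}: F={gs.fun:.4f}, |Delta_i|={np.round(np.abs(d), 3)}, "
          f"|Delta'_i|={np.round(np.abs(dp), 3)}, anisotropy={aniso:.4f}")
```

Scanning T and B with such a minimization is one way to trace numerically the solution boundaries and the field-induced shift sketched in the phase diagrams above.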
http://arxiv.org/abs/2307.03280v1
20230706203125
Neural network decoder for near-term surface-code experiments
[ "Boris M. Varbanov", "Marc Serra-Peralta", "David Byfield", "Barbara M. Terhal" ]
quant-ph
[ "quant-ph" ]
Corresponding author: b.m.varbanov@tudelft.nl Neural-network decoders can achieve a lower logical error rate compared to conventional decoders, like minimum-weight perfect matching, when decoding the surface code. Furthermore, these decoders require no prior information about the physical error rates, making them highly adaptable. In this study, we investigate the performance of such a decoder using both simulated and experimental data obtained from a transmon-qubit processor, focusing on small-distance surface codes. We first show that the neural network typically outperforms the matching decoder due to better handling errors leading to multiple correlated syndrome defects, such as Y errors. When applied to the experimental data of [Google Quantum AI, Nature 614, 676 (2023)], the neural network decoder achieves logical error rates approximately 25% lower than minimum-weight perfect matching, approaching the performance of a maximum-likelihood decoder. To demonstrate the flexibility of this decoder, we incorporate the soft information available in the analog readout of transmon qubits and evaluate the performance of this decoder in simulation using a symmetric Gaussian-noise model. Considering the soft information leads to an approximately 10% lower logical error rate, depending on the probability of a measurement error. The good logical performance, flexibility, and computational efficiency make neural network decoders well-suited for near-term demonstrations of quantum memories. Neural network decoder for near-term surface-code experiments Barbara M. Terhal August 1, 2023 ============================================================= § INTRODUCTION Quantum computers are anticipated to outperform classical computers in solving specific problems, such as integer factorization <cit.> and quantum simulation <cit.>. However, for a quantum computer to perform any meaningful computation, it has to be able to execute millions of operations, requiring error rates per operation lower than 10^-10 <cit.>. Despite a valiant experimental effort aimed at enhancing operational performance, state-of-the-art processors typically exhibit error rates per operation around 10^-3 <cit.>, which is far from what is needed to perform any useful computation. Fortunately, quantum error correction (QEC) provides a means to reduce the error rates, albeit at the cost of additional overhead in the required physical qubits <cit.>. Two-dimensional stabilizer codes <cit.>, such as the surface codes <cit.>, have emerged as a prominent approach to realizing fault-tolerant computation due to their modest connectivity requirements and high tolerance to errors <cit.>. These codes encode the logical information into an array of physical qubits, referred to as data qubits. Ancilla qubits are used to repeatedly measure parities of sets of neighboring data qubits. Changes between consecutive measurement outcomes, which are typically referred to as syndrome defects, indicate that errors have occurred. A classical decoder processes this information and aims at inferring the most likely correction. The increased number of available qubits <cit.> and the higher fidelities of physical operations <cit.> in modern processors have enabled several experiments employing small-distance codes to demonstrate the capacity to detect and correct errors <cit.>. 
In a recent milestone experiment, the error rate per QEC round of a surface-code logical qubit was reduced by increasing the code distance <cit.>, demonstrating the fundamental suppression achieved by QEC. The performance of the decoder directly influences the performance of a QEC code. Minimum-weight perfect matching (MWPM) is a good decoding algorithm for the surface code, which is computationally efficient and, therefore, scalable <cit.>. Its good performance is ensured under the assumption that the errors occurring in the experiment can be modeled as independent X and Z errors <cit.>. This leads to the MWPM decoder performing worse than decoders based on belief propagation <cit.> or a (more computationally-expensive) approximate maximum-likelihood decoder based on tensor-network (TN) contraction <cit.>. A more practical concern is that a decoder relies on a physical error model to accurately infer the most likely correction. Typically, this requires constructing an approximate model and a series of benchmarking experiments to extract the physical error rates. While there are methods to estimate the physical error rates based on the measured defects <cit.>, they typically ignore non-conventional errors like crosstalk or leakage. The presence of these errors can impact both the accuracy with which the physical error rates are estimated from the data and the performance of the decoder itself <cit.>. An alternative approach to decoding is based on using neural networks (NN) to infer the most likely correction given a set of measured defects <cit.>. These decoders do not require any prior information about the error model and therefore alleviate the need to construct any error model, making them highly adaptable. This flexibility comes at the cost of requiring a significant amount of data for training the network and optimizing the hyper-parameters to ensure that the optimal performance of the decoder is reached during training. Despite the potential issues during the training, it has been shown that they can match and generally exceed the performance of MWPM decoders, in several cases achieving near-optimal performance <cit.>. Depending on the NN architecture employed, these decoders can be scalable and run in real time <cit.>. While decoders based on recurrent NNs are more computationally expensive, they enable the decoding of experiments performing a variable number of stabilizer measurement rounds <cit.>, making them well-suited for decoding near-term memory <cit.> and stability experiments <cit.>. In this work, we assess the performance of a neural-network decoder using both simulated and experimental data. Our work goes beyond <cit.> and previous NN decoding works in applying and partially training a NN decoder for the first time on data from a surface-code experiment <cit.>, thus capturing realistic performance and showing the versatility of NN decoders. In addition, we go beyond <cit.> in training the NN decoder for a distance-7 surface code and extract its exponential error suppression factor on simulated data. Thirdly, we show that our NN decoder can be trained with (simulated) soft measurement data and get a performance enhancement. We begin by simulating the performance of a d=3 surface code using a circuit-level noise model to show that the NN decoder outperforms MPWM by learning to deal with Y errors, as previous studies have suggested <cit.>. Next, we investigate the performance of the NN decoder when applied to data from a recent surface code experiment <cit.>. 
Due to the limited volume of available experimental data, we train the NN decoder on simulated data generated using an error model based on the measured physical error rates. However, we evaluate the decoder's performance on simulated and experimental data. The NN decoder significantly outperforms MWPM when decoding simulated data and achieves a lower logical error rate for the d=5 code than the constituent d=3 codes. When evaluated on experimental data, the NN decoder achieves a performance approaching that of a tensor-network decoder, which approximates a maximum-likelihood decoder. However, contrary to the finding in <cit.>, the logical error rate observed in the d=5 experiment is higher than the average of each of the d=3 experiments, which we attribute to either a sub-optimal choice of hyper-parameters or the mismatch between the simulated data that the decoder was trained on and the experimental data. To further explore the performance of NNs, we consider the continuous information available in the measurement outcomes of transmon qubits <cit.>, typically referred to as soft information <cit.>. By calculating the defect probabilities given the soft outcomes and providing them to the neural network during training and evaluation, we demonstrate that the soft decoder can achieve an approximately 10% lower logical error rate if the measurement error probability is sufficiently high. § BACKGROUND §.§ The surface code A (rotated) surface code encodes a single logical qubit into a two-dimensional array of n = d× d physical qubits, referred to as data qubits, where d is the distance of the code. The logical state of the qubit is determined by the stabilizers of the code, which are the weight-four or weight-two X-type (blue plaquettes) or Z-type (green plaquettes) Pauli operators, see <ref>. In addition to the stabilizers, the code is given by a pair of anti-commuting logical operators, X_L and Z_L, which commute with the code stabilizers. The stabilizers are typically measured indirectly with the help of n-1 ancilla qubits. To perform this measurement, each ancilla coherently interacts with its neighboring data qubits in a specific order <cit.>, after which the ancilla qubit is measured and reset. The stabilizer measurement outcomes are typically referred to as the syndromes and hold information about the errors that have occurred. The full circuits used to perform these measurements are shown in <ref>. In particular, we use the circuits used in <cit.>, which feature several echo gates used for dynamical decoupling in the experiment, see <ref> for additional details. To characterize the performance of the code, we perform a series of logical memory experiments. In each experiment, the physical qubits are prepared in an eigenstate of either the X_L (resp. Z_L) logical operator, after which N-1 rounds of stabilizer measurements are executed. The experiment is concluded by reading out each data qubit in the X (resp. Z basis), which also performs a logical X_L (resp. Z_L) measurement. The goal of each experiment is to maintain the logical state for as many QEC rounds as possible by using error correction, see <ref> for more details. The information about errors is contained in the stabilizer measurement outcome m_r, a of ancilla a at round r. The final data qubit measurements can also be used to infer a final set of outcomes m_r=N, a for either the X-type or Z-type stabilizers. 
The defects d_r, a = m_r, a⊕ m_r - 1, a isolate the changes in m_r, a such that an error is signaled by an observation of one or more d_r, a = 1. The choice of initial state and the dynamical decoupling gates can also flip some of the measured m_r, a, which is accounted for when calculating d_r, a. A decoder processes the observed d_r,a to infer a correction for the measured logical observable. By repeating each experiment many times, we extract the probability of a logical error p_L( r ) at QEC round r, from which we calculate the logical fidelity F_L( r ) = 1 – 2p_L( r ), which decays exponentially with the number of executed QEC rounds. We model this decay as F_L( r ) = ( 1 – 2ε_L )^r-r_0, where ε_L is the logical error rate per QEC round and r_0 is a fitting constant. When fitting the decay of F_L( r ) to extract ε_L, we start the fit at r=3 to avoid any time-boundary effects that might impact this estimate. §.§ Error models To explore the performance of the NN decoder, we perform simulations using circuit-level Pauli-noise models. For most of our simulations, we consider a depolarizing circuit-level noise, which is defined as * After each single-qubit gate or idling period, with a probability p/3, we apply an error drawn from {X, Y, Z}. * After each two-qubit gate, with a probability p/15, we apply an error drawn from {I, X, Y, Z}^⊗ 2∖{ II }. * With a probability p, we apply an X error before each measurement. * With a probability p, we apply an X error after each reset operation or after the qubits are first prepared at the start of an experiment. In some of our simulations, we consider noise models that are biased to have a higher or a lower probability of applying Y errors. To construct this model, we define a Y-bias factor η and modify the standard depolarizing circuit-level noise model, as follows: * After each single-qubit gate or idling period, there is a probability η p / (η + 2) to apply a Y error and a probability p / (η + 2) to apply an X or a Z error. * After each two-qubit gate, there is a probability η p / (7η + 8) of applying an error drawn from 𝒫_B = { IY, XY, YI, YX, YY, YZ, ZY} and a probability p / (7η + 8) of applying an error drawn from {I, X, Y, Z}^⊗ 2∖ (𝒫_B∪{ II}). This biased error model is a generalization of the depolarizing model. In particular, choosing η = 1 makes this noise model equivalent to the depolarizing one. On the other hand, when η = 0, the model leads to only X or Z errors applied after operations. In the other limiting case, as η→∞, the model applies only Y errors after idling periods and gates. Given that the error probability is the same across all operations of the same type, we will refer to these error models as uniform circuit-level noise models. Finally, we also perform simulations of the recent experiment conducted by Google Quantum AI, using the error model which they provided together with the experimental data <cit.>. This is once again a circuit-level Pauli-noise model similar to the ones presented above, but the probability of a depolarizing error after each operation is based on the measured physical error rates. We will refer to this model as the experimental circuit-level noise model. We use stim <cit.> to perform the stabilizer simulations. We have written a wrapper package that helps with constructing the circuit for each experiment, which is available in <cit.>. We use pymatching <cit.> for the MWPM decoding. 
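As an illustration of this simulation-and-decoding pipeline, the sketch below uses stim's built-in rotated-surface-code generator together with pymatching; it is not the authors' wrapper package, and the generator's uniform depolarizing noise is closely related to, but not identical with, the model defined above. The code distance, number of rounds, error probability, and shot count are placeholder choices. The last lines show how the Y-biased single-qubit channel defined above can be written as an explicit stim instruction.

```python
import numpy as np
import stim
import pymatching

# Rotated surface-code memory experiment with uniform circuit-level noise
# (stim's built-in circuit generator).
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=25,
    after_clifford_depolarization=1e-3,
    after_reset_flip_probability=1e-3,
    before_measure_flip_probability=1e-3,
)

# Sample defects (detection events) and the final logical observable.
defects, observables = circuit.compile_detector_sampler().sample(
    shots=20_000, separate_observables=True)

# MWPM decoding; the edge weights come directly from the circuit's detector
# error model through the stim-pymatching integration.
dem = circuit.detector_error_model(decompose_errors=True)
matching = pymatching.Matching.from_detector_error_model(dem)
predictions = matching.decode_batch(defects)

p_L = np.mean(np.any(predictions != observables, axis=1))
print(f"logical error probability after 25 rounds: {p_L:.4f}")

# The Y-biased single-qubit channel from the text as a stim instruction,
# with argument order (p_X, p_Y, p_Z):
eta, p = 2.0, 1e-3
biased = stim.Circuit()
biased.append("PAULI_CHANNEL_1", [0],
              [p / (eta + 2), eta * p / (eta + 2), p / (eta + 2)])
```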
The weights used in the MWPM decoder are directly extracted from the sampled circuit using the built-in integration between stim and pymatching. §.§ Neural network architecture Here we describe the NN architecture that we employ in this work, which nearly exactly follows the one proposed in <cit.>. Many NN decoders studied previously are based on feed-forward or convolutional NN architecture. These decoders can generally decode experiments running a fixed number of QEC rounds. Decoders based on recurrent NN architectures, on the other hand, can learn the temporal correlations in the data, allowing them to directly process experiments performing a variable number of QEC rounds. We have used the TensorFlow library <cit.> to implement the NN architecture, with the source code of the decoder available in <cit.>, the parameters used for each training are listed in <ref>, while the scripts that perform the training are available upon request. The NN architecture takes as input the defects d_a,r with r = 1, 2, …, N. The decoder solves a binary classification problem and determines whether a correction of the logical observable is required based on the observed defects. In practice, the architecture is based on a two-headed network that makes two predictions p_main and p_aux, which are used to improve the training of the network, see Fig. <ref>. To train a decoder, a series of memory experiments are performed. Since the logical qubit is prepared in a known logical state and measured at the end of each experiment, it is possible to extract the actual value p_true∈{0,1} of whether a correction is required or not. In particular, the cost function I that the network attempts to minimize during training is the weighted sum of the binary cross-entropies between each prediction and p_true, expressed as I = H(p_main, p_true) + w_a H(p_aux, p_true), where w_a is a weight that is typically chosen as w_a = 0.5 in our runs, while H(p_i, p_j) = - p_ilog p_j - (1 - p_i)log (1 - p_j) is the binary cross-entropy function. The choice behind this loss function is elaborated below. <ref> schematically illustrates the architecture of the recurrent network. The recurrent body of the neural network consists of two stacked long short-term memory (LSTM) layers. Each LSTM layer is defined by a pair of internal memory states: a short-term memory, referred to as the hidden state, and a long-term memory, referred to as the cell state. Here, we use the same internal states size N_L for both LSTM layers <cit.>, with N_L = 64, 96, 128 for surface codes of distance d=3,5,7, unless otherwise specified. The LSTM layers receive the defects for each QEC round as input, calculated from both the X-type and the Z-type stabilizer measurement outcomes. The first LSTM layer outputs a hidden state for each QEC round, which is then provided as input to the second LSTM layer, which outputs only its final hidden state. A rectified linear unit (ReLU) activation function is applied to the output of the second LSTM layer before being passed along to each of the two heads of the network. The heads of the network are feed-forward evaluation networks consisting of a single hidden layer of size N_L using the ReLU activation function and an output layer using the sigmoid activation function, which maps the hidden layer output to a probability used for binary classification. The output of the recurrent part of the network is directly passed to the lower head of the network, which uses this information to predict a probability p_aux of a logical error. 
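A minimal Keras sketch of this two-headed recurrent architecture is given below; as described in the next paragraph, the upper head additionally receives the defects computed from the final data-qubit measurements. The input sizes correspond to the d=3 case (8 ancilla defects per round, 4 final defects); dropout and the remaining hyper-parameters listed in the appendix are omitted, and the authors' actual implementation may differ in detail.

```python
import tensorflow as tf

def build_decoder(num_anc_defects=8, num_final_defects=4, N_L=64, w_a=0.5):
    """Two stacked LSTMs followed by two feed-forward evaluation heads."""
    # Per-round ancilla defects; `None` allows a variable number of QEC rounds.
    defects = tf.keras.Input(shape=(None, num_anc_defects), name="defects")
    # Defects inferred from the final data-qubit measurements (upper head only).
    final_defects = tf.keras.Input(shape=(num_final_defects,), name="final_defects")

    x = tf.keras.layers.LSTM(N_L, return_sequences=True)(defects)
    x = tf.keras.layers.LSTM(N_L, return_sequences=False)(x)
    x = tf.keras.layers.Activation("relu")(x)

    # Lower head: recurrent output only -> p_aux.
    h_aux = tf.keras.layers.Dense(N_L, activation="relu")(x)
    p_aux = tf.keras.layers.Dense(1, activation="sigmoid", name="p_aux")(h_aux)

    # Upper head: recurrent output concatenated with the final defects -> p_main.
    h_main = tf.keras.layers.Concatenate()([x, final_defects])
    h_main = tf.keras.layers.Dense(N_L, activation="relu")(h_main)
    p_main = tf.keras.layers.Dense(1, activation="sigmoid", name="p_main")(h_main)

    model = tf.keras.Model(inputs=[defects, final_defects],
                           outputs=[p_main, p_aux])
    # Loss I = H(p_main, p_true) + w_a * H(p_aux, p_true).
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss={"p_main": "binary_crossentropy",
                        "p_aux": "binary_crossentropy"},
                  loss_weights={"p_main": 1.0, "p_aux": w_a})
    return model

model = build_decoder()   # d=3: 8 ancilla defects per round, 4 final defects
model.summary()
```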
The upper head also considers the defects inferred from the data qubit measurements, which are combined with the recurrent output and provided as input. Therefore, unlike the lower head, the upper one uses the full information about the errors that have occurred when making its prediction p_main of whether a logical error occurred. Both p_main and p_aux are used when training the network, which helps the neural network to generalize to handle longer input sequences. However, only p_main is used when evaluating the performance of the decoder. We provide additional details about the training procedure in <ref> and list the hyper-parameters of the network in <ref>. § RESULTS §.§ Performance on circuit-level noise simulations We first demonstrate that the NN decoder can achieve a lower logical error rate than the MWPM decoder by learning error correlations between the defects, which are otherwise ignored by the MPWM decoder. We consider the Y-biased circuit-level noise model described previously, parameterized by the bias η towards Y errors and a probability p = 0.001 of inserting an error after each operation. We use this noise model to simulate the performance of a d=3 surface-code quantum memory experiment in the Z-basis, initially preparing either |0⟩^⊗ n or |1⟩^⊗ n. To train the NN decoder, we generated datasets of r = 1, 5, … , 37 QEC rounds, sampling 5×10^5 shots for each round and initial state. When evaluating the decoder's performance, we simulate the code performance over r = 10, 30, … , 290 QEC rounds and sample 2×10^4 shots instead. To benchmark the logical performance, we calculate the logical fidelity F_L at the end of each experiment. Averaging F_L over each initial state, we fit the exponential decay of F_L with the number of QEC rounds to extract the logical error rate per round ε_L. <ref> shows that the NN decoder maintains a constant ε_L when evaluated on datasets going up to 300 QEC rounds, demonstrating the ability of the decoder to generalize to significantly longer sequences than those used for training. On the other hand, the NN decoder achieves about 20% lower ε_L compared to the MWPM decoder. We then evaluate the trained NN decoder on simulated data using η∈{0, 0.5, 1, 2, 10, 100} and keep all other parameters the same without training any new neural networks, with the resulting error rates shown in <ref>b. At η = 0, corresponding to an error model leading to X and Z errors, the NN decoder displays a higher ε_L than the MWPM decoder. For η≥ 0.5, the NN decoder instead demonstrates a lower logical error, with the relative reduction increasing with the bias. This demonstrates that the NN decoder can achieve a lower logical error rate by learning the correlations between the defects caused by Y errors, consistent with the results presented in <cit.>. The NN decoder can achieve an even lower logical error rate at a bias of η=100 by being trained on a dataset generated using this bias (referred to as the adapted NN decoder in <ref>). On the other hand, training a model for η = 0 does not lead to any improvement in ε_L of the NN decoder, showing that the MWPM decoder is more optimal in this setting. §.§ Performance on experimental data Next, we evaluate the performance of the NN decoder on experimental data available from the recent experiment executed by Google Quantum AI <cit.>, where a 72-qubit quantum processor was used to implement a d=5 surface code as well as the four d=3 surface codes which use a subset of the qubits of the larger code. 
The stabilizer measurement circuits used in that experiment are the same as those shown in <ref>fig:circuit. For each distance-d surface code, the data qubits are prepared in several random bitstrings, followed by r=25 rounds of stabilizer measurement, followed by a logical measurement, with experiments performed in both the X-basis and Z-basis. The experiment demonstrated that the d=5 surface code achieves a lower ε_L compared to the average of the four constituent d=3 patches when using a tensor-network (TN) decoder, an approximation to a maximum-likelihood decoder. We find that training a NN decoder to achieve good logical performance requires a large number of shots (approximately 10^7 in total or more) obtained from experiments preparing different initial states and running a different number of rounds. As the amount of experimental data is too small to train the NN decoder (the total number of shots being 6.5× 10^5), we instead opt to simulate the experiments using the Pauli error model based on the measured error rates of each operation, available in <cit.>. Keeping the same number of rounds and prepared state, we generate a total of 2×10^7 shots for training the decoder for each d=3 experiment and 6×10^7 to train the decoder for the d=5 experiment, see <ref>. While we train the network on simulated data, we still evaluate the decoder performance on both simulated and the experimental data, with the results shown in <ref>fig:google_performancea and <ref>fig:google_performanceb respectively. Both the training and evaluation data consist of r = 1, 3, … , 25 rounds of QEC and consider the same initial states. When evaluating the NN decoder on simulated data, we observe that the d=5 code achieves a lower ε_L compared to the average of the d=3 codes, see <ref>fig:google_performancea. Evaluating the decoder on the experimental data leads to an approximately 15% (40%) higher ε_L for the d=3 (d=5) code, demonstrating that the approximate error model used in simulation fails to fully capture the errors in the experiment. Furthermore, we observe that the d=5 has a higher ε_L instead, see <ref>fig:google_performancea, contrary to what was demonstrated in <cit.> using a tensor-network decoder. In order to put the performance of the NN decoder in perspective, in <ref>fig:error_rate_comparison, we compare the logical performance of the NN decoder to the performance of several other decoders that were also implemented in <cit.>. We perform this comparison both on simulated (see <ref>fig:error_rate_comparisona) and experimental (see <ref>fig:error_rate_comparisonb) data. We find that the NN decoder consistently outperforms the standard MWPM decoder in either case. On the experimental dataset, the NN decoder performs equivalent to the TN decoder when decoding the d=3 surface codes. However, when decoding the d=5 surface code experiment, the NN decoder displays a higher ε_L than the TN decoder and the computationally efficient belief-matching (BM) decoder <cit.>. When evaluated on simulated data, the NN and BM decoders exhibit similar error rates, with the NN decoder again demonstrating better performance when decoding the d=3 code but worse when dealing with the d=5 code. The BM decoder we use for the simulated data is described in <cit.> and uses the belief propagation implemented in <cit.>. 
The higher error rate of the NN decoder for the d=5 code in both simulation and experiment can be related to the difficulty of optimizing the performance of the substantially larger NN model used (see <ref> for the model hyper-parameters). However, the discrepancy in the experiment can also be attributed to a mismatch between the simulated data used for training (based on an approximate error model) and the experimental data used for evaluation. Compared to the d=3 surface code data, the accumulation of qubit leakage can cause the d=5 performance to degrade faster over the QEC rounds <cit.>. We expect that training on experimental data and a better hyper-parameter optimization will enable a NN performance comparable to state-of-the-art decoders like BM and TN while offering additional flexibility to the details of the noise model. Compared to the TN decoder, both NN and BM can achieve similar logical performance while remaining significantly faster, and if their implementation is optimized, they can potentially be used to decode experiments in real time. §.§ Logical error rate suppression An exponential suppression of the logical error rate, assuming that the physical error rates are below `threshold', is vital for realizing a fault-tolerant quantum computer. We explore the error suppression achieved when using the NN decoder. We characterize the logical performance of d = 3, 5, 7 surface codes simulated using a uniform depolarizing circuit-level noise model with an error probability of p=0.1%, close to the state-of-the-art physical error rates achieved in the experiment. To train the NN decoder, we use data generated using this error probability. We find that also training using a higher probability of p=0.2% leads to a significantly lower logical error rate for the d=7 code. Furthermore, we evaluate the performance of the NN decoder on data simulated using p=0.05%, which is an example of the physical error rate needed to achieve practical sub-threshold scaling of the error rate. For each distance d and error probability p, we perform simulations of memory experiments in the Z-basis with varying numbers of QEC rounds, going up to 600 rounds for the d=7 code with an error rate of p=0.05%, to extract the logical error rate per round ε_L. The logical error rates obtained when using an MWPM decoder are shown in <ref>a, while those achieved by the NN decoder are shown in <ref>b. If the physical error rate is below threshold, ε_L is expected to decay exponentially with the code distance d, following ε_L(d) = C / Λ^(d+1)/2, where Λ is the suppression factor and C is a fitting constant. The data show an apparent exponential suppression of the error rates by either decoder for the considered error rates, which we fit to extract the suppression factor Λ, shown in <ref>. In either case, the NN decoder achieves better logical performance compared to the MWPM decoder. While for p=0.1% the NN decoder achieves an approximately 13% higher Λ, for p=0.05% the more accurate NN decoder leads to a roughly twice as high Λ. The higher suppression factors Λ obtained from using better decoders significantly reduce the code distance required to achieve algorithmically-relevant logical error rates. For example, for an error rate of p=0.05%, realizing ε_L ≈ 10^-10 would require a d=19 surface code when using the MWPM decoder and d=15 when using the NN decoder, corresponding to roughly 40% fewer physical qubits required. 
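The two fits used here take only a few lines of scipy. The sketch below uses synthetic placeholder numbers for F_L(r) and for the per-distance error rates; only the functional forms F_L(r) = (1 - 2ε_L)^(r - r_0) and ε_L(d) = C / Λ^(d+1)/2 are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the logical-fidelity decay F_L(r) = (1 - 2*eps_L)**(r - r0); the fit is
# started at r = 3 to avoid time-boundary effects.  The fidelity values below
# are synthetic placeholders, not data from the paper.
rounds = np.arange(10, 300, 20)
F_L = 0.99 * (1 - 2 * 2.7e-3) ** rounds

def fidelity_decay(r, eps_L, r0):
    return (1 - 2 * eps_L) ** (r - r0)

mask = rounds >= 3
(eps_L, r0), _ = curve_fit(fidelity_decay, rounds[mask], F_L[mask], p0=(1e-3, 0.0))
print(f"logical error rate per round: {eps_L:.2e}")

# Fit of the sub-threshold scaling eps_L(d) = C / Lambda**((d + 1) / 2); the
# per-distance error rates below are synthetic placeholders as well.
distances = np.array([3, 5, 7])
eps_per_d = np.array([2.7e-3, 8.1e-4, 2.5e-4])

def suppression(d, C, Lam):
    return C / Lam ** ((d + 1) / 2)

(C, Lam), _ = curve_fit(suppression, distances, eps_per_d, p0=(1e-2, 3.0))
print(f"suppression factor Lambda: {Lam:.2f}")
```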
However, whether the NN can continue to exhibit similar performance when decoding higher-distance codes remains to be demonstrated. §.§ Decoding with soft information Measurements of physical qubits generally produce a continuous signal that is subsequently converted into declared binary outcomes by classical processing and thresholding. For example, transmon qubits are dispersively coupled to a dedicated readout resonator, which itself is connected to a readout feedline. Readout is performed by applying a microwave pulse to the feedline, populating the readout resonator. Due to a state-dependent shift of the resonator frequency, the outgoing signal is phase-shifted depending on whether the qubit is in the state |0⟩ or |1⟩. This leads to a change in the real and imaginary components of the outgoing signal, which is experimentally measured. This two-dimensional output can be transformed into a single continuous real variable and converted to a binary outcome by applying some threshold calibrated using a separate experiment <cit.>. While binary variables are convenient to work with and store, continuous measurement outcomes hold much more information about the state of the qubit, referred to as soft information. It has been demonstrated that an MWPM-based decoder that considers the soft information of the individual measurements when decoding offers higher thresholds and lower logical error rates than a hard decoder, which only considers the binary outcomes <cit.>. To demonstrate the flexibility of machine-learning decoders, we consider providing the soft information available from readout when training and evaluating the NN decoder. In our simulations, measurements project the qubit into either |0⟩ or |1⟩. A measurement outcome m_r,q = i of qubit q at round r corresponds to the ancilla qubit being in |i⟩ directly after the measurement. Given m_r,q = i, we model the soft outcome m̃_r, q∈ℝ to follow a Gaussian distribution 𝒩_i with mean μ_i and standard deviation σ. The soft outcome m̃_r,q can then be converted to a binary outcome m̅_r, q by introducing a threshold t, such that m̅_r,q = 0 if m̃_r, q≤ t and m̅_r,q = 1 otherwise. For the symmetric Gaussian distributions that we consider, this process leads to an assignment error probability P(m̅_r,q = 0 | m_r,q = 1) = P(m̅_r,q = 1 | m_r,q = 0) = p_m. This assignment error is added to the errors considered in our circuit-level noise models, specifically the X error before each measurement that happens with a probability p. The assignment error probability can be related to the signal-to-noise ratio SNR=| μ_0 - μ_1 | / (2σ) as p_m = (1/2) erfc( SNR/√(2)). We fix μ_0 = -1 and μ_1 = 1 such that a given probability p_m fixes the standard deviation σ of the two distributions. The most straightforward approach to incorporating the soft information into the NN decoder is to directly provide the soft measurement outcomes m̃_r,q as input during training and evaluation. However, we find that doing this leads to an overall poor logical performance. Instead, we estimate the probability of a defect P(d_r,a = 1 |m̃_r,a, m̃_r-1,a), given the soft measurement outcomes of an ancilla qubit a in consecutive QEC rounds. Given a soft outcome m̃_r,q, the probability of the measured qubit having been in the state |i⟩ can be expressed as P(i |m̃_r,q) = P(m̃_r,q| i)P(i)/∑_j ∈{0, 1} P(m̃_r,q| j) P(j). The soft outcomes follow a Gaussian distribution, that is, P(m̃_r,q| i) = 𝒩_i(m̃_r,q). 
Finally, we make the simplifying assumption that the prior state probabilities P(i) = P(j) = 1/2, such that P(i |m̃_r,q) = 𝒩_i(m̃_r,q)/∑_j ∈{0, 1}𝒩_j(m̃_r,q). The probability of observing a defect can then be expressed as P(d_r,a = 1 |m̃_r,a, m̃_r-1,a) = 1 - ∑_i ∈{0,1}P(i |m̃_r,a)P(i |m̃_r-1,a). The expression for the defect probability inferred from using the soft (final) data qubit measurement outcomes can be derived similarly. To explore the performance of the soft NN decoder, we simulate the d=3 surface-code memory experiment using a circuit-level noise model with an error rate per operation of p=0.1%. We consider two separate assignment error probabilities p_m^a and p_m^d for ancilla qubit and data qubit measurements. We motivate this choice by the fact that data qubits remain idling while the ancilla qubits are being measured. A shorter measurement time can reduce the decoherence experienced by the data qubits but will typically lead to a higher p_m^a. The data qubit measurements at the end of the experiment, on the other hand, can be optimized to minimize p_m^d. Therefore, we focus on how a soft decoder can help with decoding when p_m^a is higher, similar to the discussion in <cit.>. We train the NN decoder using datasets of r = 1, 5, … , 37 QEC rounds, sampling 5×10^5 shots for each round and initial logical state. When evaluating the performance, we simulate r = 10, 30, … , 150 QEC rounds, sampling 5×10^4 shots instead. The results for p_m^a=1% are shown in <ref>a. The hard NN decoder achieves an approximately 20% lower logical error rate compared to an MWPM decoder, consistent with the results shown in <ref>. In comparison, the soft NN decoder leads to an approximately 30% lower logical error rate instead, demonstrating the ability of the decoder to adapt to the provided soft information. In <ref>b, the logical error rate ε_L of the three decoders is shown for p_m^a∈{0, 0.1%, 1%, 10%}, where both NN decoders are trained at the corresponding p_m^a. For low p_m^a, the performance of the soft NN decoder is essentially equivalent to that of the hard NN decoder, with a moderate reduction in ε_L achieved for p_m^a≥ 1%. It is possible that the probability of defects is not the optimal way to provide the soft information to the decoder. One downside of this representation is that for a high assignment error probability p_m^a≥ 20%, the probability of observing a defect is close to 50%, which also impacts the training and leads the soft NN decoder to exhibit a higher logical error rate compared to the hard one (not shown in <ref>). Optimizing the performance of the soft NN decoder and comparing it to alternative approaches, namely the soft MWPM decoder proposed in <cit.>, remains an open question. § DISCUSSION We now discuss in more detail the performance of the NN decoder on the experimental data. Unfortunately, we only use simulated data to train the NN decoder throughout this work. These simulations use approximate Pauli-noise models that account for the most significant error mechanisms in the experiment, such as decoherence and readout errors. However, they do not include several important error sources present in the actual experiments, such as leakage, crosstalk, and stray interactions. The exclusion of these error mechanisms leads to the Pauli-noise models underpredicting the logical error rate compared to the rates observed in the experiment, as observed in <ref>. 
Furthermore, it was shown that the d=5 code is more sensitive to errors like leakage and crosstalk, which can lead to a more significant deviation from the simulations than for the d=3 codes <cit.>. Despite using these approximate models for training, when evaluating the NN decoder on experimental data, we observe that it outperforms MWPM and can achieve logical error rates comparable to those obtained using maximum-likelihood decoding, which is approximated by the TN decoder. The TN decoder requires information about the error probabilities, what defects they lead to, and their corresponding corrections, which can be encoded into a hypergraph, where the nodes correspond to defects and the hyperedges represent errors. Importantly, this hypergraph also does not explicitly include hyperedges corresponding to non-conventional errors, such as leakage or crosstalk. We expect that training on experimental data and optimizing the hyper-parameters of the network will enable it to match the performance of the TN decoder closely and potentially exceed it by learning about errors not included in the hypergraph. Despite the large volume of training data required to achieve good performance, we do not expect that generating sufficient experimental data for training will be an issue. Assuming that the QEC round duration is 1 μs and that it takes 200 ns to reset all qubits between subsequent runs, we estimate that it would take approximately three minutes to generate the datasets with 10^7 shots running r=1, 5, …, 37 rounds of QEC that were used for training the d=3,5,7 surface codes, see <ref>. The soft NN decoder used in this work achieves only a moderate performance increase. A direct comparison of this decoder with the soft MWPM decoder <cit.> will be useful to put this performance into perspective. It is possible that using the defect probabilities as the decoder input is not an optimal choice. An alternative approach to incorporating the soft information into the decoder is to estimate the likelihood of an assignment error L_r, a = 𝒩_1-i(m̃_r, a) / 𝒩_i(m̃_r, a) given a soft outcome m̃_r, a that leads to a hard outcome of m̅_r, a = i, which is used by the soft MWPM decoder proposed in <cit.>. The likelihoods L_r, a can then be provided as input to the NN decoder together with the binary defects d_r, a that were measured. In addition to the representation of the input data, it is an open question whether using a soft NN decoder will be useful in practice, where assignment error rates are typically low. Specifically, it would be interesting to see if using a soft NN decoder will enable the use of a shorter measurement time that might lead to a higher assignment error rate but maximize the logical performance overall, as discussed in <cit.>. The symmetric Gaussian distributions of the continuous measurement outcomes we consider here are only very simple approximations of the distributions seen in experiments, and in our modeling we could adapt these. In particular, the relaxation that the qubit experiences during the readout leads to an asymmetry between the distributions and a generally higher probability of an assignment error when the qubit was prepared in |1⟩. Furthermore, the continuous outcomes observed in the experiment can also contain information about leakage <cit.> or correlations with other measurements. Therefore, it will be essential to investigate and optimize the performance of the soft decoders using experimental data. 
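The soft-information processing described above, both the defect-probability construction and the assignment-error likelihood ratio, can be written down compactly. The sketch below assumes the symmetric-Gaussian readout model with equal priors; the value of p_m, the symmetric threshold t = 0, and the sampled outcomes are placeholders, and the processing of real experimental soft outcomes may differ.

```python
import numpy as np
from scipy.special import erfcinv
from scipy.stats import norm

mu0, mu1 = -1.0, 1.0           # means of the two Gaussian outcome distributions
p_m = 0.01                     # target assignment error probability
# Invert p_m = (1/2) erfc(SNR / sqrt(2)) for the SNR, then sigma = |mu0 - mu1| / (2 SNR).
snr = np.sqrt(2.0) * erfcinv(2.0 * p_m)
sigma = abs(mu1 - mu0) / (2.0 * snr)

def state_probs(soft):
    """P(i | soft outcome) for i = 0, 1, assuming equal priors P(0) = P(1) = 1/2."""
    lik = np.stack([norm.pdf(soft, mu0, sigma), norm.pdf(soft, mu1, sigma)])
    return lik / lik.sum(axis=0)

def soft_defect_prob(soft_curr, soft_prev):
    """P(d = 1) from the soft outcomes of the same ancilla in consecutive rounds."""
    return 1.0 - (state_probs(soft_curr) * state_probs(soft_prev)).sum(axis=0)

def assignment_error_likelihood(soft):
    """Likelihood ratio N_{1-i}(soft) / N_i(soft), with i the hard (thresholded) outcome."""
    hard = (soft > 0.5 * (mu0 + mu1)).astype(int)   # symmetric threshold t = 0
    n0, n1 = norm.pdf(soft, mu0, sigma), norm.pdf(soft, mu1, sigma)
    return np.where(hard == 0, n1 / n0, n0 / n1)

# Toy example: five pairs of consecutive soft outcomes for one ancilla qubit.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=5)
soft_r = np.where(bits == 1, mu1, mu0) + rng.normal(0.0, sigma, size=5)
soft_rm1 = np.where(bits == 1, mu1, mu0) + rng.normal(0.0, sigma, size=5)
print(np.round(soft_defect_prob(soft_r, soft_rm1), 3))
print(np.round(assignment_error_likelihood(soft_r), 3))
```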
Finally, we outline some possible directions for future research necessary to use these decoders for decoding large-distance experiments. Decoders based on feedforward and convolutional architectures have been shown to achieve low-latency decoding, making them a possible candidate for being used in real time <cit.>. On the other hand, recurrent networks generally have a larger number of parameters and carry out more complex operations when processing the data. However, recurrent NN decoders have been shown to achieve higher accuracy and be more easily trainable than other architectures, especially when considering realistic noise models <cit.>. Therefore, whether hardware implementations of recurrent NN decoders can be used for real-time decoding is an open question. In addition to the latency, the scalability of NN decoders is an open question. Decoding higher-distance codes will require larger neural networks and larger training datasets, which will most likely be more challenging to train, given that approaches based on machine learning generally struggle when the dimension of the input becomes very large. Practically, one might be interested in whether the NN decoder can be trained and used to decode some finite code distance, which is expected to lead to algorithmically-relevant logical error rates given the processor's performance. Alternatively, there exist approaches that enable scalable NN decoders. These are typically based on convolutional neural networks that learn to infer and correct the physical errors that have occurred while a secondary global decoder handles any possibly remaining errors <cit.>, but a purely convolutional NN method has been explored as well <cit.>. The recurrent NN decoder used in this work is not scalable, and adapting it to work with larger code distances and using it to decode through logical operations is another open research venue. Lastly, while preparing this manuscript, we became aware of a similar work <cit.> that explores the performance of a graph neural network decoder on data from the repetition code experiment that was also done in <cit.>. § ACKNOWLEDGMENTS We are grateful to Earl Campbell for insightful discussions and for comments on the manuscript. We also thank Laura Caune for implementing the belief-matching decoder that we have used in this work. B. M. V. and B. M. T. are supported by QuTech NWO funding 2020-2024 – Part I “Fundamental Research” with project number 601.QT.001-1. § DATA AND SOFTWARE AVAILABILITY The data and software that support the plots presented in this figure are available at <cit.>. The raw simulated data and the scripts used for training and decoding this data are available upon reasonable request. § APPENDIX §.§ Quantum memory experiments To characterize the logical performance of a surface code, we look at its ability to maintain an initial logical state as a function of the number of QEC rounds, commonly referred to as a quantum memory experiment. The circuits used to perform these experiments are illustrated in <ref> and follow the ones used in the recent d=5 surface code experiment done by Google Quantum AI <cit.>. Removing some of the Hadamard gates when compiling the stabilizer measurement circuits leads to each ancilla qubit measuring the ZXXZ operator instead of the standard XXXX and ZZZZ stabilizers of the surface code. Implementing this ZXXZ variant of the surface code symmetrizes the logical error rates between experiments done in the logical X-basis or Z-basis <cit.>. 
Despite this modification, we use notations associated with the traditional stabilizers measured by the surface code. Each experiment begins by preparing a given logical state, performed by the circuits in <ref> a-d. The data qubits are first initialized in the ground state and then prepared in either |0⟩ or |1⟩ by a layer of conditional X gates. A subset of the data qubits is then rotated and transforms the initial state into an eigenstate of the X- or Z-type stabilizers. The parity of the initial bistring state determines whether |0⟩_L or |1⟩_L (|+⟩_L or |-⟩_L) is prepared if the experiment is done in the Z-basis (X-basis). In simulation, we prepare either |0⟩^⊗ n or |1⟩^⊗ n when using uniform circuit-level noise models. In the experiment, several random bitstring states are used in order to symmetrize the impact of amplitude damping <cit.>. The prepared logical state is then maintained over a total of r ∈{1, 2, …, N-1 } QEC rounds, with the circuit given by <ref> e-o. The first QEC round then projects this initial state into a simultaneous eigenstate of both the X- or Z-type stabilizers. Each cycle involves a series of four interactions between each ancilla qubit and its neighboring data qubits, which map the X or Z parity onto the state of the ancilla qubit. The order in which these two-qubit operations are executed is carefully chosen to minimize the impact of errors occurring during the execution of the circuit <cit.>. At the end of each QEC round, all of the ancilla qubits are measured and reset. The stabilizer measurement circuits also contain several X gates on either the data or ancilla qubits, which dynamically decouple the qubits in the experiment <cit.>. Naturally, these gates do not improve the logical performance for the simulations using approximate Pauli-error models that we consider here. In the final QEC round, the data qubits rotated during the state preparation are rotated back and measured in the Z-basis together with the ancilla qubits, illustrated in <ref> p-r. The data qubit measurement outcomes are then used to calculate the value of the X_L or Z_L logical observable as well as to infer a final set of X- or Z-type stabilizer measurement outcomes. §.§ Decoder training and evaluation Here we provide additional details about how we train the NN decoder and the hyper-parameters we use. We use the Adam optimizer typically with a learning rate of 10^-3 or 5 × 10^-4 for training. In addition, we apply dropout after the hidden layer of the feed-forward network of each head and, in some cases, after the second LSTM layer with a dropout rate of either 20% or 5% to avoid over-fitting and assist with the generalization of the network. We use a batch size of 256 or 64, which we found to lead to a smoother minimization of the loss. After each training epoch, we evaluate the loss of the network on a separate dataset that considers the same number of QEC rounds and prepared states as the training dataset but samples fewer shots for each experiment. After each epoch, we save the networks' weights if a lower loss has been achieved. Furthermore, we use early stopping to end the training if the loss has not decreased over the last 20 epochs to reduce the time it takes to train each model. We have observed that not using early-stopping and leaving the training to continue does not typically lead the network to reach a lower loss eventually. 
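The training procedure just described maps onto standard Keras callbacks. The sketch below is not the authors' actual training script: it reuses `build_decoder` from the earlier architecture sketch, replaces the real defect datasets with random stand-in arrays so that it runs end to end, and treats the batch size, epoch cap, and learning-rate schedule as placeholder choices.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data (random defects and labels); in practice these arrays
# come from the sampled memory experiments.
rng = np.random.default_rng(0)
def synth(n, rounds=25, n_anc=8, n_final=4):
    x = {"defects": rng.integers(0, 2, (n, rounds, n_anc)).astype("float32"),
         "final_defects": rng.integers(0, 2, (n, n_final)).astype("float32")}
    y = rng.integers(0, 2, (n, 1)).astype("float32")
    return x, {"p_main": y, "p_aux": y}

train_x, train_y = synth(2048)
val_x, val_y = synth(512)

model = build_decoder()   # the two-headed network from the earlier sketch

callbacks = [
    # Save the weights whenever the validation loss improves.
    tf.keras.callbacks.ModelCheckpoint("best.weights.h5", monitor="val_loss",
                                       save_best_only=True,
                                       save_weights_only=True),
    # End training if the validation loss has not decreased for 20 epochs.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20),
]

model.fit(train_x, train_y, validation_data=(val_x, val_y),
          batch_size=256, epochs=500, callbacks=callbacks)

# Optionally restore the best weights, lower the learning rate, and train again.
model.load_weights("best.weights.h5")
model.optimizer.learning_rate.assign(5e-4)
model.fit(train_x, train_y, validation_data=(val_x, val_y),
          batch_size=256, epochs=500, callbacks=callbacks)
```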
For some datasets, we lower the learning rate after the initial training has stopped early and train the network once more to achieve better performance. The hyper-parameters we have used for training each network and the parameters of the training datasets used are presented in <ref>. The NN architecture we employ in this work uses two stacked LSTM layers to process the recurrent input <cit.>. We observe poor logical performance for a d=3 surface code when using only a single LSTM layer. On the other hand, we see no significant improvement in the logical error rate when using four layers instead, motivating the choice to use only two. This network architecture also performs well when decoding d=5 and d=7 surface code experiments. However, we expect that a deeper recurrent network might improve the logical error rates when decoding larger-distance codes or when training on and decoding experimental data. In practice, we have also observed that training the NN decoder for larger distances is more challenging, especially if the physical error rates are small. Training the neural network on a dataset with a higher physical error rate (in addition to data using the same error rate as the evaluation dataset) can also improve the performance of the decoder, as we also discussed in <ref>. The training of our neural networks was performed on an NVIDIA Tesla V100S GPU of the DelftBlue supercomputer <cit.>. Once trained, the decoder takes approximately 0.7 seconds per QEC round for a d=3 surface code (corresponding to an internal state size of N_L = 64) using a batch size of 50000 shots on an Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz. For a d=5 surface code (N_L = 96), it takes about 0.8 seconds per round, while for a d=7 surface code (N_L = 128), it takes about 1.1 seconds per round, using the same batch size of 50000 shots. We note that using smaller batch sizes leads to a higher overall runtime due to the reduced parallelism when the network processes the inputs. Therefore, larger batch sizes are preferable as long as they fit into memory. Each runtime was extracted by decoding simulated datasets running r=10, 30, …, 290 rounds of QEC and averaging the runtime per QEC round over all the datasets.
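To make the training setup described above more concrete, the following is a minimal sketch of a two-layer LSTM decoder with multiple feed-forward heads, trained with Adam, dropout, checkpointing, and early stopping, written with TensorFlow/Keras. The input widths, the exact wiring of the two heads, the loss weighting, and the file name are illustrative assumptions rather than the configuration used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

N_L = 64       # internal LSTM state size, as quoted for d=3 in the text
N_ANC = 24     # number of per-round defect bits (assumed input width)
N_FINAL = 12   # number of final-round defect bits (assumed input width)

# Recurrent trunk: two stacked LSTM layers processing the per-round defects.
rec_in = layers.Input(shape=(None, N_ANC), name="defects")
final_in = layers.Input(shape=(N_FINAL,), name="final_defects")
x = layers.LSTM(N_L, return_sequences=True)(rec_in)
x = layers.LSTM(N_L, return_sequences=False)(x)
x = layers.Dropout(0.20)(x)  # dropout after the second LSTM layer

def head(inputs, name):
    # Feed-forward head with one hidden layer followed by dropout.
    h = layers.Dense(N_L, activation="relu")(inputs)
    h = layers.Dropout(0.20)(h)
    return layers.Dense(1, activation="sigmoid", name=name)(h)

# Assumed head wiring: the main head also sees the final defects.
p_main = head(layers.concatenate([x, final_in]), "main")
p_aux = head(x, "aux")

model = models.Model([rec_in, final_in], [p_main, p_aux])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
    loss="binary_crossentropy",
    loss_weights=[1.0, 0.5],  # relative weight of the auxiliary head (assumed)
)

# Save the best weights and stop after 20 epochs without improvement.
cbs = [
    callbacks.ModelCheckpoint("best_weights.h5", save_best_only=True),
    callbacks.EarlyStopping(patience=20, restore_best_weights=True),
]
# model.fit([defects, final_defects], [labels, labels], batch_size=64,
#           epochs=500, validation_data=..., callbacks=cbs)
```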
[Shor97] P. W. Shor, SIAM Journal on Computing 26, 1484 (1997).
[Lloyd96] S. Lloyd, Science 273, 1073 (1996).
[Reiher17] M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Proceedings of the National Academy of Sciences 114, 7555 (2017).
[Gidney19] C. Gidney and M. Ekerå, arXiv:1905.09749 (2019).
[Barends14] R. Barends et al., Nature 508, 500 (2014).
[Rol16] M. A. Rol et al., Phys. Rev. Applied 7, 041001 (2017).
[Barends19] R. Barends et al., Phys. Rev. Lett. 123, 210501 (2019).
[Rol19] M. A. Rol et al., Phys. Rev. Lett. 123, 120502 (2019).
[Negirneac20] V. Negîrneac et al., Phys. Rev. Lett. 126, 220502 (2021).
[Foxen20] B. Foxen et al., arXiv:2001.08343 (2020).
[jurcevic21] P. Jurcevic et al., Quantum Sci. Technol. 6, 025020 (2021).
[Harty14] T. P. Harty et al., Phys. Rev. Lett. 113, 220501 (2014).
[Hong19] S. S. Hong et al., Phys. Rev. A 101, 012302 (2020).
[Huang19] W. Huang et al., Nature 569, 532 (2019).
[Shor95] P. W. Shor, Phys. Rev. A 52, R2493 (1995).
[Laflamme98] R. Laflamme, E. Knill, W. H. Zurek, P. Catasti, and S. V. S. Mariappan, Phil. Trans. R. Soc. Lond. A 356, 1941 (1998).
[Gottesman14] D. Gottesman, QIC 14, 1338 (2014).
[GottesmanPhD] D. Gottesman, Stabilizer Codes and Quantum Error Correction, PhD dissertation, Caltech (1997).
[Kitaev03] A. Y. Kitaev, Annals of Physics 303, 2 (2003).
[Dennis02] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Journal of Mathematical Physics 43 (2002).
[Fowler12] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, Phys. Rev. A 86, 032324 (2012).
[Raussendorf07] R. Raussendorf and J. Harrington, Phys. Rev. Lett. 98, 190504 (2007).
[Corcoles19] A. D. Corcoles et al., arXiv:1910.02894 (2019).
[Arute19] F. Arute et al., Nature 574, 505 (2019).
[Acharya23] R. Acharya et al. (Google Quantum AI), Nature 614, 676 (2023).
[Sundaresan23] N. Sundaresan et al., Nature Communications 14, 2852 (2023).
[Heinsoo18] J. Heinsoo et al., Phys. Rev. Applied 10, 034040 (2018).
[Marques23] J. F. Marques et al., arXiv:2302.09876 (2023).
[McEwen2021] M. McEwen et al., Nature Communications 12, 1761 (2021).
[Miao22] K. C. Miao et al., Overcoming leakage in scalable quantum error correction, arXiv:2211.04728 (2022).
[Jeffrey14] E. Jeffrey et al., Phys. Rev. Lett. 112, 190504 (2014).
[Bultink16] C. C. Bultink et al., Phys. Rev. Applied 6, 034008 (2016).
[Kelly15] J. Kelly et al., Nature 519, 66 (2015).
[Egan20] L. Egan et al., arXiv:2009.11482 (2020).
[Abobeih22] M. H. Abobeih et al., Nature 606, 884 (2022).
[RyanAnderson21] C. Ryan-Anderson et al., Phys. Rev. X 11, 041058 (2021).
[Marques22] J. F. Marques et al., Nat. Phys. 18, 80 (2022).
[Chen21] Z. Chen et al. (Google Quantum AI), Nature 595, 383 (2021).
[Andersen20] C. K. Andersen et al., Nat. Phys. 16, 875 (2020).
[Krinner22] S. Krinner et al., Nature 605, 669 (2022).
[Zhao22] Y. Zhao et al., Phys. Rev. Lett. 129, 030501 (2022).
[Ofek16] N. Ofek et al., Nature 536, 441 (2016).
[Grimm20] A. Grimm et al., Nature 584, 205 (2020).
[CampagneIbarcq20] P. Campagne-Ibarcq et al., Nature 584, 368 (2020).
[Sivak23] V. V. Sivak et al., Nature 616, 50 (2023).
[Fowler12d] A. G. Fowler, A. C. Whiteside, and L. C. L. Hollenberg, Phys. Rev. Lett. 108, 180501 (2012).
[Fowler15] A. G. Fowler, Quantum Info. Comput. 15, 145 (2015).
[Higgott23] O. Higgott and C. Gidney, Sparse blossom: correcting a million errors per core second with minimum-weight matching, arXiv:2303.15933 (2023).
[Wu2023] Y. Wu and L. Zhong, Fusion blossom: fast MWPM decoders for QEC, arXiv:2305.08307 (2023).
[Roffe20] J. Roffe, D. R. White, S. Burton, and E. Campbell, Phys. Rev. Research 2, 043423 (2020).
[Criger18] B. Criger and I. Ashraf, Quantum 2, 102 (2018).
[Higgott22] O. Higgott, T. C. Bohdanowicz, A. Kubica, S. T. Flammia, and E. T. Campbell, Fragile boundaries of tailored surface codes and improved decoding of circuit-level noise, arXiv:2203.04948 (2022).
[Caune23] L. Caune, J. Camps, B. Reid, and E. Campbell, Belief propagation as a partial decoder, arXiv:2306.17142 (2023).
[Bravyi14] S. Bravyi, M. Suchara, and A. Vargo, Phys. Rev. A 90, 032326 (2014).
[Chubb21] C. T. Chubb and S. T. Flammia, Annales de l'Institut Henri Poincaré D 8, 269 (2021).
[Spitz17] S. Spitz, B. M. Tarasinski, C. Beenakker, and T. O'Brien, Advanced Quantum Technologies 1, 1800012 (2017).
[Chen22] E. H. Chen et al., Phys. Rev. Lett. 128, 110504 (2022).
[Torlai17] G. Torlai and R. G. Melko, Phys. Rev. Lett. 119, 030501 (2017).
[Krastanov17] S. Krastanov and L. Jiang, Scientific Reports 7, 11003 (2017).
[Varsamopoulos18] S. Varsamopoulos, B. Criger, and K. Bertels, Quantum Science and Technology 3, 015004 (2017).
[Baireuther18] P. Baireuther, T. E. O'Brien, B. Tarasinski, and C. W. J. Beenakker, Quantum 2, 48 (2018).
[Chamberland18b] C. Chamberland and P. Ronagh, Quantum Science and Technology 3, 044002 (2018).
[Baireuther19] P. Baireuther, M. D. Caio, B. Criger, C. W. J. Beenakker, and T. E. O'Brien, New Journal of Physics 21, 013003 (2019).
[Andreasson19] P. Andreasson, J. Johansson, S. Liljestrand, and M. Granath, Quantum 3, 183 (2019).
[Ni20] X. Ni, Quantum 4, 310 (2020).
[Wagner20] T. Wagner, H. Kampermann, and D. Bruß, Phys. Rev. A 102, 042411 (2020).
[Sheth20] M. Sheth, S. Z. Jafarzadeh, and V. Gheorghiu, Phys. Rev. A 101, 032338 (2020).
[Varsamopoulos20a] S. Varsamopoulos, K. Bertels, and C. Almudever, IEEE Transactions on Computers 69, 300 (2020).
[Varsamopoulos20b] S. Varsamopoulos, K. Bertels, and C. G. Almudever, Quantum Machine Intelligence 2, 3 (2020).
[Fitzek20] D. Fitzek, M. Eliasson, A. F. Kockum, and M. Granath, Phys. Rev. Research 2, 023230 (2020).
[Sweke21] R. Sweke, M. S. Kesselring, E. P. L. van Nieuwenburg, and J. Eisert, Machine Learning: Science and Technology 2, 025005 (2020).
[Meinerz22] K. Meinerz, C.-Y. Park, and S. Trebst, Phys. Rev. Lett. 128, 080505 (2022).
[Bausch21] J. Bausch, S. Subramanian, and S. Piddock, Quantum Machine Intelligence 3, 16 (2021).
[Ueno22] Y. Ueno, M. Kondo, M. Tanaka, Y. Suzuki, and Y. Tabuchi, NEO-QEC: neural network enhanced online superconducting decoder for surface codes, arXiv:2208.05758 (2022).
[Chamberland22] C. Chamberland, L. Goncalves, P. Sivarajah, E. Peterson, and S. Grimberg, Techniques for combining fast local decoders with global decoders under circuit-level noise, arXiv:2208.01178 (2022).
[Overwater22] R. W. J. Overwater, M. Babaie, and F. Sebastiano, IEEE Transactions on Quantum Engineering 3, 1 (2022).
[Gicev23] S. Gicev, L. C. L. Hollenberg, and M. Usman, A scalable and fast artificial neural network syndrome decoder for surface codes, arXiv:2110.05854 (2023).
[Zhang23] M. Zhang et al., A scalable, fast and programmable neural decoder for fault-tolerant quantum computation using surface codes, arXiv:2305.15767 (2023).
[Egorov23] E. Egorov, R. Bondesan, and M. Welling, The END: an equivariant neural decoder for quantum error correction, arXiv:2304.07362 (2023).
[Gidney22] C. Gidney, Quantum 6, 786 (2022).
[Krantz19] P. Krantz et al., Applied Physics Reviews 6, 021318 (2019).
[Blais21] A. Blais, A. L. Grimsmo, S. M. Girvin, and A. Wallraff, Rev. Mod. Phys. 93, 025005 (2021).
[Pattison21] C. A. Pattison, M. E. Beverland, M. P. da Silva, and N. Delfosse, Improved quantum error correction using soft information, arXiv:2107.13589 (2021).
[Tomita14] Y. Tomita and K. M. Svore, Phys. Rev. A 90, 062320 (2014).
[Gidney21] C. Gidney, Quantum 5, 497 (2021).
[VarbanovSurfaceSim23] B. M. Varbanov and M. Serra-Peralta, surface-sim, https://github.com/BorisVarbanov/surface-sim (2023).
[TensorFlowLibrary] M. Abadi et al., TensorFlow: large-scale machine learning on heterogeneous distributed systems, http://download.tensorflow.org/paper/whitepaper2015.pdf (2015).
[VarbanovQrennd23] B. M. Varbanov and M. Serra-Peralta, qrennd, https://github.com/BorisVarbanov/qrennd (2023).
[Hochreiter97] S. Hochreiter and J. Schmidhuber, Neural Computation 9, 1735 (1997).
[LeCun2015] Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015).
[Fowler13b] A. G. Fowler, arXiv:1310.0863 (2013).
[RoffeLDPCPythonTools22] J. Roffe, LDPC: Python tools for low density parity check codes, https://pypi.org/project/ldpc/ (2022).
[Sank16] D. Sank et al., Phys. Rev. Lett. 117, 190503 (2016).
[Khezri22] M. Khezri et al., Measurement-induced state transitions in a superconducting qubit: within the rotating wave approximation, arXiv:2212.05097 (2022).
[Lange23] M. Lange et al., Data-driven decoding of quantum error correcting codes using graph neural networks, arXiv:2307.01241 (2023).
[VarbanovFigureData23] B. M. Varbanov, M. Serra-Peralta, D. Byfield, and B. M. Terhal, Data supporting "Neural network decoder for near-term surface-code experiments", Zenodo, https://doi.org/10.5281/zenodo.8108286 (2023).
[DHPC2022] Delft High Performance Computing Centre (DHPC), DelftBlue Supercomputer (Phase 1), https://www.tudelft.nl/dhpc/ark:/44463/DelftBluePhase1 (2022).
http://arxiv.org/abs/2307.02414v1
20230703115755
A Multi-Agent Deep Reinforcement Learning Approach for RAN Resource Allocation in O-RAN
[ "Farhad Rezazadeh", "Lanfranco Zanzi", "Francesco Devoti", "Sergio Barrachina-Munoz", "Engin Zeydan", "Xavier Costa-Pérez", "Josep Mangues-Bafalluy" ]
cs.NI
[ "cs.NI" ]
http://arxiv.org/abs/2307.00551v1
20230702120736
Mesoscopic Impurities in Generalized Hydrodynamics
[ "Friedrich Hübner" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Mesoscopic Impurities in Generalized Hydrodynamics

Friedrich Hübner
Department of Mathematics, King's College London, Strand, WC2R 2LS London, UK
friedrich.huebner@kcl.ac.uk
August 1, 2023

We study impurities in integrable models from the viewpoint of generalized hydrodynamics (GHD). An impurity can be thought of as a boundary condition for the GHD equation, relating the states on the left and right sides. We find that in interacting models it is not possible to disentangle incoming and outgoing states, which means that it is not possible to think of scattering as a mapping which maps the incoming state to the outgoing state. We then introduce a novel class of impurities, dubbed mesoscopic impurities, whose spatial size is mesoscopic (i.e. their size L_micro ≪ L_imp ≪ L is much larger than the microscopic length scale L_micro, but much smaller than the system size L). Due to their large size it is possible to describe mesoscopic impurities via GHD. This simplification allows one to study these impurities both analytically and numerically. These impurities show interesting non-perturbative scattering behavior, for instance non-uniqueness of solutions and a non-analytic dependence on the impurity strength. In models with one quasi-particle species and a scattering phase shift that depends on the difference of momenta only, we find that one can describe the scattering using an effective Hamiltonian. This Hamiltonian is dressed due to the interaction between particles and satisfies a self-consistency fixed point equation. Using the example of the hard rods model, we demonstrate how this fixed point equation can be used to find explicit solutions to the scattering problem.

§ INTRODUCTION

Integrable models are an important set of fine-tuned (mostly one-dimensional) models in theoretical physics which can be solved exactly. This allows for the analytical study of the effect of interactions in these quantum or classical many-body systems in much greater detail than would be possible in generic models. At first, it may seem that these fine-tuned models are not necessarily good models for actual physical systems. In particular, they show the unusual feature that they possess infinitely many (local) conserved quantities, while a generic system typically has only three: particle number, momentum and energy. For instance, this manifests in the equilibrium the system approaches after a long time: it is characterized by a generalized Gibbs ensemble (GGE) <cit.>, which takes into account all conserved quantities instead of the usual Gibbs ensemble. This failure of thermalization has been observed in experiments, such as the famous quantum Newton's cradle experiment <cit.> and others <cit.>. However, in one dimension, non-integrable systems also often show effects linked to integrability (like long equilibration times; the most famous example here is the FPUT system <cit.>). The reason is that these systems are often close to an integrable model and can therefore be treated as integrable for short times (this also explains why experiments – which are never precisely fine-tuned – can still observe GGEs). Only after long times will integrability-breaking effects eventually be observed. Therefore, understanding the physics of integrable models provides a first step towards understanding the physics of general many-body systems in one dimension.
This idea has sparked huge interest in integrable systems, and in particular there has been plenty of work on how to perturb these systems with integrability-breaking perturbations, see for instance <cit.>. In the last decade, progress on understanding integrable models has been further fueled by the establishment of a powerful theoretical toolbox: generalized hydrodynamics (GHD) <cit.>. Generalized hydrodynamics is a framework to study the large-scale dynamics of integrable models (for a pedagogical introduction, see <cit.>). This provides access to the out-of-equilibrium dynamics of these models, which is particularly useful to compare with experiments <cit.>. GHD is a coarse-grained theory which describes the state of a system locally only by its conserved quantities; all other degrees of freedom are averaged out. This provides an ideal starting ground for studying integrability breaking, since integrability breaking manifests through the breaking of conservation laws. In order to account for an integrability-breaking effect in GHD, it has to be small in some sense. Small could, for instance, mean a) that the coupling to the integrability-breaking terms is weak. A perturbative treatment typically gives rise to Boltzmann-type equations and has been used to describe various experimentally relevant effects, like the effect of particle losses <cit.>, the effect of being only a quasi-1D system <cit.> and an integrability-breaking background <cit.>. In GHD it is also possible to go beyond weak perturbations and to consider certain strong perturbations, which b) vary slowly in space and time. This includes the experimentally important case of an external trapping potential <cit.> or also slowly varying interactions <cit.>. Another option to perturb an integrable model is c) to introduce impurities: impurities can be potentially strong perturbations, but they will only affect the system in a small localized region. Therefore, the system can still be described by GHD away from the impurity, but at the position of the impurity there will be a non-trivial boundary condition relating the state to the left and the right of the impurity. Unlike other forms of integrability breaking, it has been observed that impurities do not lead to thermalization at late times <cit.>. Impurities have long been studied in the context of integrable models, both analytically <cit.>, starting from the famous Kondo impurity (see for instance the review <cit.>), and experimentally <cit.>. In this paper we study the out-of-equilibrium dynamics of integrable models in the presence of an impurity from a generalized hydrodynamics perspective. The situation we have in mind is quite general: given the initial state ρ(t=0,x,p) at time t=0, we want to predict the state ρ(t,x,p) at some later time t, taking the impurity into account. First, we formalize and unify ideas already mentioned in the literature <cit.> to establish a coherent picture for the incorporation of impurities into GHD: the boundary conditions correspond to non-equilibrium stationary states (NESS) of the impurity. In our discussion we will find that the classification of impurities can be quite peculiar: due to the interactions, the reflected particles affect the effective velocities of the incoming particles, implying that it is not possible to think of the outgoing state as a function of the incoming state (unless the model is free)[Note that if one applied ordinary scattering theory to the impurity, this problem would not occur.
There one can define the S-matrix which maps incoming states to outgoing states. The difference is that ordinary scattering theory corresponds to an infinite time limit t ≫ N, s.t. eventually all particles are non-interacting. In GHD, however, we look at times t ∼ N, implying that particles are still interacting.]. Because of that it is also not clear whether a boundary condition will always exist or whether it is unique (in fact we will give an example where it is not unique). Another problem for actual computations is that even if the NESS were known, one would still need to map the states on the left and right onto a GGE, which requires the computation of expectation values of charge densities <cit.>. Unless the model is non-interacting, this is an incredibly cumbersome task. Note that the problems discussed here are not problems connected with specific impurities, but rather problems connected to the interacting nature of the integrable model.

In this paper we would like to advance the understanding of the peculiarities of the boundary conditions for impurities in interacting models. We already discussed that the effects mentioned above are not present in free models, which were studied extensively in the non-equilibrium context <cit.>. We also do not expect them to appear in the perturbative treatment applicable to weak impurities <cit.>, since effects like non-uniqueness are often non-perturbative in nature (and indeed we will observe this in an explicit example). They should appear for integrable impurities, but these are quite fine-tuned and analytical computations on them are involved (also note the recent work on how to incorporate them into GHD <cit.>). Instead, for our purposes, we are seeking a broad class of impurities which a) exists for any integrable model, b) includes strong impurities, c) can be treated analytically and numerically and d) can easily be connected to the GHD language.

In the second part of this paper we introduce a class of impurities which satisfies all of these requirements: mesoscopic impurities. Mesoscopic impurities are impurities whose spatial scale is mesoscopic, i.e. they are very large (and slowly varying) impurities, but still much smaller than the system size. We choose them to be linear combinations of charge densities (note that this is a major difference from the impurity studied in <cit.>, which is also mesoscopic in size). Due to their large spatial size one can describe such impurities using the framework of generalized hydrodynamics itself. This drastic simplification will allow us to study them both analytically and numerically in an efficient manner. They are a universal description for large-scale impurities built from charge densities and can be used to model non-perturbative impurities in interacting integrable models. Furthermore, the non-equilibrium stationary states are directly available in the language of GHD. Besides this we also expect that it should be possible to implement them in actual experiments, for instance in cold atom setups. Moreover, following basic ideas from generalized hydrodynamics, one can view the regime of mesoscopic impurities as the zeroth-order term of an expansion in 1/L_imp, where L_imp is the spatial size of the impurity. In this sense our results can also be a starting point to systematically gain insights into smaller impurities by taking diffusive (or further higher-order) corrections to GHD into account.
To keep the discussion simple we restrict ourselves to models in which φ(p,q) = φ(p-q) is a function of the difference only. This includes important classical and quantum mechanical systems, like hard rods and the Lieb-Liniger model. In these models we find an intuitive description of scattering at mesoscopic impurities: One can view the interacting system as non-interacting particles evolving in an effective Hamiltonian. The effective Hamiltonian is the single-particle bare Hamiltonian plus a correction coming from the interaction of the particles. We show that this effective Hamiltonian satisfies a functional fixed point equation, which provides a convenient starting point for further analysis and explicit solutions. We use our analysis to establish the following general properties of scattering at mesoscopic impurities:

* Scattering at mesoscopic impurities is invariant under local spatial rescalings of the impurity.
* Scattering identically vanishes if the strength of the impurity is below a certain cutoff.
* The trajectories of particles at the impurity are deterministic. This means that all particles with a given momentum p are either transmitted or reflected.
* The solution to the scattering problem (i.e. the boundary condition) is not necessarily unique.

We further demonstrate how the fixed point equation can be used to obtain analytical solutions to the scattering problem using the example of hard rods scattering at a potential barrier, with particles approaching from one side only. In the case where the rod length d>0 is positive the solution to the impurity problem is unique. However, this is not always the case, as we demonstrate for hard rods with negative length d<0 (negative length corresponds to a time-delay during scattering). We give an explicit example with two possible stable solutions.

We close the paper with a demonstration of our initial assumption, namely that replacing the impurity by a boundary condition of the GHD equation indeed gives the correct large-scale evolution: For a simple impurity we simulate hard rods scattering at an impurity starting from some large-scale initial state at time t=0. At a later time t=T we compare the result to the result obtained from simulating the GHD with impurity boundary conditions and show that they coincide. For the simulation of the GHD equation and in particular the boundary condition we use an efficient algorithm outlined in <ref>.

The paper is structured as follows: In Section <ref> we discuss the relation between the impurity and the boundary condition in general. In Section <ref> we then introduce mesoscopic impurities and derive the effective Hamiltonian and its fixed point equation in Section <ref>. In Section <ref> we study the explicit example of hard rods scattering at a potential. In <ref> we also describe an efficient numerical algorithm to solve the scattering at mesoscopic impurities.

§ GENERAL CONSIDERATIONS

Given a system which consists of a (translation invariant) integrable model and an impurity we would like to understand the behavior of the system on large scales. First, let us specify what we mean by an impurity: An impurity is a (possibly strong) localized perturbation of the integrable model whose spatial size L_imp is much smaller than the macroscopic scale L ≫ L_imp. This is contrary to the case of an external potential, which is a perturbation whose spatial size L_ext is comparable to the macroscopic scale L ∼ L_ext.
In this paper we will restrict ourselves to the case where the system far on the left and far on the right is equivalent. However, the same discussion and the same tools can be extended to the case where the system on both sides is different. A simple example of that is the case of a potential V(x) which decays to different values V(∞) ≠ V(-∞) as x → ±∞. A more complicated example would be systems with different interactions on both sides of the impurity. In such a situation the impurity acts as the connection between two different systems. Note that our analysis also applies to boundaries (seen as a limiting case where one of the systems is absent).

In this paper we want to study the effect of an impurity in a non-equilibrium setting. The problem is as follows: Given an initial state of the system at time t=0, predict the state of the system at some later time. In interacting integrable models even the evolution without any impurity is complicated and in most cases cannot be computed explicitly. However, in many interesting cases and experiments (for instance in cold atom systems <cit.>) the system size L is much larger than the microscopic length scale of the particles, the system is observed on timescales long compared to the microscopic timescale, and the system contains many particles. In this case one can describe the large-scale out-of-equilibrium dynamics of integrable models by generalized hydrodynamics <cit.>. The (Euler scale) generalized hydrodynamics equation can be stated in two ways: In a conservation form, where it describes the quasi-particle density ρ(t,x,p),

∂_t ρ + ∂_x (v^eff ρ) = 0,

or in a transport form, where it describes the occupation function n(t,x,p) <cit.>:

∂_t n + v^eff ∂_x n = 0.

Both equations are linked via:

ρ = 1/2π 1^dr n,    v^eff = (∂_p E)^dr/1^dr,

where E(p) is the bare energy of a quasi-particle and the dressing operation is defined as the solution to:

f^dr(p) = f(p) + ∫ dq/2π φ(p,q) n(t,x,q) f^dr(q) = f(p) + (T n f^dr)(p).

Here we defined the operator T, whose kernel T(p,q) = 1/2π φ(p,q) is given by the scattering phase shift, which depends on the model. One can interpret 1/2π φ(p,q) as the effective displacement which occurs when two particles with momenta p and q interact with each other (we will assume that φ(p,q) is symmetric). When φ(p,q) is positive the interaction corresponds to a time-delay or a backwards displacement of the trajectory of the particle. When φ(p,q) is negative the interaction speeds up the particle and thus corresponds to a forward displacement of the trajectories. Note that the above equations are written for an integrable model with a single particle species. It is also possible to formulate them for multiple particle species <cit.>.

So far we discussed the system without an impurity. By definition the impurity is much smaller than the macroscopic scale and thus from the GHD viewpoint the impurity will shrink to a point. Throughout this paper we will place the impurity at x = 0. For x ≠ 0 the system is still described by the GHD equation, but at x = 0 the impurity will lead to some boundary condition, which relates n_L(p) = n(0^-,p) and n_R(p) = n(0^+,p). We can now write the GHD equation (e.g. in transport form) including the impurity as:

∂_t n + v^eff ∂_x n = 0    for x ≠ 0,
(n_L, n_R) ∈ ℳ    at x = 0,

where ℳ denotes the set of all physically allowed relations between n_L and n_R. Note that we choose to express the boundary conditions in terms of the occupation function n and not ρ (see discussion at the end of this section). How can one characterize the set ℳ?
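As an aside, the dressing operation defined above is a linear integral equation in the momentum variable and is straightforward to evaluate numerically by discretizing momentum and solving the resulting linear system. The following minimal sketch illustrates this; the Lieb-Liniger-type kernel, the coupling c and the Gaussian occupation function are arbitrary illustrative assumptions, not taken from the paper's numerics.

```python
import numpy as np

# Momentum grid
P = 2001
p = np.linspace(-10, 10, P)
dp = p[1] - p[0]

# Example kernel T(p,q) = (1/2pi) phi(p-q); here a Lieb-Liniger-like choice
# phi(p-q) = 2c / ((p-q)^2 + c^2) with coupling c (illustrative assumption).
c = 1.0
T = (1.0 / (2 * np.pi)) * 2 * c / ((p[:, None] - p[None, :])**2 + c**2)

# Example occupation function n(p) (illustrative assumption)
n = 0.4 * np.exp(-p**2)

def dress(f, T, n, dp):
    """Solve f^dr = f + T n f^dr, i.e. (1 - T diag(n) dp) f^dr = f."""
    A = np.eye(len(f)) - T * n[None, :] * dp
    return np.linalg.solve(A, f)

one_dr = dress(np.ones_like(p), T, n, dp)      # 1^dr
dpE_dr = dress(p, T, n, dp)                    # (d_p E)^dr for E_0(p) = p^2/2
rho = one_dr * n / (2 * np.pi)                 # quasi-particle density
v_eff = dpE_dr / one_dr                        # effective velocity

# Symmetry check used later for energy conservation: int dp f^dr n g = int dp f n g^dr
f, g = np.exp(-p**2 / 4), np.cos(p)
lhs = np.trapz(dress(f, T, n, dp) * n * g, p)
rhs = np.trapz(f * n * dress(g, T, n, dp), p)
print(abs(lhs - rhs))                          # should be numerically small
```

The same routine also returns ρ and v^eff via the relations above; the final lines verify the symmetry property of the dressing that is used below for energy conservation.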
To characterize ℳ we need to study the scattering problem at the impurity. The idea is that since the impurity is small, L_imp ≪ L, the timescales for scattering at the impurity are also much smaller than the macroscopic timescales. Thus, at the scale of the impurity we study a long time limit where we can send t_imp → ∞. Also, since n_L and n_R change on the macroscopic timescale (i.e. slowly), we can treat them as constant during scattering. This is the usual assumption on scattering at impurities in general and is, for instance, at the basis of scattering theory in quantum mechanics. In the limit t_imp → ∞ the solution converges to a stationary solution of the equations of motion. A prominent example of this is the Lippmann-Schwinger equation in quantum mechanics, where the scattering state is an eigenstate of the Hamiltonian (and thus stationary in time) <cit.>. We will also see this feature when we study mesoscopic impurities. The scattering problem of the impurity therefore consists of finding a solution to the stationary equations of motion of the impurity system, a so-called non-equilibrium stationary state (NESS). This is well known, also in the context of GHD, and these NESS have been constructed and studied in special cases <cit.>.

Given such a NESS, one can construct the boundary conditions in the following way: Evaluated far away from the impurity, x_imp → ±∞, a NESS becomes a solution of the unperturbed system. From the GHD viewpoint this state should correspond to a GGE, which is characterized by its expectation values of the charge densities. Therefore, one has to compute the expectation values of charge densities far away from the impurity. Given those expectation values one can in principle construct a GGE, which corresponds to a quasi-particle density ρ(p) or alternatively to an occupation function n(p) in GHD. The n(p) obtained by this procedure are the boundary conditions n_L and n_R. To summarize: we look for a solution of the stationary impurity system, a NESS, whose asymptotics for x_imp → ±∞ are described by n_{R/L}. The set of all those possible states then determines ℳ.

Usually, one does not describe the boundary condition just as a set ℳ, but one views the outgoing state as a function of the incoming state, n_out[n_in]. For instance, in quantum mechanics one defines the S-matrix which maps an incoming state to the corresponding outgoing state. This is a natural and physically intuitive description of the scattering at the impurity. Unfortunately, viewing the outgoing state as a function of the incoming state is not possible in interacting models. The fundamental reason is that the effective velocity of particles gets affected by the presence of all other particles, incoming as well as outgoing. Therefore, given for instance the solution n_L we can determine which particles are incoming and which are outgoing on the left side. But if we do not know the solution we do not know which particles are incoming a priori. In addition to that, even in cases where it is clear which particles are incoming, it is not clear whether the solution to the scattering problem is unique. In fact, we will demonstrate the non-uniqueness in an explicit example, see Section <ref>. To avoid potential confusion, this non-uniqueness refers to the non-uniqueness of boundary conditions. If one applies ordinary scattering theory to any impurity, it is always possible to define an S-matrix, which maps incoming states to outgoing states in a unique way.
However, ordinary scattering theory is based on a long time limit and describes the incoming and outgoing states in terms of asymptotic states. These asymptotic states form only at very long times |t| ≫ N, where the particles have separated so much that they do not interact anymore. In GHD, on the other hand, we consider much shorter times t ∼ N: The states to the right and to the left of the impurity are finite-density states, where particles are still interacting. In fact, this also shows that the S-matrix is not the correct object to describe scattering on hydrodynamic timescales of interacting integrable models.

Due to the peculiarities outlined in the last paragraph, in general, we need to describe the boundary conditions by a set ℳ. However, there are some restrictions on that set based on basic symmetries and conservation laws. For instance, if the impurity is PT symmetric, then if (n_L(p), n_R(p)) ∈ ℳ, also (n_R(-p), n_L(-p)) ∈ ℳ. Unless the impurity is integrable as well, the impurity will break some, if not all, conservation laws. For a typical impurity we can expect that it will conserve the total particle number and the total energy. In this case we have the restrictions

∫ dp v^eff_L(p) ρ_L(p) = ∫ dp v^eff_R(p) ρ_R(p)

and

∫ dp E(p) v^eff_L(p) ρ_L(p) = ∫ dp E(p) v^eff_R(p) ρ_R(p).

However, in principle impurities violating particle number or energy conservation are also possible.

We finish this general discussion with another subtlety which only appears in interacting models: In principle one can express the set ℳ either in terms of ρ or in terms of n. Both are possible; however, we find that n is better suited for the following reason: Consider the unperturbed model and take as initial state two bumps: a bump on the left side with positive velocity and a bump on the right side with negative velocity. When the system evolves the two bumps will collide. During the collision n(p) will be transported along some complicated trajectory of the quasi-particle, but the value n(p) will not change. The quasi-particle density ρ = 1/2π 1^dr n, however, will change along the trajectory, since it is multiplied by 1^dr, which depends on the presence of other particles as well. The same effect appears during the scattering at the impurity: The reflected particles will alter the shape of the incoming ρ, but not of the incoming n. Therefore n is better suited to describe an incoming state.

§ MESOSCOPIC IMPURITIES

In general, microscopic impurities can become very complicated and break many conservation laws, which is why we cannot hope to find a general solution to their scattering problem. Usually one can only study very specific integrable impurities or try to study the problem perturbatively in the impurity strength. Even then, in order to connect with GHD, one has to compute the expectation values of all charge densities, which is incredibly cumbersome in interacting models <cit.>. Therefore, most computations so far have been restricted to free models. As we discussed, in free models one does not encounter the delicate problems of interacting models outlined in the last section. In order to understand them it would be instructive to have a class of impurities which can be treated analytically for interacting models and in particular whose solutions can be easily connected to the GHD language. In the remainder of the paper we will introduce such a class of impurities, which we call mesoscopic impurities. This class of impurities exists for any integrable model, therefore allowing one to model and study impurity effects quite generally.
The idea is to study impurities which are large in size (compared to microscopic), but smaller than the macroscopic length scale of the system, i.e. L_imp ∼ L^γ, where 1/2 < γ < 1 (we require γ > 1/2 in order to avoid diffusive effects). To be precise we consider impurities which are combinations of charge densities,

V = ∑_n ∫ dx V_n(x/L_imp) q_n(x),

where q_n(x) are the charge densities of the integrable model. We dub these impurities mesoscopic impurities, since their size is mesoscopic. On the length scale L_imp we can still divide space into small fluid cells in which we can assume that local relaxation to a GGE has already happened. Since this assumption is the basic assumption of GHD, this in fact means we can fully describe the impurity using the GHD framework alone. This is a great simplification. Of course the results only hold for very large impurities. Still, they provide a first insight into general features of impurities in GHD and can also be seen as a first approximation to an actual impurity. In order to study smaller impurities the next step would be to also take diffusive corrections into account, which would give access to effects of impurities on scales L_imp ∼ L^γ for 1/3 < γ < 1/2. By adding more and more orders of the gradient expansion one can thus access smaller and smaller impurities. However, it is not clear whether the gradient expansion will give rise to a convergent series, so potentially it will fail to describe microscopic impurities.

In order to find an equation describing the mesoscopic impurity consider the general GHD equation for a space-dependent bare energy function E(x,p) <cit.>:

∂_t (1^dr n) + ∂_x((∂_p E)^dr n) - ∂_p((∂_x E)^dr n) = 0,

which is written in conservation form, but using the occupation function n. In our case E(x,p) = E_0(p) + V(x/ℓ,p), which is connected to the V_n via V_n(x) = ∫ dp V(x,p) h_n(p), where h_n(p) is the one-particle eigenvalue of the n-th charge. Also ℓ = L_imp/L = L^{-(1-γ)} is the size of the impurity as seen from the macroscopic system. We now want to study the above equation for small ℓ → 0. For that let us change the position variable to x → xℓ. This yields:

ℓ ∂_t (1^dr n) + ∂_x((∂_p E)^dr n) - ∂_p((∂_x E)^dr n) = 0,

and by taking the limit ℓ → 0 we find the stationary GHD equation:

∂_x((∂_p E)^dr n) - ∂_p((∂_x E)^dr n) = 0.

Note that this is a differential equation which requires boundary conditions at x → ±∞. These are given by the incoming/outgoing data n_L(p) = n(-∞,p) and n_R(p) = n(∞,p). This data has to be given externally. Again we have the problem that in order to determine which particles are incoming we need to know the complete state n(±∞,p). This means we cannot specify the boundary conditions directly, but rather have to assume the form of n(±∞,p) at infinity, then solve the stationary GHD equation using the incoming data and then check whether the resulting outgoing data is consistent with the assumed form at infinity. We conclude that the impurity problem for a mesoscopic impurity reduces to the scattering problem of GHD at the impurity potential. Let us summarize some basic properties of solutions to the stationary GHD equation (<ref>).

§.§ Particle number and energy conservation

We can establish particle number conservation by integrating (<ref>) over p:

∂_x ∫ dp (∂_p E)^dr(x,p) n(x,p) = 2π ∂_x ∫ dp v^eff(x,p) ρ(x,p) = 0,

which means that the total current ∫ dp v^eff(x,p) ρ(x,p) is independent of x and thus the outgoing current is equal to the incoming current.
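In a numerical solution of the stationary equation this conservation law provides a convenient consistency check: the current j(x) = ∫ dp v^eff(x,p) ρ(x,p) computed in each spatial cell should be constant in x up to discretization errors. A minimal sketch, analogous to the dressing routine sketched earlier (the kernel matrix T and the candidate state n(x,p) are placeholders to be supplied by whatever solver is used, and E_0(p) = p²/2 is assumed):

```python
import numpy as np

def total_current(n_xp, p, T):
    """j(x) = int dp v^eff(x,p) rho(x,p) for each row n(x_i, .) of n_xp.

    n_xp: array of shape (num_x, num_p); T: kernel matrix T(p_i, p_j).
    Constancy of the returned array in x signals particle-number conservation.
    """
    dp = p[1] - p[0]
    j = np.empty(n_xp.shape[0])
    for i, n in enumerate(n_xp):
        A = np.eye(len(p)) - T * n[None, :] * dp
        one_dr = np.linalg.solve(A, np.ones_like(p))
        p_dr = np.linalg.solve(A, p)          # (d_p E)^dr for E_0 = p^2/2
        rho = one_dr * n / (2 * np.pi)
        v_eff = p_dr / one_dr
        j[i] = np.trapz(v_eff * rho, p)
    return j
```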
Energy conservation can be established similarly, by showing that the energy current is independent of x:

d/dx ∫ dp E(x,p) (∂_p E)^dr(x,p) n(x,p)
= ∫ dp ∂_x E(x,p) (∂_p E)^dr(x,p) n(x,p) + ∫ dp E(x,p) ∂_x[(∂_p E)^dr(x,p) n(x,p)]
= ∫ dp ∂_x E(x,p) (∂_p E)^dr(x,p) n(x,p) - ∫ dp ∂_p E(x,p) (∂_x E)^dr(x,p) n(x,p)
= 0,

where the last line vanishes due to the well-known symmetry property of the dressing ∫ dp f n g^dr = ∫ dp f^dr n g (see for instance <cit.>, equation (117)).

§.§ Rescaling invariance

There is also another immediate property following from the way we derived the stationary GHD equation: Consider a global rescaling of space V(x,p) → V(λx,p). The solution to the stationary GHD equation in the rescaled potential is simply given by n(λx,p). This can either be checked explicitly or can simply be observed by noting that if ℓ in derivation (<ref>) was a mesoscopic scale then ℓ/λ is a mesoscopic scale as well and thus the resulting GHD equation and its solution must be the same, only rescaled. This shows that the stationary GHD equation is invariant under global rescaling of space.

Interestingly, the same is true for local rescaling as well: Consider a smooth (or at least differentiable) map y: ℝ → ℝ s.t. y(x→±∞) = ±∞. Then the stationary GHD equation with potential V(y(x),p) is solved by n(y(x),p). This can be seen rather easily: We have

∂_p E(y(x),p) = ∂_p E_0(p) + ∂_p V(y(x),p),
∂_x E(y(x),p) = (∂_x V)(y(x),p) y'(x).

Since the dressing operation does not depend on x we simply find their dressed expressions evaluated at y(x):

(∂_p E)^dr = (∂_p E_0)^dr(y(x),p) + (∂_p V)^dr(y(x),p),
(∂_x E)^dr = (∂_x V)^dr(y(x),p) y'(x).

We also find that 1^dr = 1^dr(y(x),p). Now inserting everything into the stationary GHD equation we immediately find:

[∂_x((∂_p E)^dr n) - ∂_p((∂_x E)^dr n)]_{x → y(x)} y'(x) = 0.

Therefore n(y(x),p) is a solution to the locally rescaled GHD equation. Note that y(x) is not required to be monotonically increasing. It is allowed to go back in space.

This rescaling property can be used to simplify problems. For instance, consider an impurity potential which depends only on space V(x), is non-negative and has a single maximum V̅ = max_x V(x), which is attained at x_0. Then we can start from the triangular potential:

V_tri(x) = V̅(1-|x|) for |x| < 1, and 0 otherwise.

Now define y(x) as:

y(x) = (V̅ - V(x))/V̅ · sgn(x - x_0).

Then V_tri(y(x)) = V(x) and thus we can obtain the solutions to all these potentials simply by solving one potential V_tri. If V(x) has no local maxima other than the global one, this also produces the expected result. However, if V(x) has some local minima, y(x) is not monotonically increasing: We still find a solution to the stationary GHD equation, but the minima are filled with some quasi-particles, which are trapped in the minima (see Figure <ref>). We usually assume the impurity to be empty before scattering, but in principle one could also consider a filled impurity.
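The reduction to the triangular potential is easy to verify numerically. The following minimal sketch (the Gaussian barrier is an arbitrary example profile, not taken from the paper) constructs the map y(x) for a single-maximum barrier and checks that V_tri(y(x)) = V(x) on a grid:

```python
import numpy as np

Vbar, x0 = 0.4, 0.0
V = lambda x: Vbar * np.exp(-(x - x0)**2 / 2)      # example barrier (assumption)

def V_tri(y):
    """Triangular reference potential: Vbar*(1-|y|) for |y|<1, else 0."""
    return np.where(np.abs(y) < 1, Vbar * (1 - np.abs(y)), 0.0)

def y_map(x):
    """Local rescaling mapping the example barrier onto the triangular potential."""
    return np.sign(x - x0) * (Vbar - V(x)) / Vbar

x = np.linspace(-8, 8, 4001)
print(np.max(np.abs(V_tri(y_map(x)) - V(x))))      # ~ machine precision
```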
§.§ Impact of the sign of T(p,q)

In general, T(p,q) describes the effective displacement of the trajectories of two quasi-particles after they scatter. If it is negative it describes a sudden forward jump of the quasi-particles during scattering (like in the hard rods case T(p,q) = -d/2π). Positive T(p,q) (like in the Lieb-Liniger model) physically corresponds to a time-delay during scattering, but can be thought of as a backward displacement of the particles (like in the flea gas algorithm <cit.>). Imagine a particle moving `up-hill' in a potential V(x) and consider the case where it scatters with a reflected particle going `down-hill'. If T(p,q) is negative then the two particles will be displaced forward, meaning that the incoming particle gains potential energy ΔE = V'(x) 2π|T(p,q)|, while the reflected particle loses this energy. Since this effectively lowers the height of the potential barrier for the incoming particle, we conclude that negative T(p,q) tends to help incoming particles to pass a potential barrier. For T(p,q) > 0 the situation is reversed. Now both particles are displaced backwards, meaning the incoming particle loses potential energy, while the reflected one gains it. Therefore the potential barrier is effectively higher.

Note that this also has an impact on the uniqueness of solutions. For T(p,q) < 0 reflected particles will help other particles to pass the barrier, which leads to fewer reflected particles. Therefore, there is a competition between the number of reflected particles and their effect on the incoming particles, which should intuitively lead to a stable equilibrium. For T(p,q) > 0, however, both effects act in the same direction: If there are many reflected particles even more will get reflected. Alternatively, if only a few particles are reflected, more particles will be transmitted. This can lead to two solutions separated by an unstable equilibrium in between, and thus the solution might not be unique. We will see this phenomenon in the discussion of the hard rods (see Section <ref>).

§ STATIONARY GHD EQUATION FOR T(P-Q) AND EFFECTIVE HAMILTONIAN

We already established some general properties of the solutions in the last section. Now we will specifically study the case where the scattering phase shift depends only on the difference of momenta, T(p,q) = T(p-q), where great simplifications happen since the normal modes are known.

§.§ Stationary GHD equation in normal modes

The stationary GHD equation (<ref>) is hard to solve because it is written in a continuity equation form ∂_x (An) = ∂_p (Bn). It would be more convenient to write the equation in transport form: Ã ∂_x m = B̃ ∂_p m, where m(x,p) are called the normal modes. In general, it is not clear how to derive the normal modes, or whether they exist. However, in the case of a scattering phase shift which only depends on the difference of the momenta, T(p,q) = T(p-q), the normal modes exist and are simply given by n(x,p) <cit.>, similar to the GHD equation without external potential. This gives the stationary GHD equation in transport form:

(∂_p E)^dr ∂_x n - (∂_x E)^dr ∂_p n = 0.

For other models where T(p,q) ≠ T(p-q) this does not work. However, one could try to reparametrize p → f(p), which might lead to a reparametrized scattering phase shift depending only on the difference. Alternatively, if one is still able to find some other normal modes, the derivations done in this section will still be applicable with minor adaptations.

Let us briefly recap why the occupation function n gives the normal modes of the GHD equation in the case T(p,q) = T(p-q). The stationary GHD equation in conservation form is given by:

0 = ∂_x((∂_p E)^dr n) - ∂_p((∂_x E)^dr n) = [∂_x(∂_p E)^dr - ∂_p(∂_x E)^dr] n + (∂_p E)^dr ∂_x n - (∂_x E)^dr ∂_p n.

Let us write out the derivatives of the dressing explicitly:

∂_x (∂_p E)^dr = ∂_x ∂_p E + T(∂_x n (∂_p E)^dr + n ∂_x(∂_p E)^dr),
∂_p (∂_x E)^dr = ∂_p ∂_x E + T(∂_p n (∂_x E)^dr + n ∂_p(∂_x E)^dr).

Here we used T(p,q) = T(p-q), which implies [∂_p, T] = 0, to swap ∂_p and T. Using the definition of the dressing we can rewrite this as:

∂_x (∂_p E)^dr = [∂_x ∂_p E + T(∂_x n (∂_p E)^dr)]^dr,
∂_p (∂_x E)^dr = [∂_p ∂_x E + T(∂_p n (∂_x E)^dr)]^dr.
In particular, we find that:

∂_x (∂_p E)^dr - ∂_p (∂_x E)^dr = [T(∂_x n (∂_p E)^dr - ∂_p n (∂_x E)^dr)]^dr.

From this we can easily see that if (<ref>) holds, (<ref>) and thus also (<ref>) will vanish. The advantage of (<ref>) over (<ref>) is that solutions to (<ref>) can be described in terms of characteristics. Consider a particle moving along a solution of the characteristic ODE:

dx(t)/dt = (∂_p E)^dr(x(t),p(t)),
dp(t)/dt = -(∂_x E)^dr(x(t),p(t)).

Then it is easy to see that n(x(t),p(t)) is constant in time, i.e. n is constant along characteristics. This property is particularly useful for numerical simulations (see <ref>).

§.§ Effective Hamiltonian

Writing the stationary GHD equation in transport form is already very useful. However, it becomes even more interesting if one notices an additional fact. As a byproduct of the derivation of the stationary GHD equation in normal modes we also showed that ∂_x (∂_p E)^dr = ∂_p (∂_x E)^dr, see equation (<ref>). This implies that there exists a function H(x,p) with the property:

∇H(x,p) = (∇E)^dr,    where ∇ = (∂_x, ∂_p).

We call this function H(x,p) the effective Hamiltonian for the following reason: The stationary GHD equation can be written as

∂_p H ∂_x n = ∂_x H ∂_p n,

which is precisely the equation for a distribution of non-interacting particles evolving according to the Hamiltonian H. If there were an additional time derivative this equation would be the Liouville equation for that Hamiltonian. Since the time derivative is not there, this problem is now the scattering problem for non-interacting particles with Hamiltonian H(x,p). This idea is typical in integrable models and in GHD. One tries to rewrite the problem in terms of another effective problem for non-interacting particles where all quantities get dressed by the other particles (consider for instance the effective velocity v^eff(x), which is not the bare velocity of the particles, but gets altered due to the presence of other particles). In that respect we can view the Hamiltonian H(x,p) as the effective (or dressed) Hamiltonian under which the particles evolve. The effective Hamiltonian H(x,p) will be given by the bare Hamiltonian E(x,p) plus some other terms which originate from the interactions between particles. We want to note that H(x,p) is technically equal to E^Dr defined in <cit.>. However, there the authors only look at the p dependence of E^Dr(p), which works in general for any T(p,q). Since we are also interested in the x dependence, the situation is more complicated and only gives (<ref>) in the case T(p-q).

Let us now derive a formula for the effective Hamiltonian. We will start with the definition of the dressing operation:

∇H = ∇E + T(n ∇H).

The curl of n∇H is given by:

∂_x (n ∂_p H) - ∂_p(n ∂_x H) = ∂_x n ∂_p H - ∂_p n ∂_x H = 0,

which vanishes due to the stationary GHD equation. Therefore, we can find a function N(x,p) with the property:

∇N = n ∇H,

and thus:

∇H = ∇E + T ∇N.

Let us take the x component of this equation and integrate it from -∞ to x:

H(x,p) = H(-∞,p) + V(x,p) + T(N(x,p) - N(-∞,p)).

By looking at x → -∞ we can fix H(-∞,p):

H(-∞,p) = ∫_0^p dp' (∂_p E)^dr(-∞,p') + C,

where we are free to choose the constant C. We can interpret the effective Hamiltonian (<ref>) as follows: The first part H(-∞,p) describes the evolution according to the GHD equation without impurity. The second term V(x,p) describes the bare contribution of the impurity. The third term,

U(x,p) = T(N(x,p) - N(-∞,p)),

describes the contribution to the impurity coming from the interactions between particles.
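Since the stationary problem is now that of independent particles in the effective Hamiltonian H(x,p), individual quasi-particle trajectories can be obtained with any standard ODE integrator. The sketch below is a minimal illustration under assumptions: it uses an assumed effective barrier W(x) standing in for the combined impurity terms V + U (in the full problem U must be determined self-consistently, see below), integrates dx/dt = ∂_p H, dp/dt = -∂_x H, and checks that H is conserved along the trajectory, so that transmission or reflection is decided by comparing the energy with the barrier maximum:

```python
import numpy as np
from scipy.integrate import solve_ivp

Wbar = 0.4
W = lambda x: Wbar * np.exp(-x**2 / 2)           # assumed effective barrier profile
dW = lambda x: -x * W(x)                         # W'(x)

H = lambda x, p: 0.5 * p**2 + W(x)               # illustrative effective Hamiltonian

def rhs(t, y):
    x, p = y
    return [p, -dW(x)]                           # dx/dt = dH/dp, dp/dt = -dH/dx

for p0 in (0.7, 1.1):                            # below / above sqrt(2*Wbar) ~ 0.894
    sol = solve_ivp(rhs, (0, 40), [-10.0, p0], max_step=0.01)
    x_fin = sol.y[0, -1]
    drift = np.max(np.abs(H(sol.y[0], sol.y[1]) - H(-10.0, p0)))
    print(f"p0={p0}: {'transmitted' if x_fin > 0 else 'reflected'}, "
          f"max |H - H0| = {drift:.2e}")
```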
Note that while V(x,p) will vanish for large x, U(x,p) will vanish by definition for x → -∞, but in general does not need to vanish for x → ∞. In fact, this is important, as U(x→∞,p) = 0 implies that the state on the right side of the impurity is unaffected by the impurity, i.e. it signals full transmission. As we will establish in the next section, N(x,p) will not vanish for large p → ±∞ but rather approaches a constant. Therefore, if T(p) is an integrable function, i.e. ∫ dp |T(p)| < ∞, this allows us to remove the second term in U by redefining H(-∞,p) and U(x,p) → Ũ(x,p) = T N(x,p). This gives the following simplified expression for the Hamiltonian:

H(x,p) = E(x,p) + T N(x,p),

which agrees with (<ref>), up to the arbitrary constant. However, if T(p) is not integrable (for instance in the hard rods case T(p) = -d/2π) then this definition of U(x,p) would be ill-defined (Ũ(x,p) = ∞) and therefore we will stick to (<ref>) for the following general discussion. Let us give two remarks about the general form of U(x,p): First, note that if T(p) is integrable and N is bounded, then U is actually bounded, |U(x,p)| ≤ 2 sup_p N(x,p) ∫ dq |T(q)|. Second, note that U(x,p) is not a general function, but has to be in the image of T. An important example where this is useful is the hard rods case, where U(x) = T(N(x,·) - N(-∞,·)) = -d/2π ∫ dq (N(x,q) - N(-∞,q)) is a function of x only. The fact that U(x) depends only on x simplifies the problem so much that one can actually solve the impurity problem (see Section <ref>).

We will finish this section with a simple, yet physically very important observation: At a mesoscopic impurity all particles with momentum p are either transmitted or reflected with probability one. This follows directly from the existence of the effective Hamiltonian H(x,p). The particles have to follow deterministic trajectories. Therefore all incoming particles with a specific momentum will follow the same trajectory and eventually leave the impurity at the same point. Note that this is substantially different from scattering at microscopic impurities, where particles are reflected or transmitted with a certain probability (for instance recall scattering at a rectangular potential barrier from your undergraduate quantum mechanics course). We conclude that this is a limitation of the mesoscopic impurity model and that non-deterministic scattering is an effect which appears only if the impurity is small enough. On the other hand, the amount of non-deterministic scattering can be used to quantify how well an impurity can be described by a mesoscopic impurity.

§.§ The scattering problem in a general Hamiltonian

In the previous section we introduced the effective Hamiltonian:

H(x,p) = H(-∞,p) + V(x,p) + U(x,p)

with:

U(x,p) = T(N(x,p) - N(-∞,p)).

So far this result is not useful, since we do not know N(x,p). For that we need to solve the scattering problem in the effective Hamiltonian H(x,p): We know that both n(x,p) and N(x,p) satisfy the following equation

∂_p H ∂_x N = ∂_x H ∂_p N.

As we described earlier, this equation implies N(x,p) is constant along characteristics:

dx(t)/dt = ∂_p H,
dp(t)/dt = -∂_x H.

However, we also know that a basic property of Hamiltonian systems is that the Hamiltonian is conserved along trajectories. This means a trajectory follows a level set of the Hamiltonian, H(x(t),p(t)) = const. Now we know that N and H are both constant along a trajectory, which implies we can locally write N(x,p) = N(H(x,p)).
Locally here means that this is true for a domain Ω ⊂ ℝ² which does not contain any critical points (∇H = 0) and does not contain more than one connected part of each level set. The reason why this is important is that globally the level sets of H(x,p) can have multiple disconnected components. If one starts on one of these components and follows the level set one never reaches the other disconnected components, and thus there is no reason why N(x,p) should have the same value on these other components.

Now let us assume we know all the incoming data n_L(p) = n(-∞,p) and n_R(p) = n(∞,p) on the left and the right side. As we discussed beforehand, this typically requires the full knowledge about the state, including the outgoing data as well. For simplicity we will also restrict to the case where p^dr(±∞,p) is a monotonically increasing function. This has the advantage that we can be sure that there is only one p which corresponds to zero velocity, and on the left side all particles with higher momenta are incoming and all particles with lower momenta are outgoing (and similarly on the right side). If p^dr(±∞,p) is not monotonically increasing then we might have several momenta corresponding to zero velocity, which complicates the following reasoning. Nevertheless, by introducing more sets Σ, it will be possible to also include that case. Starting from the incoming data on both sides we can first compute the Hamiltonians H_L/R(p) on both sides (which we know up to a constant) and then we can define the two functions:

Ñ_L(h) = ∫_{H_L(p_L)}^{h} dh' n_L(H_L^{-1}(h')),
Ñ_R(h) = ∫_{H_R(p_R)}^{h} dh' n_R(H_R^{-1}(h')),

where p_{L/R} are the momenta corresponding to zero velocity on both sides and H_{L/R}^{-1}(h) is the inverse function of H_{L/R}(p), where we choose the incoming branch, i.e. H_L^{-1}(h) > p_L and H_R^{-1}(h) < p_R.
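Numerically, Ñ_L can be evaluated by changing variables back to momentum, Ñ_L(h) = ∫_{p_L}^{H_L^{-1}(h)} dp H_L'(p) n_L(p) on the incoming branch. A small sketch under assumptions (an asymptotic Hamiltonian of the form H_L(p) = (p - p_L)²/2 with p_L = 0, and an incoming occupation function of the form used in the explicit hard rods example later in the paper):

```python
import numpy as np

p_L = 0.0                                          # assumed zero-velocity momentum
H_L = lambda p: 0.5 * (p - p_L)**2                 # assumed asymptotic Hamiltonian
dH_L = lambda p: p - p_L                           # H_L'(p)
n_L = lambda p: 0.5 * np.exp(-50 * (p - 1)**2) * (p > 0.5)   # example incoming state

def N_tilde_L(h, num=2000):
    """~N_L(h) = int_{p_L}^{H_L^{-1}(h)} dp H_L'(p) n_L(p), incoming branch H_L^{-1}(h) > p_L."""
    p_up = p_L + np.sqrt(2 * h)                    # incoming branch of H_L^{-1}
    p = np.linspace(p_L, p_up, num)
    return np.trapz(dH_L(p) * n_L(p), p)

print(N_tilde_L(0.5), N_tilde_L(2.0))
```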
We will now describe how to construct N(x,p) from the incoming data in a general Hamiltonian. This construction is also illustrated in Figure <ref> for the example Hamiltonian depicted in Figure <ref>. Given a general Hamiltonian we can group its level sets into three types of sets: First, we define Σ_{L/R}, which are all the level sets that originate from incoming data on either the left or the right side. We think of these as the regions which particles are allowed to enter. For instance, Σ_L consists of level sets which start from x = -∞ on the left side with positive velocity and then either extend all the way to the right side (i.e. the particle is transmitted) or bend back and extend to the left side again (i.e. the particle is reflected). Then we have a collection of sets Λ^{L/R}_i, which are `islands' in either Σ_{L/R} where the energy H(x,p) is too high (or too low) for particles to enter. The Λ^{L/R}_i consist of closed loops of level sets. Note that we associate to each of these Λ^{L/R}_i an energy h^{L/R}_i = H(∂Λ^{L/R}_i), which corresponds to the energy of particles `flowing' around Λ^{L/R}_i. The last type of sets are non-allowed regions similar to the Λ^{L/R}_i which lie between Σ_L and Σ_R. Again they consist of closed loops of level sets. However, unlike the Λ^{L/R}_i these islands will not give a contribution, as we will define N(x,p) = 0 on them. The two sets Σ_{L/R} are separated by a curve γ(s) ∈ ℝ². This curve might not necessarily be unique; it is only required that Σ_L lies on one side of the curve and Σ_R on the other. We will parametrize this curve in such a way that γ_x(s→±∞) = ±∞. In many cases the curve γ(s) will be a function of x only, in which case we write γ = (x,p_0(x)).

Note that p_0(-∞) = γ_p(-∞) has the interpretation of the largest momentum incoming from the left side which is reflected by the impurity and the smallest momentum coming from the right side which is transmitted by the impurity (and similarly for p_0(∞) = γ_p(∞) on the right side). We will fix the integration constants in such a way that both N(γ(s)) = 0 and the Hamiltonian H(γ(s)) = 0 on this curve. Once these sets have been characterized one can explicitly define N(x,p):

N(x,p) =
  N_L(H(x,p)) = Ñ_L(H(x,p) - H(-∞,p) + H_L(p)) - Ñ_L(H_L(p_0(-∞)))    for (x,p) ∈ Σ_L,
  N_R(H(x,p)) = Ñ_R(H(x,p) - H(∞,p) + H_R(p)) - Ñ_R(H_R(p_0(∞)))    for (x,p) ∈ Σ_R,
  N_{L/R}(h^{L/R}_i)    for (x,p) ∈ Λ^{L/R}_i.

The first two lines just copy the incoming data into the regions Σ_{L/R}. For instance, on Σ_L we know that N(x,p) = Ñ_L(H(x,p) + C) + D, where C and D are constants we are allowed to choose. We set C = H_L(p) - H(-∞,p) and D = -Ñ_L(H_L(p_0(-∞))), to ensure that N(γ(s)) = H(γ(s)) = 0. In the non-allowed regions Λ^{L/R}_i we know that n(Λ^{L/R}_i) = 0 and thus N(Λ^{L/R}_i) = const. The constant is determined by its value on the boundary ∂Λ^{L/R}_i, which corresponds to a level set of the Hamiltonian with energy h^{L/R}_i.

This complicated procedure allows us to compute N(x,p) given H(x,p). In turn we can now use this to compute U(x,p): Let us define Σ_{L/R}(x) = {p : (x,p) ∈ Σ_{L/R}} and Λ^{L/R}_i(x) = {p : (x,p) ∈ Λ^{L/R}_i}. Then we can rewrite definition (<ref>) as:

U(x,p) = ∑_{r∈{L,R}} [ ∫_{Σ_r(x)} dq T(p-q) N_r(H(x,q)) + ∑_i ∫_{Λ^r_i(x)} dq T(p-q) N_r(h^r_i) - ∫_{Σ_r(-∞)} dq T(p-q) N_r(H(-∞,q)) ].

Note that if T(p) is not integrable, then the individual integrals might not converge, since N(p→∞) → const. In this case one has to combine the integrals into one integral over the sum of all three terms (which will then be finite since all constants at infinity cancel). We can also insert expression (<ref>) for H(x,p):

U(x,p) = G[U](x,p) = ∑_{r∈{L,R}} [ ∫_{Σ_r(x)} dq T(p-q) N_r(H(-∞,q) + V(x,q) + U(x,q)) + ∑_i ∫_{Λ^r_i(x)} dq T(p-q) N_r(h^r_i) - ∫_{Σ_r(-∞)} dq T(p-q) N_r(H(-∞,q)) ].

This is a functional fixed point problem for U(x,p) with fixed point functional G[U]. Note that this fixed point problem is quite complicated, in particular since the shape of the domains Σ_{L/R} and Λ^{L/R}_i depends on U(x,p). In case we manage to find a solution to the fixed point equation we can write the solution to the stationary GHD equation (<ref>) as follows:

n(x,p) =
  n_L(H_L^{-1}(H(x,p) - H(-∞,p) + H_L(p)))    for (x,p) ∈ Σ_L,
  n_R(H_R^{-1}(H(x,p) - H(∞,p) + H_R(p)))    for (x,p) ∈ Σ_R,
  0    for (x,p) ∈ Λ^{L/R}_i.

Remark: The fixed point problem also has contributions from the regions Λ^{L/R}_i, even though there n(x,p) = 0. This is because the defining relation ∇N = n∇H = 0 does not imply that N(x,p) = 0, but only that N(x,p) = const in Λ^{L/R}_i. This is mathematically similar to the `Aharonov–Bohm effect' in quantum mechanics <cit.>.

§.§ Discussion of the fixed point equation

The fixed point equation (<ref>) is quite complicated and can in general only be solved numerically. However, even without knowing the solution explicitly one can still establish some basic properties.

§.§.§ Existence of solution

It is physically clear that there should be some solution to the scattering problem given the incoming data. However, as we discussed earlier it is not clear how to specify the incoming data in interacting models and therefore it is also not clear whether a solution will always exist. In the following we will give a mathematical argument why we expect that the fixed point equation should always have a solution for given incoming data N_{L/R}. Let us rewrite it as a functional equation:

F[U](x,p) = U(x,p) - G[U](x,p) = 0.
First, observe that this functional changes continuously under smooth deformations of U(x,p). This is obvious as long as changing U does not create or destroy domains Λ^{L/R}_i or change their energies h^{L/R}_i. Then it only shifts around the boundaries in a continuous fashion, which gives a continuous change of the functional F[U]. Also in the other cases F is continuous. For instance, when the change in U creates a new domain Λ^{L/R}_i we tear a `hole' into Σ_{L/R} and then fill the interior with a constant value of N s.t. the function N is still continuous. This operation also only changes N(x,p) continuously and thus F[U] is continuous.

Now consider a bump function ΔU(x,p) with compact support and let us study how ∫ dx dp F[U+λΔU](x,p) ΔU(x,p) behaves as λ → ±∞. In both cases it is easy to see that

∫ dx dp F[U+λΔU](x,p) ΔU(x,p) → ±∞.

This is particularly obvious if T(p) is integrable, as the interaction-induced term is then bounded. But also when T(p) is not integrable the interaction-induced term will still approach a finite value as λ → ±∞. No matter the sign of λ, as |λ| → ∞ we have that |U(x,p) + λΔU(x,p)| will be very large around the support of ΔU and thus an island Λ^{L/R}_i is formed around the support of ΔU. The value of N(x,p) on this island is constant and will approach N(∂supp(ΔU)), which is independent of λ for large |λ|. Thus, as λ → ±∞, N(x,p) approaches a finite function and therefore the interaction-induced term is bounded in λ, which implies (<ref>).

We have now shown that for any ΔU we have ∫ dx dp F[U+λΔU](x,p) ΔU(x,p) → ±∞. Let us take the formal limit where ΔU approaches a delta function, which gives:

F[U+λδ(·-x)δ(·-p)](x,p) → ±∞.

Of course the Hamiltonian H(x,p)+λδ(·-x)δ(·-p) does not make proper sense, which is why we choose to regularize it by considering bump functions. We now view U(x,p) as a vector U and δ(·-x)δ(·-p) as a basis of this functional vector space. A finite-dimensional version of statement (<ref>) is:

F_k(U + λ e_k) → ±∞ for all k as λ → ±∞.

If (<ref>) holds in one dimension and F is continuous, then the intermediate value theorem shows that there must be a zero of F(U) on the real line. In the finite-dimensional case there is a generalization of the intermediate value theorem, called the Poincaré–Miranda theorem, which (up to some mathematical technicalities) establishes that (<ref>) implies the existence of a zero of F[U]. This indicates that (<ref>) should imply the existence of a solution F[U] = 0 as well (however, this is not rigorous, as results in finite dimensions possibly do not carry over to the infinite-dimensional setting).

§.§.§ Discussion of uniqueness

While the existence of solutions is physically expected, there is no reason why solutions should be unique. Non-uniqueness of solutions means that the solution will depend on the scattering history. Therefore studying the uniqueness of solutions gives physical insights. For that it is instructive to compute the Jacobian of F[U], i.e. how F is perturbed by small perturbations U(x,p) → U(x,p) + δU(x,p): Let us again consider a δU which is a bump function. The important observation here is that as long as δU does not affect any critical points of H(x,p) (i.e. it does not move them around or create/destroy them) the boundaries of the sets Σ_{L/R} and Λ^{L/R}_i might change in a continuous fashion, but the topology of the sets does not change. In particular the energy levels h^{L/R}_i associated to the Λ^{L/R}_i remain unchanged.
For such a perturbation we therefore find:

(JF[U])δU = d/dα F[U+αδU]|_{α=0} = δU(x,p) - ∑_{r∈{L,R}} [ ∫_{Σ_r(x)} dq T(p-q) n_r(H(-∞,q) + V(x,q) + U(x,q)) δU(x,q) ] = (1-Tn)δU,

where we used (<ref>) to identify the solution n(x,p). Again we let δU(x,p) → δ(x-x_0)δU(p) approach a delta function in space, which gives:

(JF[U])(δ(x-x_0)δU) = δ(x-x_0)(1-Tn(x_0))δU,

where n(x_0,p) is the density at x_0. From this we can see that perturbations at x_0 only affect the result at x_0, but not at any other y ≠ x_0. This means that the fixed point equation decouples for different x (as long as there are no critical points at x). Uniqueness of the fixed point equation is now linked to whether the Jacobian 1-Tn(x_0) is invertible. We can identify this operator as the inverse dressing operator (<ref>), which we know should be invertible if n represents a physical state. Therefore, as long as we stay inside the set of physical states, if we find a solution at x_0 it will be a unique one. There can be multiple solutions, but they either need to be separated by non-physical states or by changes of the critical points. In particular, in quantum mechanical models (like the Lieb-Liniger model), where T(p) is integrable with ∫ dp T(p) = 1 and in addition the occupation function is bounded, n(x,p) < 1 (i.e. every quantum number can be occupied at most once), the Jacobian will always be invertible (in that case the operator norm ‖Tn‖_∞ < 1 and it is a standard result that 1-Tn is invertible).

The above arguments only partially answer the question of uniqueness, since there we excluded the case when δU affects critical points of the Hamiltonian. They only tell us that the Jacobian is invertible on that subspace. We also need to discuss what happens in case δU affects a critical point. This is hard to analyze in general, since changing U at a critical point will affect the solution globally. For instance, when δU acts on a critical point on the boundary of a Λ^{L/R}_i it will change the energy h^{L/R}_i associated to Λ^{L/R}_i, which (since the boundary is a level set of H(x,p) with that energy h^{L/R}_i) will move the whole boundary ∂Λ^{L/R}_i. However, the set of critical points of the Hamiltonian will usually be finite and determine the topology of the problem. If we know the positions of the critical points, the values p_0(±∞) = γ_p(±∞) and the energies h^{L/R}_i associated to the islands, we can construct the sets Σ_{L/R} and Λ^{L/R}_i. In a model where we can ensure that the dressing is always well defined, this finite amount of information is enough to construct a full unique solution U(x,p) from it. In practice this allows us to simplify the functional fixed point problem into a finite-dimensional fixed point problem, which is much easier to analyze. We will use this strategy later in the case of the hard rods model to show that for positive rod length the solution is unique, while for negative rod length we give a specific example where the solution is not unique, see Section <ref>.

Remark: The discussion of uniqueness here considers only the uniqueness of the fixed point problem. The construction of the fixed point problem requires the knowledge about which particles are incoming. As we already discussed, one can only determine this if one knows the outgoing particles as well. Thus, even if the fixed point problem always has a unique solution there could be multiple configurations of incoming and outgoing particles.
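The invertibility criterion above can be checked numerically for a given kernel and occupation function by discretizing the operator Tn and inspecting its spectral radius, or equivalently the smallest singular value of 1 - Tn. A minimal sketch, reusing the Lieb-Liniger-type kernel assumed in the earlier code block (the occupation function is again an illustrative assumption with n < 1):

```python
import numpy as np

P = 1001
p = np.linspace(-15, 15, P)
dp = p[1] - p[0]

c = 1.0
T = (1.0 / (2 * np.pi)) * 2 * c / ((p[:, None] - p[None, :])**2 + c**2)
n = 0.8 * np.exp(-p**2)                      # bounded occupation, n < 1 (assumption)

Tn = T * n[None, :] * dp                     # discretized operator T n
spectral_radius = np.max(np.abs(np.linalg.eigvals(Tn)))
smallest_sv = np.min(np.linalg.svd(np.eye(P) - Tn, compute_uv=False))

print(f"spectral radius of Tn: {spectral_radius:.3f}")   # < 1 => 1 - Tn invertible
print(f"smallest singular value of 1 - Tn: {smallest_sv:.3f}")
```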
§.§.§ Solution for weak potentials

Although the fixed point equation cannot be solved explicitly for a general model, it is a non-perturbative method. A different, perhaps more standard, way of approaching the impurity problem would be to treat the impurity via perturbation theory in the impurity strength λ. In general, the problem with perturbation theory is that it is often not clear whether the perturbative expansion indeed converges to the actual solution, or whether it misses non-perturbative effects. In this section we establish that for small impurity strength λ the scattering will vanish identically. This implies that a perturbative expansion in λ would yield 0 to all orders. We conclude that scattering at mesoscopic impurities is fully non-perturbative in nature.

Observation: Consider a state n(p) s.t. n(p) is identically zero in a finite region around the zero-velocity momentum, i.e. the density is only non-zero for |∂_p E_0| > C. Consider any (bounded) potential V(x,p) and scale it λV(x,p) with λ → 0. Then there exists a λ_0 > 0 s.t. for all |λ| < λ_0 there exists a solution to (<ref>) where the states on the left and right side coincide. This means that all particles are unaffected by the impurity and no particles are reflected.

Note that this is clear for an impurity for free particles: For instance, consider the impurity V(x,p) = V̅ e^{-x²/2}. Then all particles with momentum p > √(2λV̅) will be transmitted, while all particles with momentum p < √(2λV̅) will be reflected. If all particles have a momentum bounded from below, |p| > p_0, then for λ < λ_0 = p_0²/(2V̅) all particles are transmitted. In interacting models the idea is similar: The interaction changes the Hamiltonian, but if λ is small enough this change is negligible and thus the free-particle result applies as well. While this establishes that particles are not reflected, it does not explain why the transmitted particles are also unaffected. For instance, the momenta of the outgoing particles could be shifted. This can only be checked by explicitly constructing the solution, which we do in <ref>.

§ THE HARD ROD CASE

We would like to show how the ideas from the last section can be applied to a specific model: the hard rods model. In this model we have φ(p-q) = -d and E_0(p) = p²/2. The physical hard rods model is given by positive rod length d > 0, but we will also allow d < 0. For negative hard rods d does not describe the rod size, but should rather be interpreted as the time delay for the scattering of two particles, which leads to an effective negative position shift -d. Before we start let us note a speciality of the hard rods model. One can explicitly evaluate the dressing (<ref>):

f^dr(p) = f(p) - d ∫ dq f(q) ρ(q) = f(p) - (d/2π ∫ dq n(q) f(q))/(1 + d/2π ∫ dq n(q)),

and the effective velocity:

v^eff(p) = (p - d ∫ dq q ρ(q))/(1 - d ∫ dq ρ(q)) = (1 + d/2π ∫ dq n(q)) p - d/2π ∫ dq q n(q),

which in particular means that the momentum that corresponds to zero velocity is p̄ := d ∫ dp p ρ(x,p). Using p^dr = p - p̄ = ∂_p H(±∞,p) we can explicitly compute the effective Hamiltonian outside the impurity:

H(±∞,p) = (p - p̄)²/2 + C_±,

where C_± are the integration constants on both sides. Note that p̄ has the same value on both sides, since the total current satisfies ∫ dp v^eff(p) ρ(p) = ∫ dp p ρ(p) = p̄/d and the total current is independent of x (see equation (<ref>)). Due to T(p-q) = -d/2π we find that U(x,p) = U(x) of (<ref>) is a function of x only. Compared to the general case, where U(x,p) can depend on both x and p, this already simplifies the problem.
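For hard rods the closed-form expressions above are easy to implement and provide the building blocks for the scattering analysis that follows. A short sketch (the rod length and the occupation function are arbitrary example choices):

```python
import numpy as np

d = 0.3                                            # rod length (example)
p = np.linspace(-5, 5, 4001)
n = 0.5 * np.exp(-50 * (p - 1)**2) * (p > 0.5)     # example occupation function

def hard_rod_quantities(p, n, d):
    """Explicit hard rods dressing: 1^dr, rho, v^eff and zero-velocity momentum p_bar."""
    I0 = np.trapz(n, p) / (2 * np.pi)              # (1/2pi) int dq n(q)
    I1 = np.trapz(p * n, p) / (2 * np.pi)          # (1/2pi) int dq q n(q)
    one_dr = 1.0 / (1.0 + d * I0)
    rho = one_dr * n / (2 * np.pi)
    v_eff = (1.0 + d * I0) * p - d * I1
    p_bar = d * np.trapz(p * rho, p)               # p_bar = d int dp p rho(p)
    return one_dr, rho, v_eff, p_bar

one_dr, rho, v_eff, p_bar = hard_rod_quantities(p, n, d)
print(p_bar, v_eff[np.argmin(np.abs(p - p_bar))])  # v_eff should vanish near p_bar
```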
We will now argue that one can simplify the fixed point equation even further and finally arrive at a finite-dimensional fixed point problem: First, let us recall that U(x) satisfies the following fixed point equation:

U(x) = G_x(U(x)) = -d/2π ∑_{r∈{L,R}} [ ∫_{Σ_r(x)} dq N_r(H(-∞,q) + V(x,q) + U(x)) + ∑_i ∫_{Λ^r_i(x)} dq N_r(h^r_i) - ∫_{Σ_r(-∞)} dq N_r(H(-∞,q)) ].

As we discussed before (see Section <ref>), unless there is a critical point of H(x,p) at x, a change of U(x) will only affect the fixed point equation at x, but not at any other position. Therefore, fixing one of those x where there is no critical point of H(x,p), we can rewrite the fixed point equation as:

F_x(U) = U - G_x(U) = 0,

which is a function of one variable only. Furthermore, by taking the derivative of F_x we find for d > 0:

dF_x(U)/dU = 1 + d/2π ∑_{r∈{L,R}} ∫_{Σ_r(x)} dq n_r(H(-∞,q) + V(x,q) + U) ≥ 1,

and therefore F_x(U) is monotonically increasing. Here n_{L/R}(h) = dN_{L/R}(h)/dh, which we identify with the solution (<ref>), i.e. n(x,p) = n_{L/R}(H(x,p)) on Σ_{L/R}. This implies uniqueness of the solution U = U(x). Given the location of the critical points we can therefore compute U(x) at all other positions x. The only missing pieces of information are the locations of the critical points, p̄ and the integration constants C_± from (<ref>). Note that this is a finite set of information. Therefore, it should be possible to reduce the fixed point equation to a finite-dimensional problem. This is an abstract statement, but we will make the ideas more explicit in an example. Also note that the above discussion was for d > 0; for negative d there is no mathematical reason why (<ref>) should be positive. Therefore, there could potentially be more than one solution.

§.§ Explicit example: Potential does not depend on p and incoming particles from the left

In the following we would like to explain this procedure in the simplest example: An impurity V(x) which only depends on position x. For simplicity let us also restrict to V(x) ≥ 0 which has one unique maximum V̅ at x = 0 and V'(x) > 0 for x < 0 and V'(x) < 0 for x > 0. Note that due to the local rescaling invariance all of these impurities will give rise to the same scattering (see Section <ref>). To simplify the analysis even further, let us now look at the situation where there are no incoming particles from the right, i.e. n_R(p) = 0. This example can easily be extended to the case where particles also come in from the right, but then the discussion of uniqueness becomes much more complicated. Since there are only particles coming in from the left it is convenient to redefine the zero value of the Hamiltonian to be at the zero-velocity momentum p̄ at x = -∞. This means that the Hamiltonian is given by:

H(x,p) = (p-p̄)²/2 + V(x) + U(x) = (p-p̄)²/2 + W(x),

where we combined W(x) = V(x) + U(x). To appreciate how simple the Hamiltonian (<ref>) is, the reader might find it helpful to define v = p - p̄ and write the Hamiltonian as:

H(x,v) = v²/2 + W(x).

This is just the Hamiltonian of a classical particle moving in the effective potential W(x). Scattering at such a potential is very simple. Let us denote by W̅ = max_x W(x) the highest value of the effective potential. As we will observe later this maximum is at x = 0, which is the same position as the maximum of V(x). Then all incoming particles with kinetic energy smaller than W̅ will get reflected and all other particles will get transmitted.
This can be expressed in the following way (note that we slightly redefine N(x,p) due to the redefinition of H(x,p)):

N(x,p) = N_L(H(x,p)) for p > p_0(x),    N_L(W̅) for p < p_0(x).

Here N_L and the boundary p_0(x) are explicitly given by:

N_L(h) = ∫_{p̄}^{p̄+√(2h)} dp (p-p̄) n_L(p) = ∫_0^{√(2h)} dv v n_L(p̄+v),
p_0(x) = p̄ + sgn(x) √(2(W̅-W(x))).

Note that in the region p < p_0(x) there are no particles, but still N(x,p) is non-zero, since N(x,p) has to be a continuous function. From the fixed point equation for U(x) we can write down an equation for W(x):

F_x(W(x)) = W(x) + d/2π [ ∫_{p_0(x)}^∞ dp N_L((p-p̄)²/2 + W(x)) - ∫_{p_0(-∞)}^∞ dp N_L((p-p̄)²/2) + (p_0(x)-p_0(-∞)) N_L(W̅) ] = V(x).

The last term comes from the -N(-∞,q) term in (<ref>). Note that since N_L approaches a non-zero constant at large momenta, both integrals in (<ref>) are formally infinite; however, the divergent parts of the two integrals cancel. Equation (<ref>) is particularly useful since V(x) only appears on the right-hand side. Let us take the derivative of F_x(W):

dF_x(W)/dW = 1 + d/2π ∫ dp n_L((p-p̄)²/2 + W(x)),

where we defined n_L(h) = dN_L(h)/dh = n_L(p̄+√(2h)). For d > 0 this derivative is always positive; for d < 0 it might become negative as well. However, note that for a physical state dF_x(W)/dW coincides with 1/1^dr(x), which has to be positive. From equation (<ref>) we can compute the derivative of W(x) w.r.t. x:

dW(x)/dx = (dF_x(W)/dW)^{-1} dV(x)/dx = 1^dr(x) V'(x) = V'(x)/(1 + d/2π ∫ dp n(x,p)).

This equation has a nice interpretation: The slope of the effective potential (i.e. the force) is the slope of the bare potential, but modified due to the interaction with other particles. In particular, for d > 0 the slope is decreased (the potential barrier is lowered, thus more particles will be transmitted), while for d < 0 the slope is increased (the potential barrier is higher, thus fewer particles will be transmitted). This is in line with the intuitive picture we discussed before in Section <ref>. From equation (<ref>) we can also infer that the highest point of the effective potential W(x) is at the highest point x = 0 of V(x) as well. Let us evaluate (<ref>) at x = 0:

F_0(W̅) = W̅ + d/2π [ ∫_{p̄}^∞ dp N_L((p-p̄)²/2 + W̅) - ∫_{p̄-√(2W̅)}^∞ dp N_L((p-p̄)²/2) + √(2W̅) N_L(W̅) ] = V̅.

Since F_0 is continuous and F_0(0) = 0 and F_0(W̅→∞) → ∞, we know that there will exist at least one solution W̅ > 0 for V̅ > 0. Again let us compute the derivative:

dF_0(W̅)/dW̅ = 1 + d/2π [ ∫_{p̄}^∞ dp n_L((p-p̄)²/2 + W̅) + √(2W̅) n_L(W̅) ].

This is always positive for d > 0 and thus a unique solution exists. For d < 0 this is not clear: While ∫ dp n(x,p) < 2π/|d| is bounded for physical reasons (1^dr has to be positive), there is no bound on the second term n_L(W̅) = n_L(p̄+√(2W̅)). It is not hard to construct a situation where there are multiple solutions W̅ for some value V̅. Recall, however, that this solution is only a solution to the fixed point equation, which is not the full solution to the scattering problem (see the discussion at the end of Section <ref>). In fact, even if there are multiple solutions to (<ref>), these solutions will be physically different since they give rise to different p̄.
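For a given assumed p̄ and incoming state n_L, the barrier-top equation F_0(W̅) = V̅ is a single scalar equation and can be solved by standard root finding (or all roots located by scanning the sign of F_0(W̅) - V̅). The following sketch assumes p̄ = 0, a Gaussian incoming state of the form used in the hysteresis example below, and illustrative values for d and V̅; the semi-infinite integrals are truncated at a finite cutoff, relying on the cancellation of the divergent parts discussed above:

```python
import numpy as np
from scipy.optimize import brentq

d, p_bar, Vbar = 0.3, 0.0, 0.4                      # assumed parameters
n_L = lambda p: 0.5 * np.exp(-50 * (p - 1)**2) * (p > 0.5)

def N_L(h, num=1000):
    """N_L(h) = int_0^sqrt(2h) dv v n_L(p_bar + v)."""
    v = np.linspace(0.0, np.sqrt(2 * h), num)
    return np.trapz(v * n_L(p_bar + v), v)

def F0(Wbar, pmax=20.0, num=2000):
    """Barrier-top function F_0(Wbar); a root of F_0(Wbar) = Vbar gives the barrier height."""
    p1 = np.linspace(p_bar, p_bar + pmax, num)
    I1 = np.trapz([N_L(0.5 * (q - p_bar)**2 + Wbar) for q in p1], p1)
    p2 = np.linspace(p_bar - np.sqrt(2 * Wbar), p_bar + pmax, num)
    I2 = np.trapz([N_L(0.5 * (q - p_bar)**2) for q in p2], p2)
    return Wbar + d / (2 * np.pi) * (I1 - I2 + np.sqrt(2 * Wbar) * N_L(Wbar))

Wbar = brentq(lambda W: F0(W) - Vbar, 1e-9, 5.0)
print(Wbar)   # effective barrier height; for d > 0 it lies below Vbar
```

For d < 0 the same scan can return several roots, which is the origin of the hysteresis discussed below.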
First note that any solution to the fixed point equation (<ref>) can always be upgraded to a full solution by a shift of the momentum variable: In fact, if we define v = p-dp and parametrize the initial condition in terms of nv(v) = n(dp+v) the parameter dp completely drops out of the equation. Then if we take any solution nv(x,p) of the new equation, compute its p = ∫v nv(v) and reintroduce the momentum p = dp+v we always find a solution n(x,p) stationary GHD equation. Now let us consider a scattering scenario where we know the distribtion of incoming particles as a function p. At this point we find that the problem is ill-defined. In order to specify which particles are incoming we need to know the momentum corresponding to zero velocity dp. Fortunately, in case of only incoming particles from the left there is a situation where we can uniquely specify the initial data. For d>0 the trick is to observe that the more particles are reflected the smaller dp becomes. Thus dp is largest for full transmission. Therefore, if we consider a state n(p) which is identically zero for all momenta smaller than some cutoff p_0 and this cutoff is larger than p_0 > dp = d∫ppn(p)1+d∫pn(p) then no matter the amount of reflected particles all incoming particles will always have positive velocity. Note that for d<0 we can always choose p_0=0 since dp≤ 0. In the present case of an impurity which does not depend on x and incoming particles only from the left, one can derive the following compact expression for the zero velocity momentum: dp = d[NL(∞)-NL(W̅)]. Equation (<ref>) together with (<ref>) are a two-dimensional closed system of equations: By explicit computation one can check that the determinant of the Jacobian of this system is always positive if d>0. We conclude that for d>0 there always exists a unique solution to the scattering problem. §.§ Hysteresis for negative hard rods d < 0 Contrary to hard rods with positive size in the case of negative rod length there can indeed exist multiple solutions to the two-dimensional system of equations. We would like to demonstrate that using an explicit example. We study the scattering of nL(p) = θ(p-12)12e^-50(p-1)^2 at the potential V(x) = V̅e^-x^22. We plot the values W̅ of all solutions as function of V̅ in Figure <ref> (left) and compare it to a numerical simulation. In the range V̅≈ [0.36,0.41] there are three different solutions, only the outer two are stable. The numerical simulation was done using the algorithm described in <ref>. In addition we adiabatically ramp V̅ up and later down: after each iteration we increase/decrease V̅ (ΔV̅ = 0.001 per step). For increasing V̅ the simulation follows the lower branch, while for decreasing V̅ the simulation follows the upper branch. On the right side of Figure <ref> we also show the trajectories of particles and the shape of the effective potential. For small V̅ only few particles are reflected, but as 1dr(x) = 11+d∫qn(x,q) > 1 the effective potential is already larger than the bare one by (<ref>). When we increase V̅ over the threshold value V̅≈ 4.1 the situation suddenly changes: Now a considerable fraction of the particles are reflected, which increases 1dr(x) on the left side, which in turn increases W(x). Therefore even fewer particles can penetrate the impurity. This feedback loop only stops when most of the particles are reflected and only a negligible fraction of the particles is transmitted. 
If we now decrease V̅, this situation remains stable: The reflected particles contribute to a large 1dr(x) and thus to a high potential W(x), which most particles cannot penetrate. Only when we decrease V̅ below the threshold value V̅≈ 3.6 do too many particles get transmitted, and the system jumps back to the original configuration with few reflected particles. This explains the hysteresis observed in the numerical simulation. § SIMULATION OF THE GHD EQUATION WITH IMPURITY BOUNDARY CONDITION We would like to finish this paper with a demonstration that the GHD equation including the boundary conditions obtained from the mesoscopic impurity indeed gives the correct prediction for the evolution of the quasi-particle density in the large-scale limit L →∞. We study the evolution of hard rods starting from an initial state characterized by (in macroscopic coordinates) n(0,xmacro,p) = θ(-3<xmacro<-0.5)θ(0.5<p<2) 7.5 e^(-(xmacro+1.5)^2/2 - 25(p-1)^2/2), both via a simulation of the GHD equation and via direct simulation of the microscopic hard rods model. The impurity in this model is given by a potential V(xmicro) = λ e^(-(xmicro/Limp)^2/2) (in microscopic coordinates), where Limp = L^(3/4)/10 is scaled with the macroscopic scale L. The relation between the microscopic and macroscopic coordinates is as follows: xmicro = L xmacro and tmicro = L tmacro. We choose the height of the impurity to be λ = 0.4, the hard rod length to be d=0.3 and the simulation time to be Tmacro=3, after which most of the particles have scattered. The GHD simulation with impurity boundary condition is performed using the hybrid algorithm described in detail in <ref>. Space, momentum and time are discretized with Δ xmacro = 0.006, Δ p = 0.004 and Δ tmacro = 0.001 (for further details refer to <ref>). At these values the GHD simulation is well converged. For the hard rods simulation at a given L we first distribute the hard rods randomly according to their initial distribution ρ(0,xmacro,p) = (1/(2π)) 1dr(xmacro) n(0,xmacro,p) and then evolve them using simple molecular dynamics with Δ tmicro = 0.01. Since the particles are placed randomly at the start, the resulting final state will also be random. Therefore, in order to gain sufficient statistics, this simulation is repeated 100 times for each L and averaged. The results of both simulations are compared in Figure <ref>, where we plot a) the resulting ρ(tmacro=3,xmacro,p) for both simulations as a density plot and b) the total density of particles in two regions R (reflected particles) and T (transmitted particles). Already in the density plot a) one can see that the quasi-particle density obtained via GHD with impurity boundary condition agrees quite well with the averaged density from the hard rods simulation. The results can also be compared more quantitatively in b). The data points and their error bars are the mean and standard deviation of the hard rods Monte Carlo results for several L. We can see that this agrees well with the GHD simulation (solid line) in both regions. We conclude that the GHD equation with impurity boundary conditions indeed correctly describes the large-scale evolution of an integrable model with a mesoscopic impurity. Further numerical results are given in <ref>: We simulate the hard rods GHD equation starting from the same initial state and the same impurity, but for different hard rod lengths d = -0.3, 0, 0.3, and qualitatively discuss the differences in how they scatter at the impurity. § CONCLUSION In this paper we studied mesoscopic impurities in the GHD framework. 
First, we discussed how to include impurities into GHD in general and found that they are described by boundary conditions. The boundary conditions correspond to non-equilibrium stationary states of the microscopic impurity model. A big complication in interacting models compared to non-interacting models is that reflected particles affect the incoming particles (in particular their effective velocities) and therefore one cannot specify the incoming particles without knowing the outgoing particles. This means the solution to the scattering problem is only given by a collection of possible solutions, but it is not directly given as a map from the incoming data to the outgoing data. Since it is not known how to find non-equilibrium stationary states in general, we decided to study a specific class of impurities: Impurities on a mesoscopic scale (larger than the diffusive scale), which are linear combinations of charge densities. These can be described by the (Euler-scale) stationary GHD equation in an external potential. This provides a broad class of impurities, which are present in all integrable models and can be studied analytically also for strong impurities. Additional simplification occurs when we restrict to models where the scattering phase shift is only a function of the difference of the momenta. Here the scattering problem can be interpreted as particles moving in a one-particle effective Hamiltonian H(x,p), given by the bare energy E(x,p) plus an interaction dependent correction. Given the incoming data we described how to solve the scattering problem at such a Hamiltonian H(x,p) in general. This allows us to write down a functional fixed point equation for the correction term in H(x,p). If the incoming data is known then a solution to the fixed point equation corresponds to a valid solution to the stationary GHD equation. However, as the outgoing data might affect the incoming data, in general a solution to the fixed point equation will not reduce to the assumed incoming state for x →±∞. Instead, one would have to adapt the asymptotic data until it is matched by the incoming data of the solution to the fixed point problem. Still, the fixed point equation provides a starting point to study mesoscopic impurities in general and even to find analytic solutions to the scattering problem. In general, mesoscopic impurities show the following features: They are invariand under local spatial rescaling in x, the scattering at them is deterministic and scattering vanishes for sufficiently weak impurities. Furthermore, it is possible to solve the scattering problem for mesoscopic impurities via an efficient algorithm. We provided explicit solutions to an impurity in the hard rods model, given by a potential, which is independent of p. Here the difference between a positive (corresponding to a time delay) and a negative (corresponding to an instant jump) scattering phase shift φ(p) becomes apparent: Negative φ(p) tends to increase the transmission, while positive φ(p) decreases it. For the hard rods we also compared molecular simulations and GHD simulations with impurity and found agreement. This shows that the scattering can be indeed captured by a boundary condition. Mesoscopic impurities are of course only an approximation for large impurities, in particular since we only work on the Euler scale. It would be interesting to go beyond that. In regard of the gradient expansion we can view this as a first order approximation in the impurity size 1/Limp. 
By adding more terms of the gradient expansion one can compute higher order corrections to our results which would allow the treatment of smaller mesoscopic impurities (for instance on the diffusive scale). However, it is not clear whether this expansion extends all the way to microscopic impurities, i.e. whether this expansion converges or is only an asymptotic expansion. To study this we expect that it will be interesting to look at the transition between deterministic and non-deterministic scattering: We have established in this paper that scattering at (Euler scale) mesoscopic impurities is always deterministic: particles are always transmitted or reflected with probability one. This is of course not generally true for microscopic impurities. It would be interesting to see whether higher order gradient expansion introduces non-deterministic scattering, otherwise microscopic impurities will be fundamentally different from mesoscopic ones. But also on the level of mesoscopic impurities there are open questions left. We discussed that solutions to the scattering problem are not unique in general, but it is not clear whether this appears in all models and under which conditions. In particular, it would be interesting to have a more detailed look at the scattering problem in quantum mechanical models, for instance the Lieb-Liniger model. There the interaction between particles is momentum dependent and also φ(p) is integrable which is quite different from the hard rods model. It would also be interesting to go beyond these simple integrable models and also consider models where φ(p,q) ≠φ(p-q) is not a function of the difference of momenta only (for instance models with multiple particle species). In this case, the normal modes of the GHD equation with external potential are not known, so most of the analysis in this paper cannot be applied. It is not even clear whether normal modes will always exist in any model. If there are models where they do not exist, scattering could be quite different. For instance, since one cannot think about the system as particles moving along GHD characteristics anymore, scattering might be non-deterministic. Also the numerical algorithms outlined in <ref> do not apply in that case and it would be interesting to extend them. A completely different direction of research would be to implement mesoscopic impurities in actual experiments. We believe that this should be possible among others in cold atom experiments, which are well described by the Lieb-Liniger model. For instance, a potential barrier could be implemented similar to an external potential, which is already used in GHD experiments <cit.>. A precise setup we have in mind would be to prepare the atoms in a 1D trap (similar to <cit.>). Then at time t=0 the trap is removed and at the same time a sharply peaked potential is turned on which serves as the impurity. After releasing the atoms, the distribution of particles can be observed as function of time and then compared to theoretical computations. A feature that would be particularly interesting for experimental realization is the local rescaling property of mesoscopic impurities. This implies some kind of stability against perturbations: Only few specifications of the impurity determine the scattering behavior, the precise shape of the impurity is not important. I would like to thank Benjamin Doyon for reading the manuscript, his helpful comments and discussing the topic with me. 
Funding from the faculty of Natural, Mathematical & Engineering Sciences at King's College London is acknowledged. Numerical simulations were done using the CREATE cluster <cit.>. § NUMERICAL SIMULATION §.§ Scattering problem of GHD The aim of this appendix is to establish an efficient algorithm which allows to solve the scattering problem at a potential in GHD at a given potential. By that we mean solving (∂_p E)dr∂_x n = (∂_x E)dr∂_p n given some incoming state on the left and on the right. Again as we discussed in Section <ref> one cannot specify which particles are incoming, as it depends on the reflected/transmitted particles. However, for numerical purposes this is not too relevant as particles with the wrong velocity will simply move away from the potential and thus will not affect the scattering at the potential. §.§.§ A naive algorithm Standard algorithms to solve equations of motion are based on finite time step schemes: One takes the initial state and evolves it by a small time Δ t. This gives another state and by iterating this procedure T/Δ t times one finally reaches time T. These algorithms usually are exact as Δ t → 0 and thus this procedure allows to find a good numerical approximation to the actual solution if Δ t is small enough. For the scattering problem this is not possible simply because time does not appear in (<ref>). In case there is no reflection and only incoming particles from the left, one could use x as `time'-variable. By that we mean start at xinit≪ 0 and then propagate n(x,p) from x to x+Δ x until one reaches a large xfinal≫ 0. Unfortunately, in case there is reflection we do not know how much is coming back, thus an algorithm of that kind is not possible. What is possible, however, is to simulate the time-dependent GHD equation: Recall the derivation of (<ref>), see Section <ref>, and reinstate time as in equation (<ref>). One interpretation of equation (<ref>) is that the limit ℓ→∞ corresponds to a long time limit. Thus, if we simulate the ordinary GHD equation for a long time, the state will approach a stationary state, which is a solution to the stationary GHD equation. Therefore, a naive algorithm to numerically obtain the stationary state is to use a finite time step scheme to compute the solution at a long time T and then send T →∞ until convergence is reached. An algorithm of this kind of course theoretically works, but in practice it is very inefficient. The problem is that due to the long time T, errors from each step will add up. This means in order to keep precision one has to choose Δ t smaller every time one increases T, which leads to an even larger number of steps. Furthermore let us note that simulating a single step of the GHD equation is relatively time consuming since one has to compute the dressing of ∂_p E and ∂_x E, which involves solving an infinite-dimensional (or very high dimensional after discretization) linear equation at each point in space. §.§.§ Direct simulation of the stationary GHD equation The above algorithm is inefficient because it does not make any use of the special properties of the stationary state. Now we will describe a more efficient algorithm which is based on the idea that the stationary state can be described by particles evolving according to some Hamiltonian, defined via (<ref>). Given (∂_p E)dr and (∂_x E)dr the solution n(x,p) can easily be found by computing the characteristics (<ref>). Given the characteristics we can compute (∂_p E)dr and (∂_x E)dr. 
Therefore, an iterative algorithm to solve the stationary GHD equation given the distribution of incoming particles is as follows:
1. Guess an initial n(x,p), for instance n(x,p) = 0.
2. Use n(x,p) to compute (∂_p E)dr and (∂_x E)dr.
3. Choose an x_0 outside the impurity. For a set of initial momenta p_0 compute the characteristics starting at x(0)=± x_0 and p(0)=p_0.
4. From the characteristics compute a new n(x,p) by transporting the incoming data along the characteristics, n(x(t),p(t)) = n(x(0),p(0)). Here n(x(0),p(0)) is the incoming data.
5. Repeat from step 2 using the new n(x,p).
Iterate this procedure until convergence. The above algorithm is quite efficient: Each iteration still consists of two time-consuming steps, computing the characteristics and computing (∂_p E)dr and (∂_x E)dr. However, in practice we find that the algorithm usually converges relatively fast (within ∼ 10 iterations), and thus this is a huge speed-up compared to simulating the time-dependent GHD equation. Another advantage of this algorithm is that it gives access to the characteristics and in particular directly shows which quasi-particles are reflected and which are transmitted. Also, from (∂_p E)dr and (∂_x E)dr one can reconstruct the effective Hamiltonian (<ref>). Remark: The precision of the algorithm is of course greatly influenced by the choice of the set of initial momenta p_0 for the characteristics. Note that not only the total number of characteristics is important, but also their distribution. In fact, it is important to make sure that there are sufficient characteristics in regions where the incoming densities n(± x_0,p_0) are high. High densities have the biggest impact on the precision of the dressing, which in turn is important for computing accurate characteristics. On the other hand, it is also important to have at least some characteristics in low-density regions. We found that the following way of distributing them produces good results: First, we sample the majority of the p_0 from probability measures proportional to the incoming densities. This naturally places many characteristics in high-density regions, but only few in the low-density regions. In order to ensure sufficient coverage of those regions as well, we place the remaining characteristics with a uniform spacing between them, i.e. p_0 = k Δ p, where k ∈ℕ. In our simulations we choose a small time-step Δ t = 0.001 to compute the characteristics. We compute the dressed (∂_p E)dr and (∂_x E)dr by partitioning space into small boxes of size Δ x = 10 Δ t. Therefore, each characteristic will provide ∼ 10 points (x(t),p(t)) in each box. We collect all those points from all characteristics and sort them according to their momenta. At these points (x(t),p(t)) we now know the value of n(x(t),p(t)). In order to find an approximation for the full n(x,p) we (linearly) interpolate between those base points. §.§ Simulating the GHD equation with mesoscopic impurity So far we have studied how to solve the scattering problem in GHD. As we discussed, this produces the boundary conditions for the GHD equation. In this section we now want to include these boundary conditions in a simulation of the GHD equation. That is, given some initial macroscopic distribution n_0(x,p), how does n(t,x,p) evolve in time in the presence of the impurity? The equation we want to solve is given by: ∂_t n = -veff ∂_x n for x ≠ 0, together with the corresponding boundary conditions at x=0. 
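Before turning to the details, a minimal sketch of one explicit time step for this equation is given below (illustrative only, not the implementation used here). The update of the impurity site is delegated to a user-supplied callable, which in practice performs one iteration of the characteristics scheme described above, and the hard-rods effective velocity is written assuming the convention 1/1dr = 1 + (d/(2π))∫ dp n used in the fixed-point equations:

import numpy as np

def v_eff_hard_rods(n, p, d):
    # effective velocity for hard rods on a grid n[x_index, p_index]:
    # v_eff(x,p) = p*(1 + (d/2pi) int n dq) - (d/2pi) int q n dq
    dp = p[1] - p[0]
    dens = n.sum(axis=1) * dp * d / (2.0 * np.pi)
    curr = (n * p).sum(axis=1) * dp * d / (2.0 * np.pi)
    return p[None, :] * (1.0 + dens[:, None]) - curr[:, None]

def ghd_step(n, x, p, dt, d, impurity_update):
    # one explicit step of  d_t n = -v_eff d_x n  away from the impurity,
    # using first-order upwind differences (domain edges are handled crudely
    # here by the wrap-around of np.roll)
    dx = x[1] - x[0]
    v = v_eff_hard_rods(n, p, d)
    dndx = np.where(v > 0.0,
                    (n - np.roll(n, 1, axis=0)) / dx,
                    (np.roll(n, -1, axis=0) - n) / dx)
    n_new = n - dt * v * dndx
    # impurity boundary condition at the grid point closest to x = 0:
    # the incoming data next to the impurity is mapped to outgoing data by
    # the hypothetical callable impurity_update (one iteration of the
    # characteristics scheme, warm-started from the previous time step)
    i0 = int(np.argmin(np.abs(x)))
    out_left, out_right = impurity_update(n[i0 - 1], n[i0 + 1], p)
    left_mask = v[i0 - 1] < 0.0     # quasi-particles leaving to the left
    right_mask = v[i0 + 1] > 0.0    # quasi-particles leaving to the right
    n_new[i0 - 1, left_mask] = out_left[left_mask]
    n_new[i0 + 1, right_mask] = out_right[right_mask]
    return n_new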
Outside the impurity one can simulate the GHD equation using one's preferred algorithm (we simply discretize space, momenta and time and evolve using the right-hand side of the GHD equation (<ref>)). Between time steps one has to solve the impurity problem using the current incoming data on the left and on the right side of the impurity. There are two options: One way is to make use of the fixed point problem we introduced in this paper and solve it analytically. If possible, this method can be very fast. For instance, for the hard rods with d>0 we already showed that there always exists a unique solution. But in a general model the analytical solution of the scattering problem becomes much more complicated. Also, it is not clear whether the solution will be unique and thus, in case of multiple solutions, it is not clear which solution to choose. Instead, one can solve the impurity problem using the iterative scheme described in the last section. This algorithm has the advantage that it can be implemented for any model. In addition, using the following simple trick the algorithm will automatically deal with the possibility of multiple solutions: Instead of solving the impurity problem from scratch during each time step, one can start the iterative procedure from the solution obtained in the last time step. This way the state at the impurity will adiabatically follow the correct solution to the impurity problem over time. Furthermore, we find that it is sufficient to perform one iteration of the iterative scheme in each time step. This gives an efficient hybrid algorithm which solves the GHD equation with impurity. §.§ Comparison of the hard rods scattering for different d We now use the hybrid algorithm described in the last section to simulate an explicit example: hard rods scattering at a mesoscopic potential V(x) = λ e^(-x^2/2) for different rod lengths d = -0.3, 0, 0.3. For d=0.3 this is the same simulation as in Section <ref>. For the part outside the impurity we use a basic algorithm: We discretize space and momentum on a 1000 × 1000 grid with spacings Δ xmacro = 0.006, Δ p = 0.004 and store n_ij = n(x_i,p_j). We also discretize time with Δ tmacro = 0.001. In each step we numerically compute the effective velocity veff(x,p) using (<ref>) and the derivative ∂_x n and update n(t+Δ t,x,p) = n(t,x,p) - Δ t veff(t,x,p)∂_x n(t,x,p), except for the impurity site. At the impurity we simulate the trajectories of N = 5000 hypothetical particles with initial momenta equally spaced between p = 0.4 and p=2 by doing one iteration of the algorithm described in <ref>. Initially the impurity is empty. After each step we store the computed pdr and 1dr and use them for the simulation of particle trajectories in the next step. The endpoints of the particle trajectories give a distribution of outgoing particles, which we feed back into the simulation outside of the impurity. Figure: Simulation of hard rods scattering at an impurity V(x) = λ e^(-x^2/2) with λ = 0.4, for d=0 (non-interacting), d=0.3 and d=-0.3. The density plots show the distribution of n(t,xmacro,p) for t=0, t=1.5 and t=3. For times t=1.5 and t=3 we also plot some trajectories of particles at the impurity, together with the current effective potential W(x) (inset). The red lines at p_0 = ±√(2λ) (solid) and p=0 (dashed) are given for comparison with the non-interacting case. In Figure <ref> we give the result of the simulation for d=0 (non-interacting particles), d=0.3 and d=-0.3. The height of the potential is λ = 0.4. 
The initial state at t=0 is given by (<ref>). We show the density plots at 3 different times, t = 0, t=1.5 and t=3. In addition for t=1.5, and t=3 we also depict the trajectories at the impurity and the effective potential W(x) (inset). The red solid line indicate the momenta p_0 = √(2λ) whose kinetic energy equal the height of the potential in the non-interacting case. For non-interacting particles, particles with a higher momenta are transmitted and particles with lower momenta are reflected. The scattering of interacting particles is different: For positive d=0.3, since 1dr(x) < 1, the effective potential W(x) is smaller than the bare potential V(x) (see equation <ref>). Thus, more particles can penetrate the impurity. Also since W(∞)<0 the transmitted particles have higher momenta than the corresponding incoming ones. For negative d<0 we have 1dr(x) > 1 and thus the effective potential is increased. In the beginning of the simulation when the density of particles is very low, particles are transmitted. At time t=1.5 the density at the impurity is so high that most particles are reflected. Only towards the end of the simulation, at t=3, the density decreases again, which decreases the effective potential and allows more particles to be transmitted. § EXPLICIT CONSTRUCTION OF A SOLUTION FOR WEAK IMPURITIES In this appendix we give the explicit construction of the solution for a sufficiently weak impurity discussed in Section <ref>, i.e. we want to show the following observation: The computations below are a bit technical, the idea, however, is very simple. Since the impurity is weak the level sets of the effective Hamiltonian (i.e. the trajectories of particles) will be almost straight. Therefore no reflection occurs. This allows for the following construction: Let us call pL(-∞) the smallest incoming momentum from the left and pR(∞) the smallest incoming momentum from the right. Then there will be two curves pL(x) and pR(x) which separate the space into three distinct parts: In the region p>pL(x) we have particles coming from the left to the right and in the region p<pR(x) we have particles coming from the right. In the region pR(x)<p<pL(x) the density vanishes. Note that this construction only works if there are no reflected particles. For convenience let us redefine the Hamiltonian by adding a constant, s.t. H(-∞, v_p=0) = 0. Also let us redefine: NL(h) = ∫_H(-∞,pL(-∞))^hh n(-∞, H^-1(-∞,h)) NR(h) = ∫_H(-∞,pR(-∞))^hh n(-∞, H^-1(-∞,h)) where in both definitions we use different branches of H^-1(-∞,·). Then we can write: N(x,p) = NL(H(x,p)) p>pL(x) NR(H(x,p)) p<pR(x) 0 pR(x)<p<pL(x). We now have to find a solution to the adapted fixed point equation: U(x,p) = [ ∫_q>pL(x)qT(p-q) NL(H(-∞,q) + λ V(x,q) + U(x,q)) +∫_q<pR(x)qT(p-q) NR(H(-∞,q) + V(x,q) + U(x,q)) - ∫_q>pL(-∞)q T(p-q) NL(H(-∞,q)) - ∫_q<pR(-∞)q T(p-q) NR(H(-∞,q))]. Note that pL/R implicitly depend on U(x,p). For λ = 0 a solution to this equation is simply given by U(x,p) = 0 and pL/R(x) = pL/R(-∞). We already established in Section <ref> that the Jacobian is given by the inverse dressing operation. Because the dressing has to be defined for λ = 0, we know that the Jacobian is invertible at λ = 0. Therefore, there will be a small region |λ| < λ_1 in which a solution to the fixed point equation will exist (mathematically this follows from the implicit function theorem), which will also be close to the unperturbed solution |U|≤ U_0 = λ. 
This establishes, for sufficiently small λ, the existence of the effective Hamiltonian H(x,p) describing the scattering situation (even though we do not know its explicit form). However, this Hamiltonian was found on the assumption that there is no reflection. For consistency we now need to check that this Hamiltonian H(x,p) = H(-∞,p) + λ V(x,p) + U(x,p) does not have any critical points in the regions p>pL(x) and p<pR(x) (the existence of such a critical point is necessary for reflection). Without loss of generality consider the region p>pL(x) (the other one can be treated equivalently). The boundary of the region is determined via: H(x,pL(∞))= E_0(pL(∞)) + λ V(x,pL(∞)) + U(x,pL(∞)) = E_0(pL(-∞)) since pL(x) follows a level set of H. By definition any critical point has to satisfy ∂_p H(x,p) = 0: ∂_p E_0 + λ∂_p V(x,p) + ∂_p U(x,p) = 0. In case λ = 0 then pL(x) = pL(-∞) and ∂_p E_0(pL(-∞)) ≥ C. By continuity, if λ is small (and therefore U is small) there will be a region |λ| < λ_2 in which |λ∂_p V(x,p) + ∂_p U(x,p)| < C and therefore H(x,p) does not have any critical points for |λ| < λ_2. The same is true for p<pR(x) and thus our ansatz of H(x,p) is consistent: For sufficiently small λ there cannot be any critical points and therefore no `islands' ΛL/R_i and also no reflection. This finishes the construction of the solution to the scattering problem for sufficiently small λ. We now go on to show that for this solution the state on the left and on the right side are actually coincide (meaning that all particles just flow `around' the impurity, without being affected by it). Note that this is not trivial: For instance, one could imagine that all particles are transmitted but their final momenta might be shifted compared to their initial momenta. In order to study this let us look again at equation (<ref>). This fixed point equation is in fact independent for different x, which – in this case – holds globally since there are no critical points of the Hamiltonian in the regions where particles are. Now imagine starting at x = -∞ and observing how the fixed point U(x,p) varies as x runs from x =-∞ to x =∞. U(x,p) change adiabatically as function of λ V(x,p), but will stay in some finite region |U| ≤ U_0, where U_0 depends on λ. Again, since the Jacobian is given by inverse dressing operation, and the dressing is invertible at V = 0, the Jacobian will be invertible for all impurity potentials V(x,p) in a neighbourhood around V = 0 as well (mathematically this follows from the inverse function theorem). In this neighbourhood the solution U(x,p) will be a unique function U(x,p) = U[λ V(x,p)](x,p) of λ V(x,p). If λ is small enough |λ| < λ_3 then λ V(x,p) will stay inside this neighbourhood. Since for x →∞ we have that V(x,p) → 0, far away from the impurity the solution is given by U(x →∞,p) = U[0](x,p). By construction however, we know that U[0](x,p) = 0. Therefore, the Hamiltonians on the left and the right side will be equal, which in turn implies the same for the states on both sides. This concludes the derivation. § REFERENCES
http://arxiv.org/abs/2307.00379v1
20230701162955
Residual-based attention and connection to information bottleneck theory in PINNs
[ "Sokratis J. Anagnostopoulos", "Juan Diego Toscano", "Nikolaos Stergiopulos", "George Em Karniadakis" ]
cs.LG
[ "cs.LG", "physics.comp-ph" ]
inst1,label1]Sokratis J. Anagnostopoulos [inst1]organization=Laboratory of Hemodynamics and Cardiovascular Technology, EPFL, city=Lausanne, postcode=1015, state=VD, country=Switzerland inst2, label1]Juan Diego Toscano inst1] Nikolaos Stergiopulos inst2,inst3]George Em Karniadakis [inst2]organization=School of Engineering, Brown University, city=Providence, postcode=02912, state=RI, country=USA [inst3]organization=Division of Applied Mathematics, Brown University, city=Providence, postcode=02912, state=RI, country=USA [label1]The first two authors contributed equally to this work Driven by the need for more efficient and seamless integration of physical models and data, physics-informed neural networks (PINNs) have seen a surge of interest in recent years. However, ensuring the reliability of their convergence and accuracy remains a challenge. In this work, we propose an efficient, gradient-less weighting scheme for PINNs, that accelerates the convergence of dynamic or static systems. This simple yet effective attention mechanism is a function of the evolving cumulative residuals and aims to make the optimizer aware of problematic regions at no extra computational cost or adversarial learning. We illustrate that this general method consistently achieves a relative L^2 error of the order of 10^-5 using standard optimizers on typical benchmark cases of the literature. Furthermore, by investigating the evolution of weights during training, we identify two distinct learning phases reminiscent of the fitting and diffusion phases proposed by the information bottleneck (IB) theory. Subsequent gradient analysis supports this hypothesis by aligning the transition from high to low signal-to-noise ratio (SNR) with the transition from fitting to diffusion regimes of the adopted weights. This novel correlation between PINNs and IB theory could open future possibilities for understanding the underlying mechanisms behind the training and stability of PINNs and, more broadly, of neural operators. Residual-based attention PINNs convergence accuracy information bottleneck theory, self-adaptive weights. § INTRODUCTION Physics-informed neural networks (PINNs) offer an alternative to traditional numerical methods for solving partial differential equations (PDEs) in forward and inverse problems<cit.>. In the PINN approach, a neural network is trained to approximate the solution of a PDE by minimizing a composite loss function that includes terms related to the physical laws and data from initial boundary conditions, simulations, or experiments <cit.>. Keeping a balance between the loss terms is crucial to avoid convergence and accuracy issues, especially when dealing with highly nonlinear, multi-scale, or chaotic behavior problems<cit.>. To address this imbalance, researchers have developed several approaches that can be roughly classified into three categories: neural network modifications, PDE transformations, and adaptive weighting strategies. A neural network modification addresses the multi-layer perceptron optimization capabilities via model reparametrizations <cit.>, input dimension expansions <cit.>, sequential learning <cit.>, or adaptive activation functions <cit.>. On the other hand, PDE transformations attempt to simplify the problem by enforcing the physical laws in their homologous forms. For instance, by implementing the PDE on its constrained expression, the number of losses can be reduced, and the optimizer can focus on a single condition <cit.>. 
Similarly, by using alternate or auxiliary physical laws, it is possible to reduce the number of equations or decrease the PDE order <cit.>, which eases the optimization process. Adaptive weighting strategies address the loss imbalances by iteratively altering the contribution of particular terms or regions of the domain during the training process. This modification can be done indirectly by adaptively resampling points in crucial areas <cit.> or directly by assigning multipliers that adjust the contribution of each error function. For instance, Wang et al.,<cit.> proposed a learning rate annealing algorithm that balances the losses' weights based on their back-propagated gradients. Wang et al. <cit.> also defined a causal parameter for time-dependent problems that force the model to learn sequentially based on the time steps. Global multipliers generally modify the average contribution of each loss term <cit.>. Similarly, these multipliers can be applied locally (i.e., per training point); McCLenny et al. <cit.> proposed a self-adaptive (SA) method where the loss weights are trained via adversarial training. Zhang et al. <cit.> extended this concept and proposed a differentiable adversarial self-adaptive (DASA) pointwise weighting scheme that uses a subnetwork to find the optimal weights. These weighting strategies enhance the capabilities of PINNs and enable them to address challenging problems in diverse fields. Nevertheless, their implementation can be expensive since they require coupling auxiliary networks <cit.>, training additional multipliers <cit.>, or computing and processing gradients <cit.>. Moreover, some of these approaches grow unboundedly <cit.> and do not relate to the neural network training capabilities, which may result in over-refining some regions while ignoring others, as pointed out by <cit.>. To address these problems, we propose a bounded residual-based attention (RBA) scheme that efficiently computes adaptive weights for the collocation points based on the residuals of the given PDEs. The suggested weighting approach considers the neural network training dynamics and adapts itself during the training process. Furthermore, as the optimizer significantly exceeds the accuracy threshold of the vanilla PINN, we observe two distinct learning phases, evident in the evolution of weight distributions. We perform subsequent gradient analysis, and we associate this behavior with the information bottleneck (IB) theory, which proposes an optimal method to devise a condensed representation of an input while preserving the majority of information related to an output <cit.>. This approach employs the concept of mutual information, which measures the knowledge gained about one random variable through the observation of another. According to the IB theory, a well-functioning model should retain essential output information while discarding insignificant input details, thereby creating an “information bottleneck” <cit.>. This paper is organized as follows. In Section <ref>, we provide an overview of PINNs, introduce the RBA weighting scheme, and review additional modifications that enhance the model performance. Then, in Section <ref>, we present the implementation of our method through two benchmark problems and conduct an ablation study to interpret the relative contribution of each modification. In Section <ref>, we analyze the evolution of our multipliers during training and link their behavior with the bottleneck theory. 
Finally, in Section <ref>, we summarize the findings and discuss our future work. In the appendix, we include details regarding the implementation. § METHODS §.§ Physics-Informed Neural networks In the context of Physics-Informed Neural Networks (PINNs), a general partial differential equation (PDE) can be expressed as: ℒ{u(x,t)} = f(x,t), where u(x,t) is the unknown function we wish to approximate, ℒ denotes the differential operator, and f(x,t) is a given source or forcing function that introduces external influences to the system. The differential operator ℒ depends on the specific PDE under consideration. For instance, in the case of the heat equation, ℒ = ∂/∂ t - κ∇^2, where κ is the thermal diffusivity and ∇^2 is the Laplace operator. Additionally, the problem may be constrained by several types of boundary conditions. These conditions are typically categorized into Dirichlet, Neumann, Robin, or Periodic, and are represented as follows: u(x_b,t) = g_d(x_b,t), (Dirichlet) ∂u/∂n(x_b,t) = g_n(x_b,t), (Neumann) αu(x_b,t) + β∂u/∂n(x_b,t) = g_r(x_b,t), (Robin) u(x+L,t) = u(x,t), (Periodic) where x_b denotes the boundary points. The Dirichlet condition defines the function value at the boundary, the Neumann condition specifies the normal derivative at the boundary, and the Robin condition is a weighted combination of the function and its normal derivative. For the Periodic condition, the function value repeats after a certain length L. The loss function in PINNs is designed to encompass the deviation of the neural network solution from the initial conditions, boundary conditions, and the PDE itself. The influence of these components in the loss function is balanced using Lagrange multipliers λ_r, λ_ic and λ_bc: ℒ = λ_ic·ℒ_ic + λ_bc·ℒ_bc + λ_r ·ℒ_r, where ℒ_ic, ℒ_bc, and ℒ_r represent the losses associated with the initial conditions, boundary conditions, and the residuals of the PDE, respectively. In general, each loss term ℒ_i takes the form: ℒ_i = ⟨ h_i(u_NN) ⟩, for i ∈{ic, bc, r} where ⟨·⟩ is the mean operator and h_i is a function of the neural network u_NN, associated with the error between the prediction and the ground truth. Each of these terms encourages the neural network to conform to the respective physical laws or initial/boundary conditions encoded in the problem. The Lagrange multipliers can be global (e.g., scalars balancing each individual loss term) or local (e.g., vectors balancing each collocation or boundary point). In the most basic PINN definition, λ=1 for all loss terms. The training of the neural network weights is performed using an optimization method like stochastic gradient descent (SGD). In each iteration k, an update is applied to a weight w as follows: w^k+1 = w^k - η·∇_w ℒ, where η is the learning rate, and ∇_w ℒ is the gradient of the loss function with respect to w. This process iteratively reduces the loss, pushing the solution towards satisfying the given PDE and boundary/initial conditions. §.§ Residual-based attention (RBA) scheme One of the inherent challenges in obtaining accurate results with PINNs is that the residuals of key collocation points can get overlooked by the mean calculation of the objective function (Eq. <ref>). Consequently, despite a decrease in total loss during training, certain spatial or temporal characteristics might not be fully captured. 
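To make this concrete, a minimal sketch of the composite loss above is given below (an illustration, not the authors' implementation); residual is a hypothetical user-supplied function that evaluates the PDE residual via automatic differentiation, and the residual term is the plain mean that can wash out individual collocation points:

import torch

def pinn_loss(model, residual, x_r, x_ic, u_ic, x_bc, u_bc,
              lam_r=1.0, lam_ic=1.0, lam_bc=1.0):
    # residual term: a plain mean over all collocation points, so every
    # point contributes equally regardless of how badly it is resolved
    r = residual(model, x_r)
    loss_r = (r ** 2).mean()
    # initial- and boundary-condition mismatches
    loss_ic = ((model(x_ic) - u_ic) ** 2).mean()
    loss_bc = ((model(x_bc) - u_bc) ** 2).mean()
    return lam_r * loss_r + lam_ic * loss_ic + lam_bc * loss_bc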
This issue becomes particularly pronounced in multiscale problems, where such troublesome areas could not only result in a lack of detail but could also impair the flow of important information from the initial and boundary conditions into the domain of interest, thereby hindering convergence. Although mini-batching techniques present a potential solution to this issue, the difficult task of a priori identifying the problematic regions remains. A workaround for this problem is selecting a set of Lagrange multipliers, either global or local, which can be adjustable during training. The aim of the global multipliers is to balance different terms of the loss function, while local multipliers aim at weighting the influence of specific collocation points of the domain. Establishing a systematic rule for updating these multipliers can significantly aid the optimizer, with several successful strategies documented in the literature, demonstrating major improvements to the vanilla PINNs performance <cit.>. Furthermore, as opposed to classical numerical methods, which can guarantee stability, the training convergence of PINNs is affected by the evolution of residuals which will have some degree of stochasticity, leading to a corresponding degree of instability. Driven by the aforementioned challenges, the initial objective of this work is the development of a simple, gradient-less weighting scheme based on the rolling history of the cumulative residuals to extend the attention span of the optimizer. The proposed scheme aims to increase attention to the challenging regions of information propagation in both space and time dimensions defining the problem. The update rule for the proposed residual-based Lagrange multipliers for any collocation point i on iteration k is given by: λ_i^k+1←γλ_i^k+η^*|r_i|/max_i(|r_i|), i ∈{0, 1, ..., N_r}, where N_r is the number of collocation points, r_i is the PDE residual for point i, γ is the decay parameter and η^* is the learning rate, respectively. Note that the learning rate of the optimizer η and the learning rate of the weighting scheme η^* are two different hyperparameters. This is a convergent linear homogeneous recurrence relation. Given that |r_i|/max_i(|r_i|)∈ [0, 1] and λ_i^0≠ 0, ∀ i ∈{0, 1, ..., N_r}, its bounds are given by: λ_i^k∈ (0, η^*/1-γ], as k →∞. These local multipliers scale with the normalized cumulative residuals, but they also behave dynamically during training due to the decay γ as follows: since γ lies between 0 and 1, λ^k will gradually decrease in its contribution to λ^k+1, as k increases. Simultaneously, η^*|r_i|/max(|r_j|) is being added in each iteration, leading to an eventual equilibrium. However, as different points i may be targeted depending on the stage of the training process, the distribution of λ_i will dynamically vary across the domain of interest. The main advantages of the proposed residual weighting scheme are summarized as follows: * Deterministic operation scheme, bounded by the γ and η^* parameters, which ensure the absence of exploding multipliers. * No training or gradient calculation is involved, leading to negligible additional computational cost. * Scaling with the cumulative residuals guarantees increased attention on the solution fronts where the PDEs are unsatisfied in both spatial and temporal dimensions. The modified training process for an indicative standard Gradient Descent optimization method is outlined in Algorithm 1. 
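A compact sketch of one such training step is shown below (illustrative; it follows the update rule above but is not meant to reproduce Algorithm 1 line by line). The multipliers are updated without tracking gradients, and each residual enters the loss as (λ_i r_i)^2, so that λ_i plays the role of √(λ_r):

import torch

def rba_training_step(model, optimizer, residual, x_r, lam,
                      eta_star=0.01, gamma=0.999, c=0.0):
    optimizer.zero_grad()
    r = residual(model, x_r)                      # pointwise PDE residuals r_i
    with torch.no_grad():                         # gradient-less multiplier update
        lam = gamma * lam + eta_star * r.abs() / r.abs().max() + c
    loss = ((lam * r) ** 2).mean()                # lambda_i acts as sqrt(lambda_r)
    loss.backward()                               # add IC/BC terms here as needed
    optimizer.step()
    return lam, loss.item()

# initialization, following the choices reported below:
# lam = torch.zeros(num_collocation_points)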
We define the RBA weights as λ_i and initialize them with 0 values for all collocation points, noting that λ_i = √(λ_r) (from Eq. <ref>). For all training examples, the RBA learning rate and decay hyperparameters are set as η^* = 0.01 and γ = 0.999, respectively. A constant c may be added to the updated weights to raise the lower bound and adjust the ratio max(λ_i)/min(λ_i) (for this study c=0). §.§ Additional PINN enhancements §.§.§ Modified multi-layer perceptrons For the design of the network architecture, we leverage the recently introduced modified multi-layer perceptron (mMLP) <cit.>, which draws its inspiration from the attention mechanisms prominent in transformers <cit.>. The mMLP aims at augmenting the efficacy of PINNs by embedding the input variables x into the hidden states of the network. Initially, these inputs are encoded in a feature space by employing two distinct encoders, U and V. They are then assimilated within each hidden layer of a conventional MLP by point-wise multiplication. The encoders are given by: U = σ(x^0w^U + b^U), V = σ(x^0w^V + b^V) We implement a variation of this concept by performing the encoding operation right before each activation function, as it performed better for our experiments. Thus, each forward pass is defined as: α^l(x) = α^l-1(x)w^l + b^l, for l ∈{1, 2, ..., layers} α^l(x) = (1-α^l(x)) ⊙ U + α^l(x) ⊙ V α^l(x) = σ(α^l(x)) where x is the input, α^l and w^l are the neurons and weights of layer l, σ is the activation function and ⊙ is the element-wise product. §.§.§ Exact imposition of boundary conditions The inexact imposition of boundary conditions can negatively affect the accuracy and training stability of neural networks <cit.>. Dirichlet Boundary Conditions A recent study by Sukumar et al. <cit.> showed how to impose Dirichlet, Neuman, and Robin boundary conditions in PINNs by using approximate distance functions (ADF). The constrained expression for Dirichlet boundary conditions is defined in equation <ref>. u(𝐱)=g(𝐱)+ϕ(𝐱)u_NN(𝐱), where g(𝐱) is a function that satisfies u along the boundaries, u_NN(𝐱) is the output of our neural network, and ϕ(𝐱) is a composite distance function that equals zero when evaluated on the boundaries. If the boundary is composed of M partitions [S_1,...S_M], the composite distance function for Dirichlet boundary conditions can be described as: ϕ(ϕ_1,ϕ_2,...,ϕ_M)=Π_i^M ϕ_i, where [ϕ_1,..,ϕ_M] are the individual distance functions. Notice that if 𝐱∈ S_i, then ϕ(𝐱)=0 and the neural network approximation exactly satisfies the boundary conditions, i.e., u(𝐱)=g(𝐱). Periodic Boundary Conditions Similarly, it is possible to enforce periodic boundary conditions as hard constraints strictly by constructing Fourier feature embeddings of the input data <cit.>. In particular, the periodic constraint of a smooth function u(x) can be encoded in a neural network using a one-dimensional Fourier feature embedding (See equation <ref>). v(x,y)=[1,cos(ω_x x),sin(ω_x x),...,cos(mω_x x),sin(mω_x x)] In the same way, a neural network can incorporate 2D periodic constraints by utilizing a two-dimensional Fourier feature embedding <cit.>. p(x,y)=[ cos(ω_x x)cos(ω_y y),...,cos(nω_x x)cos(mω_y y); cos(ω_x x)sin(ω_y y),...,cos(nω_x x)sin(mω_y y); sin(ω_x x)cos(ω_y y),...,sin(nω_x x)cos(mω_y y); sin(ω_x x)sin(ω_y y),...,sin(nω_x x)sin(mω_y y); ] where, w_x=2π/P_x, w_y=2π/P_y, m,n are a non-negative integers, and P_x and P_y are the periods in the x and y directions. Dong et al. 
<cit.> proved that any network representation u_NN(v(x)) and u_NN(p(x,y)) are periodic in the x and x,y directions, respectively. § RESULTS §.§ Dynamic case: 1D Allen-Cahn equation The Allen-Cahn equation is a compelling benchmark for PINNs due to its challenging characteristics. It is regarded as a “stiff" PDE, a term that describes equations that require careful numerical handling to avoid instability in their solutions. This complexity arises from its ability to produce solutions with pronounced, sharp transitions in both spatial and temporal dimensions. The 1D Allen-Cahn PDE is given by: ∂ u/∂ t = 10^-4∂^2 u/∂ x^2 - 5u^3 + 5u, with initial condition and periodic boundary conditions: u(0, x) = x^2 cos(π x), ∀ x ∈ [-1, 1], u(t, x + 1) = u(t, x - 1), ∀ t ≥ 0 and x ∈ [-1, 1]. To obtain the solution of Allen-Cahn PDE in Figure <ref>, we train a PINN for 3·10^5 iterations with the standard Adam optimizer <cit.> and an exponential learning rate scheduler on 25600 collocation points. For the periodic boundary conditions, we use the formulation of section <ref> and set λ_ic = 100, following the benchmark setup of the recent literature <cit.>. The rest of the parameters are mentioned in the appendix. The benchmark comparison of the final relative L^2 error obtained by different methods is presented in Table <ref>, where the error for the RBA weights represents the average of five runs with random seeds. It should be noted that only the best performing methods of Table <ref> (<cit.>, <cit.>) utilize the PINN enhancements discussed in section <ref>. The relative L^2 is defined as: relative L^2 = ‖ Exact - Predicted ‖_2/‖ Exact ‖_2 §.§.§ Ablation Study for Allen-Cahn An ablation study involves measuring the performance of a system after removing one or more of its components to measure the relative contribution of the ablated components <cit.>. To investigate the effect of the adopted methods on the relative L^2 convergence, we perform a single-component ablation study and report the results in Table <ref>. The best-performing model of Table <ref> is selected to represent the full model. For each run, a component is removed to identify the effect of the omitted method on the performance. Figure <ref> shows the convergence histories of each run. As evident from these results, the coupling of the RBA scheme along with the Fourier feature embedding is the most important component to achieving low relative L^2. Moreover, the RBA weights initiate a steep convergence trajectory at 12000 iterations, achieving a final L^2 of 3.16· 10^-3 without any additional component. Combined with the modified MLP and the Fourier features, the full model leads to the best L^2 at 4.5· 10^-5. The adopted mMLP accelerates the convergence in the first 60000 iterations, while it does not provide significant accuracy gains in the long run. The noise associated with the best-performing methods indicates that the optimizer is “jumping" over local minima of the loss landscape and effectively not getting stuck in a bad solution. §.§ Static case: 2D Helmholtz equation The Helmholtz equation is commonly employed to model wave and diffusion phenomena, capturing changes over time in either a spatial or combined spatial-temporal domain. The 2D Helmholtz PDE is expressed as follows: ∂^2 u/∂ x^2+∂^2 u/∂ y^2+k^2u-q(x,y)=0 q(x,y)= -(a_1π)^2sin(a_1π x)sin(a_2π y) -(a_2π)^2sin(a_1π x)sin(a_2π y) +ksin(a_1π x)sin(a_2π y), where q(x,y) is the forcing term which leads to the analytical solution u(x,y)=sin(a_1π x)sin(a_2π y) <cit.>. 
The boundary conditions are defined as follows: u(-1,y)=u(1,y)=u(x,-1)=u(x,1)=0 To allow a direct comparison with previous studies, we set a_1=1, a_2=4, k=1 and follow the experimental setup described in <cit.>, setting λ_bc = 100. We initially apply a combination of Dirichlet and Periodic boundary conditions, with the latter enforced using the Fourier feature embedding. However, to avoid encoding the PDE analytical solution (i.e., sin(π x)sin(4π y)), we expand our inputs (i.e., x,y) using a truncated embedding with m=n=5 (Eq. <ref>): p(x,y)=[ cos(ω_x x)cos(ω_y y),...,cos(nω_x x)cos(nω_y y); cos(ω_x x)sin(ω_y y),...,cos(nω_x x)sin(nω_y y); sin(ω_x x)cos(ω_y y),...,sin(nω_x x)cos(nω_y y); ] Then, we approximate the solution of the PDE as u_F=u_NN(p(x,y)), where u_NN is the modified MLP. Additionally, to demonstrate the flexibility of RBA weights, we examine an additional case applying only Dirichlet boundaries with approximate distance functions (ADF). Towards this end, we split the BCs into four partitions, i.e., y=1, y=-1, x=1, and x=-1, and use ϕ=(x^2-1)(y^2-1) as the composite distance function. Based on equation <ref> we define g(𝐱)=0 and approximate the solution of the Helmholtz equation as u_ADF=(x^2-1)(y^2-1)u_NN(x,y), which exactly satisfies the BCs. We train our models (i.e., u_F and u_ADF) using the Adam optimizer for 2· 10^4 iterations, followed by L-BFGS for 3· 10^3 iterations, noting that the RBA weights can be deployed for both optimizers. Figure <ref> illustrates the exact solution of the 1x4 Helmholtz PDE alongside the corresponding prediction obtained from the PINN. Furthermore, in Table <ref>, we compare the average L^2 errors for the Fourier and ADF exact boundary imposition methods with the current state-of-the-art methods. §.§.§ Ablation Study for Helmholtz In this ablation study for the 2D Helmholtz, we present the convergence histories for each ablated system (Fig. <ref>) and summarize the final L^2 errors in Tables <ref> and <ref>. For the first case (i.e., u_F), the Fourier feature embedding has the most significant influence in reaching a minimized relative L^2. The accuracy further improves when paired with either the RBA scheme or the mMLP. The best-performing seed leads to an L^2 of 8.91· 10^-6; however, we note that this method is employed for comparison purposes since the Fourier features incorporate the trigonometric traits of the analytical solution. Therefore, we also perform the same study using distance functions (u_ADF), which indicates that the essential component is the ADF (Table <ref>). By exactly imposing boundary conditions, the number of loss terms is reduced, allowing the optimizer to focus on minimizing a single error function. §.§ RBA weight evolution Figure <ref> shows the evolution of RBA weights for the previously analyzed cases. For each run, the maximum value is bounded by Eq. <ref>; the upper bound for this study equals 10. The distribution of weights varies significantly depending on the stage of training; hence the maximum values fluctuate while approaching the upper bound. The fluctuation of the mean values is less pronounced until they finally converge to ≈ 20% of the upper bound. This behavior indicates that, on average, the total magnitude of weights remains constant while their distribution varies as the optimizer focuses on different parts of the domain. Thus, by introducing the decay factor γ, we can effectively prevent exploding weight values. 
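For reference, the two hard-constraint formulations compared in this section can be sketched as follows (an illustration only; the periods P_x = P_y = 2 for the [-1,1]^2 domain are an assumption, and net stands for the underlying modified MLP):

import math
import torch

def u_adf(net, xy):
    # ADF ansatz u_ADF = (x^2 - 1)(y^2 - 1) u_NN(x, y): vanishes on all four
    # Dirichlet boundaries x = +/-1 and y = +/-1 by construction
    x, y = xy[:, :1], xy[:, 1:2]
    return (x ** 2 - 1.0) * (y ** 2 - 1.0) * net(xy)

def fourier_embedding(xy, n=5, Px=2.0, Py=2.0):
    # truncated 2D Fourier feature embedding with m = n = 5; the sin*sin
    # products are omitted so that sin(pi x) sin(4 pi y) is not encoded
    x, y = xy[:, :1], xy[:, 1:2]
    wx, wy = 2.0 * math.pi / Px, 2.0 * math.pi / Py
    k = torch.arange(1, n + 1, dtype=xy.dtype, device=xy.device)
    cx, sx = torch.cos(wx * k * x), torch.sin(wx * k * x)   # shapes (N, n)
    cy, sy = torch.cos(wy * k * y), torch.sin(wy * k * y)
    blocks = [(cx, cy), (cx, sy), (sx, cy)]                  # rows of the matrix above
    feats = [(a.unsqueeze(2) * b.unsqueeze(1)).reshape(xy.shape[0], -1)
             for a, b in blocks]
    return torch.cat(feats, dim=1)       # fed into the network: u_F = u_NN(p(x, y))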
At the same time, this allows the weights to be flexibly distributed, adapting as needed throughout different stages of the solution process. § INFORMATION BOTTLENECK (IB) THEORY The information bottleneck theory was developed as a framework for understanding the trade-off between compression and prediction in supervised learning <cit.>. It formulates an idealized principle for creating a compressed, or “bottlenecked," representation of an input variable that retains as much information as possible about an output variable. The theory utilizes the concept of mutual information, which measures the amount of information obtained about one random variable by observing another. It proposes that a good model will retain all the relevant information about the output while discarding irrelevant details about the input, creating an “information bottleneck." §.§ Fitting and Diffusion Phases of Learning Recently, it has been proposed that the process of deep learning can be divided into two primary phases: the fitting phase and the diffusion phase <cit.>. * Fitting Phase: In this phase, the model learns to correctly classify the training data, gradually reducing the empirical error. The model can memorize the data, building complex representations that fit the prediction set. This phase tends to decrease both the training and test errors and is characterized by a high Signal-to-Noise Ratio (SNR), indicating that the model is capturing more of the useful signal (important features of the data) compared to the noise (irrelevant or less meaningful details). In this context, “noise" is the random or unwanted information that does not contribute to the learning task. * Diffusion Phase: After the model has learned to fit the data, it continues to learn, but in a subtly different way. The network weights start to diffuse to improve the model's generalization capabilities. The model simplifies the internal representations of the learned patterns by keeping the important features and discarding the irrelevant ones, thus effectively reducing the complexity of the representations. The training error remains the same in this phase, but the test error can decrease. The SNR also decreases as the model becomes more focused on preserving the meaningful signals and is less influenced by the noise. These two distinct phases can be seen as a process where the model first fits the data (captures relevant information) and then compresses it (discards irrelevant information), further enhancing its generalization ability. §.§ Connection of IB theory to PINNs The benchmark examples demonstrate that the proposed residual-based method can extend the training accuracy beyond the original PINN formulation. Upon inspection of the weight evolution during the training process, we observed a behavior that largely met our expectations; the weights indeed form an ordered pattern consistent with the flow of information, primarily dictated by the initial conditions (Figs. <ref>,<ref>). Yet, a surprising element emerged as we noted an abrupt shift to chaotic patterns, occurring once the model descended below a specific test loss threshold (defined by the relative L^2 error). Hence, two definite phases, akin to the “fitting" and “diffusion" from the IB theory, became apparent. Following the formulation proposed by the IB theory, we performed a gradient analysis to quantify the transitions from high to low SNR. 
The normalized signal ‖μ_l‖ and noise ‖σ_l‖ of the stochastic gradient distributions for each hidden layer l are given by: ‖μ_l‖ = ‖⟨∇_w ℒ_i ⟩‖_F/‖ w_i ‖_F, ‖σ_l‖ = ‖ std( ∇_w ℒ_i ) ‖_F/‖ w_i ‖_F, SNR = ‖μ_l‖/‖σ_l‖, where i indexes the mini-batch and ‖·‖_F is the Frobenius norm. To apply this formulation, we divide our collocation points into 100 mini-batches and perform gradient calculations per batch every 50 iterations of the optimizer without performing a backward pass. Hence, we obtain the batch-wise gradient information while the models are trained with the full batch. We present the findings for the indicative 1D Allen-Cahn (Fig. <ref>) and 2D Helmholtz with a_1=a_2=6 (Fig. <ref>) runs. We define the three regimes of the RBA weight evolution as Phase I (fitting), Phase II (transitional), and Phase III (diffusion). During these phases, the RBA weights undergo a swift change from order to disorder, which is accompanied by a high-to-low SNR transition of the layer gradients. We also plot the normalized Loss and normalized relative L^2 history to compare their convergence rates. In both cases, the convergence history of the normalized Loss and L^2 indicates that the model continues to learn after the Loss has reached a plateau by further decreasing the L^2 error. In addition, the magnitudes of the network's weights ‖ w ‖ begin their final convergence trajectory at the beginning of the diffusive phase. For both examples, a fully connected MLP is used as it produces more pronounced transitions. It is important to mention that the convergence histories show the monotonic decrease of the Loss and L^2 errors. The animations of the presented cases, along with the relevant code, will be available at https://github.com/soanagno/rba-pinns. § SUMMARY In this work, we propose a fast, residual-based attention scheme that enhances the accuracy of PINNs in both static and dynamic cases, competing with the best-performing models in the literature. Moreover, as the ablation study points out, a critical aspect of the solution accuracy is the formulation of exact boundary conditions. Additionally, even though the RBA multipliers provide considerable accuracy improvements, we believe that an equally important benefit of this type of weighting strategy lies in convergence. For example, the 1D Allen-Cahn equation cannot be solved by the vanilla PINN due to its stiff nature, but RBA can initiate an early convergence trajectory. Furthermore, the versatility of the proposed method allows for its application with a few additional lines of code. In addition, given that the adopted methodology can overcome a certain accuracy threshold, we provide evidence that the learning process is dominated by two distinct learning regimes, as suggested by the IB theory. In the first, fitting phase, the model learns the most important parts of the solution. The distribution of the RBA weight values follows the main shape of the solution while it sets the stage for the next learning phase. In the diffusion phase, the weights stop following the flow of information from the initial conditions and start focusing on unregulated parts of the domain, resembling a diffusion process. Based on the IB theory, it is during this final phase that the model finds the best balance between detail and simplicity by tweaking what was learned in the fitting phase, hence refining its understanding of the problem.
This novel connection between IB theory and PINNs could open new possibilities for quantifying the performance of the many versions of PINNs and even neural operators. Furthermore, in applications where accuracy is essential, utilizing methods that collectively aid the optimizer in overcoming the threshold of the fitting phase could be a critical convergence criterion. Planned future work is focused on expanding the application of RBA weights to a broader spectrum of problems and testing the sensitivity of the associated hyperparameters. Additionally, we aim to undertake comprehensive parametric analyses of the bottleneck phase transition to enhance our understanding of its occurrence timing, underlying causes, and possible strategies to instigate it. § ACKNOWLEDGEMENTS SJA and NS acknowledge the Swiss National Science Foundation grant "Hemodynamics of Physiological Aging" (Grant nr 205321_197234). JDT and GEK acknowledge support from the DOE SEA-CROGS project (DE-SC0023191), the MURI-AFOSR FA9550-20-1-0358 project, and the ONR Vannevar Bush Faculty Fellowship (N00014-22-1-2795). Finally, we extend our gratitude to Prof. N. Sukumar from the University of California, Davis, for engaging in insightful discussions on the approximate distance functions. § IMPLEMENTATION DETAILS The chosen architecture for all benchmarks is an MLP with 6 hidden layers and 128 neurons per layer, using the hyperbolic tangent (tanh) activation function. The weights are initialized with the Xavier normal distribution <cit.>. The remaining case-specific information and the corresponding training times are tabulated in the following sections. However, the reported training times should be used only as an internal reference, since the codes were not optimized for speed. §.§ Allen-Cahn §.§ Helmholtz §.§ 2D Helmholtz-Bottleneck § SNAPSHOTS OF WEIGHTS AND RESIDUALS This section displays indicative snapshots of the RBA weights and the corresponding residual evolution for the Allen-Cahn and Helmholtz models. As stipulated by the information bottleneck theory, during the training process we can observe the fitting, transition, and diffusion learning phases. The interplay between weights and residuals is evident as the former keep track of the short-term residual history. Therefore, the RBA weights may differ from the network's residuals during the early iterations, but as the latter stabilize in the final stages of convergence, the weights tend to follow suit, reflecting the trends observed in the PDE residuals.
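To make the weight-residual interplay described in this appendix concrete, the following is a minimal sketch of a bounded, exponentially decayed residual-based multiplier update of the kind used for the RBA weights. The exact update rule is the one given by Eq. <ref> of the main text; the learning rate eta and decay factor gamma below are illustrative values chosen only so that the resulting bound eta/(1-gamma) matches the upper bound of 10 reported for this study.

import numpy as np

def rba_update(lam, residual, eta=0.01, gamma=0.999):
    # Exponentially decay the previous multipliers and add a term driven by the
    # normalized absolute residuals; the recursion is bounded above by
    # eta / (1 - gamma), which equals 10 for the values assumed here.
    r = np.abs(residual)
    return gamma * lam + eta * r / (r.max() + 1e-12)

rng = np.random.default_rng(0)
lam = np.zeros(1000)                      # one multiplier per collocation point
for it in range(20000):
    residual = rng.standard_normal(1000)  # stand-in for the PDE residuals
    lam = rba_update(lam, residual)
print(lam.max(), lam.max() <= 0.01 / (1 - 0.999))  # the weights never exceed the bound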
http://arxiv.org/abs/2307.03211v1
20230706144727
PseudoCell: Hard Negative Mining as Pseudo Labeling for Deep Learning-Based Centroblast Cell Detection
[ "Narongrid Seesawad", "Piyalitt Ittichaiwong", "Thapanun Sudhawiyangkul", "Phattarapong Sawangjai", "Peti Thuwajit", "Paisarn Boonsakan", "Supasan Sripodok", "Kanyakorn Veerakanjana", "Phoomraphee Luenam", "Komgrid Charngkaew", "Ananya Pongpaibul", "Napat Angkathunyakul", "Narit Hnoohom", "Sumeth Yuenyong", "Chanitra Thuwajit", "Theerawit Wilaiprasitporn" ]
q-bio.QM
[ "q-bio.QM", "cs.CV", "cs.LG", "eess.IV" ]
PseudoCell: Hard Negative Mining as Pseudo Labeling for Deep Learning-Based Centroblast Cell Detection Narongrid Seesawad, Piyalitt Ittichaiwong, Thapanun Sudhawiyangkul, Phattarapong Sawangjai, Peti Thuwajit, Paisarn Boonsakan, Supasan Sripodok, Kanyakorn Veerakanjana, Phoomraphee Luenam, Komgrid Charngkaew, Ananya Pongpaibul, Napat Angkathunyakul, Narit Hnoohom, Sumeth Yuenyong, Chanitra Thuwajit, and Theerawit Wilaiprasitporn Senior Member, IEEE This work was supported by grants from New Discovery & Frontier Research Grant, Mahidol University. (Corresponding authors: Piyalitt Ittichaiwong, Thapanun Sudhawiyangkul, Chanitra Thuwajit, and Theerawit Wilaiprasitporn.) N. Seesawad, T. Sudhawiyangkul, P. Sawangjai, P. Luenam, and T. Wilaiprasitporn are with the Bio-inspired Robotics and Neural Engineering (BRAIN) Lab, School of Information Science and Technology (IST), Vidyasirimedhi Institute of Science & Technology (VISTEC), Rayong, Thailand (theerawit.w@vistec.ac.th) P. Ittichaiwong, and K. Veerakanjana are with the Siriraj Informatics and Data Innovation Center, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand P. Thuwajit, and C. Thuwajit are with the Department of Immunology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand P. Boonsakan is with the Department of Pathology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand S. Sripodok, K. Charngkaew, A. Pongpaibul, and N. Angkathunyakul are with the Department of Pathology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand N. Hnoohom, and S. Yuenyong are with the Department of Computer Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand August 1, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Patch classification models based on deep learning have been utilized in whole-slide images (WSI) of H&E-stained tissue 
samples to assist pathologists in grading follicular lymphoma patients. However, these approaches still require pathologists to manually identify centroblast cells and provide refined labels for optimal performance. To address this, we propose PseudoCell, an object detection framework to automate centroblast detection in WSI (source code is available at <https://github.com/IoBT-VISTEC/PseudoCell.git> [Note: The code will be public after the manuscript is accepted]). This framework incorporates centroblast labels from pathologists and combines them with pseudo-negative labels obtained from undersampled false-positive predictions using the cell's morphological features. By employing PseudoCell, pathologists' workload can be reduced as it accurately narrows down the areas requiring their attention during examining tissue. Depending on the confidence threshold, PseudoCell can eliminate 58.18–99.35% of non-centroblasts tissue areas on WSI. This study presents a practical centroblast prescreening method that does not require pathologists' refined labels for improvement. Detailed guidance on the practical implementation of PseudoCell is provided in the discussion section. Centroblast cell Identification, Stain Normalization, Hard Negative Mining, Undersampling, Convolutional Neural Network. § INTRODUCTION Follicular lymphoma (FL) is the second most prevalent lymphoid malignancy in Western and Asian countries. It is responsible for 5-35% of non-Hodgkin lymphoma (NHL) <cit.>. Most FL carries the translocation t(14;18), which causes the overexpression of the BCL-2 protein. FL patients usually present with lymphadenopathy, infrequent B-symptoms, systemic fever symptoms, night sweats, and weight loss. The progression of a disease can be predicted using a combination of clinical and laboratory findings, as well as the histopathological grade of the disease <cit.> Currently, the WHO classification system is the standard for FL grading. The grading system is identified by the number of large neoplastic cells in a tissue sample, known as centroblast cells (CB). In the conventional approach, pathologists rely on manual counting of these CB in tissue samples stained with hematoxylin and eosin (H&E) under a microscope. However, this procedure is time-consuming, laborious (<ref>), and subjective because of differences in the expert's experience. This results in high inter- and intra-observer variability, 61-73% <cit.>. The high variability is causing the data to be susceptible to sampling bias, hard to reproduce, and directly impacting patients' clinical management since there is a lack of consensus among pathologists <cit.>. Therefore, enhancing the precision, reliability, and reproducibility of histological grading is highly significant. Numerous studies have proposed automated methods to localize and classify FL by using whole-slide images (WSI), scanned images from the tissue samples, aiming to facilitate the work of pathologists. The techniques can be categorized into two groups: 1) machine learning (ML)-based approaches with human-engineered features <cit.> and 2) deep learning (DL)-based approaches <cit.>. Still, the first approach is feeding handcrafted features into ML, which tends to be overfitting, has high false-positive (FP) prediction, and is difficult to generalize <cit.>. Because the performance of the model heavily depends on the combination of features they use. 
Therefore, many studies have used DL-based methods to eliminate the need for hand-engineer features and extract essential features from the training dataset. DL-based models, especially Convolutional Neural Network (CNN), have been recently applied to detect and classify lymph nodes on H&E-stained WSIs. To detect lymphocytes in breast cancer (BC), Liu et al. <cit.> addressed the tumor class imbalance problem by applying random sampling and data augmentation on patches (i.e., cropped images from WSI) before training the InceptionV3 <cit.>. Their method had the best sensitivity on the Camelyon16 dataset. To obtain a robust model in a new cohort, Lu et al. <cit.> proposed an automatic pipeline employing cascade training on U-Net <cit.>. The pipeline is an iterative training process where the model is finetuned on the new cohort using its predicted lymphocyte mask, which was evaluated and refined by pathologists. Two iterations of cascade training were repeated to produce a model with an F1-score of 0.927. However, obtaining the refined mask also increases the workload of experts, contradicting the intended reduction in pathologists' workload. Comparing FL and BC, most DL studies in FL WSIs have focused on patch-level classification (i.e., whether patches contain CB), resulting in lower interpretability to grade FL. In 2019, Somaratne et al. <cit.> developed the one-class training approach to minimize the generalization gap between two FL datasets by combining several images from the target set into the training set. Then apply transfer learning to AlexNet <cit.> on the new training set. The transfer learning model improves patch classification accuracy at the patch level by 12% over the model that trains from scratch. In 2020, Syrykh et al. <cit.> used a CNN-based model to differentiate between FL and follicular hyperplasia (FH) at four different resolutions in histopathology slides. It resulted that the model with the highest resolution achieved accurate patch-level classification. However, this study also showed that the performance of the DL-based approaches is sensitive to the pre-processing of histopathology slides, as evidenced by the drop in the area under the curve (AUC) from 0.92-0.99 (internal dataset) to 0.63-0.69 (external dataset). The DL approach still needs stain normalization (SN) in pre-processing phase. § MOTIVATION AND CONTRIBUTION According to the limitations mentioned above: (1) Deep Learning (DL) is sensitive to the variation of stain color in WSIs; (2) refined labels from experts are required to improve the model during training; and (3) class imbalance between the CB and non-CB classes appears in the dataset. These limitations restricted DL's improvement on FL WSIs to cell-level prediction. To overcome these limitations, we proposed a framework called PseudoCell to explore the feasibility of DL-based object detection models on CB detection tasks. We aim to use the state-of-the-art object detection model, YOLOv8 <cit.>, as our backbone model. Firstly, we compare the consistency of two Stain Normalization (SN) methods on our dataset to prevent the effect of color variation from WSI. Secondly, the need for expert refined labels during training will be imitated through the hard negative mining technique (HNM) <cit.>, i.e., retrieving false-positive (FP) predictions from the trained model, afterward incorporating them into the training set as pseudo-negative labels (non-CB class), and training a new model. 
Since the number of pseudo-negative labels is higher than the number of CB labels from pathologists, the imbalance class issue must be addressed before incorporating pseudo-negative labels. Thirdly, three distinct undersampling approaches were explored to mitigate the class imbalance issue before incorporating pseudo-negative labels into the training set. To our best knowledge, HNM was initially introduced in the field of computer vision and has yet to be utilized in the context of digital histopathological image recognition. While previous work on cancer cells seeking refined labels from experts to enhance the model, we instead attempted to imitate it through the HNM. This framework allows us to improve the model autonomously without relying on additional work from pathologists. Therefore, the comparison between different HNM approaches was mainly investigated. Lastly, we have provided a practical guideline based on high-power field selection and CB identification in WSI for the practical application of our PseudoCell as a pre-screening tool for FL patients. Integrating this framework with histopathological workflow can reduce experts' workload by narrowing down the region experts focus on while examining the tissue. Other potential real-world applications (such as quality control, training, and education tools) are also discussed for the benefit of human-machine collaboration. § METHODS §.§ Data collection This study included 75,245 patches (512x512 pixels) of Follicular Lymphoma (FL) admitted for treatment at the Faculty of Medicine Siriraj Hospital between 2016 and 2020. No significant correlation between clinicopathological parameters was observed (data not shown). The Siriraj Institutional Review Board (SIRB) (COA no. Si973/2020) has approved the procedures for obtaining and using tissue. Formalin-fixed paraffin-embedded (FFPE) tissue samples with a thickness of 3-5 microns were prepared for automated hematoxylin and eosin (H&E) staining and scanned at a resolution of 0.12 microns per pixel using a 3Dhistech Panoramic 1000 microscope with a 40x objective lens. The resulting images were saved in NRXS format. From a total of 75,245 patches, 1203 patches contain Centroblast (CB) cells, and 3045 patches without CB were selected and annotated by a consensus of two doctors (one of them is a pathologist). The annotation is manually drawn around CB as a bounding box (bbox), <ref>(a). §.§ The Proposed Framework Based on the challenge of CB cell detection, we proposed a framework in <ref>(b)-<ref>(d) that gives reproducible cell-level predictions. Our proposed framework comprises three parts: 1) Train original model, 2) Hard Negative Mining pipeline, and 3) Train model with negative pseudo label. §.§.§ Train original model As shown in <ref>(b), three steps comprise this part to obtain an one-class dataset and a CB detection model: 1.1) Stain normalization selection; 1.2) Data preprocessing; 1.3) Model training. paranum[subsubsection] Stain normalization selection: Even though our WSIs came from the same lab and scanner, the WSIs still have the variation in stain colors. So Stain normalization is applied to our preprocessing step. Stain normalization (SN) is the color distribution transformation from a source image I into a target image I^'. The transformation can be described through the operation I^'=f(I, θ) where θ is a collection of parameters derived from the template image, and f is the function that maps the visual appearance of a given image I to the template image. 
Generally, θ is designed to capture the color information of the primary stain components (e.g., hematoxylin and eosin). Consequently, stain-normalized images will have a color distribution similar to the template image <cit.>. In this work, we consider two state-of-the-art SN methods: * Structure Preserving Color Normalization (SPCN): Vahadane et al. proposed in <cit.>, which tackles the stain separation problem with the assumption that stain density is non-negative, and the color basis is sparse. The sparseness constraint reduces the solution space of the color decomposition problem. Then, the color basis of a source image is replaced with those from a template image while maintaining its original stain concentrations. * Deep convolutional Gaussian mixture models (DCGMM): Zanjani et al. proposed in <cit.>. This method first converts the source image into the HSD color system. Then fits a GMM to the color distribution individually per tissue class. To train the DCGMM, E-step and M-step of the EM-algorithm are replaced by gradient descent and the back-propagation algorithm. The advantage of this approach is that it does not need any assumptions about the H&E image content. We conduct an experiment, detailed in Section <ref>, to compare and select the most appropriate SN method for our dataset (i.e., one that produces processed images with low color variation and minimal background error). Data preprocessing: Due to the considerable human errors during annotation, label cleaning was necessary before feeding data into the model. The two most prevalent errors in our dataset are 1) bbox annotations with zero areas and 2) repeated bbox annotations on a single CB cell. Since the annotator may have accidentally generated a bbox with zero areas by clicking the mouse, we removed all bbox annotations with zero areas from our dataset. Regarding the second error, we first calculated the center of each bbox and then retrieved the groups of bounding boxes whose center-to-center distance is within a constant. If bbox annotations share the same CB cell, we select the bbox that best fits the cell based on manual inspection of each bbox group. Then apply the stain normalization method from the previous experiment to the annotated positive patches in order to standardize the color variation on our dataset. Lastly, 80%, 10%, and 10% of the normalized positive patches were separated into train, validation, and test sets to create dataset D_1. Model training: Before feeding the training set into the model, five augmentation methods (flip up-down, flip left-right, rotate 90 degrees, rotate 180 degrees, and rotate 270 degrees) were applied to the training set. YOLOv8 (architecture: X6) was trained and validated using 10-fold cross-validation on the augmented dataset D_1 with default hyperparameter configuration. Stochastic gradient descent (SGD) was applied to reduce cross-entropy loss. The model was trained for a maximum of 500 epochs, with early stopping to terminate training when the validation loss stopped improving. Ultimately. We eventually obtained the original ori model. §.§.§ Hard Negative Mining Pipeline In histopathological image recognition, pathologists typically annotate only target cells (i.e., CB cells) and leave other cells unannotated to minimize the annotation cost. It causes DL-based models to typically perform poorly due to many false-positive (FP) predictions. 
We hypothesize that distinguishing CB cells from other cells that look like CB cells (non-CB cells) is the key to improving the model. One approach is incrementing the non-CB labels as a new class in the dataset. In practice, we retrieve the FP bbox (i.e., non-CB annotation) from the ori model inference on the training set and add them to the training set as a new class. As shown in <ref>(c), the following three steps were employed to generate a dataset with pseudo-negative labels: 2.1) Retrieve FP predictions; 2.2) Undersample; and 2.3) Combine the non-CB class with the training set. paranumPPP[subsubsection] Retrieve FP predictions: To obtain FP predictions, we let ori infer the training set in each fold using a confidence threshold of 0.001 to ensure that the model predicts all negatives. From the training set, the long side of CB bounding box (bbox) is smaller than 100 pixels, so we filter out the FP bbox with a box side greater than 100 pixels. Undersample: Since the number of FP predictions is still greater than that of CB, directly adding negatives into the training set will result in an imbalance class problem. We consider two undersampling strategies to prevent the imbalance issue: Random undersampling and Neighborhood-based Recursive search undersampling (NB-REC) <cit.>. * Random undersampling is a popular non-heuristic technique due to its simplicity of application. Despite its simplicity, there is a significant disadvantage that must be considered. Given that balanced class distribution is a stopping criterion, random undersampling may eliminate potentially useful samples in order to achieve this balance <cit.>. * NB-Rec eliminates the majority class sample, which may overlap with minority class. As described in Algorithm <ref>, the majority sample is considered overlapping when it is in the neighborhood of more than one minority sample. Since the NB-Rec uses K-Nearest Neighbor (KNN), we must search for k prior to execution in order to produce a number of negatives approximately equal to CB. Given that the coordinate is necessary for NB-Rec undersampling, we must extract features from both the ground truth bbox and the FP bbox. The width and height of the bbox were extracted directly from the samples. Six morphological features, described in <cit.>, were calculated using a binary image of each cell in bbox segmented by a trained HoverNet model based on the PanNuke architecture provided in <cit.>. As depicted in <ref>(c) by the red, blue, and purple paths, we obtain three sets of undersampled FP predictions: (1) the set from random undersampling, (2) the set from applied NB-Rec undersampling to bbox width and height, and (3) the set from NB-Rec undersampling applied to the first- and second-principal components of bbox width, bbox height, and six morphological features using the Principal Component Analysis (PCA) method. Combine the non-CB class with the training set: At each path from the previous step, the undersampled FP predictions are added to the training set of dataset D_1. Similar to D_1, the new three independent datasets with pseudo-negative labels contain two classes (CB and non-CB) in the training set, but only one class (CB) in the validating and testing sets. §.§.§ Train model with pseudo-negative label As shown in <ref>(d), similar to the part 1) Training original model, we use the same setup model training step on the dataset of red, blue, and purple paths to get model hnm_random, hnm_box, and hnm_morph, respectively. 
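As an illustration of the Undersample step described above, the sketch below implements one plausible reading of the neighborhood-based overlap criterion (a majority, non-CB sample is discarded when it appears among the k nearest majority neighbors of more than one minority, CB sample), followed by a random trim toward class balance. The exact procedure is the one given in Algorithm <ref> of the NB-Rec reference; the feature dimensions, the value of k, and the final balancing step are assumptions of this sketch.

import numpy as np

def nbrec_like_undersample(maj_feats, min_feats, k=5, rng=None):
    # Drop majority (non-CB) samples that lie in the k-neighborhood of more
    # than one minority (CB) sample, then randomly trim the survivors so the
    # two classes end up approximately balanced.
    rng = rng or np.random.default_rng(0)
    # pairwise distances: rows = minority samples, cols = majority samples
    d = np.linalg.norm(min_feats[:, None, :] - maj_feats[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, :k]          # k nearest majority samples per CB
    counts = np.bincount(knn.ravel(), minlength=maj_feats.shape[0])
    keep = np.where(counts <= 1)[0]             # overlapping majority samples removed
    if keep.size > min_feats.shape[0]:          # final random trim to balance classes
        keep = rng.choice(keep, size=min_feats.shape[0], replace=False)
    return keep

# toy usage with 8-dimensional features (bbox width/height plus morphology)
rng = np.random.default_rng(1)
cb = rng.normal(0.0, 1.0, size=(200, 8))        # minority: CB annotations
noncb = rng.normal(0.5, 1.0, size=(2000, 8))    # majority: FP predictions
kept = nbrec_like_undersample(noncb, cb, k=5, rng=rng)
print(kept.shape)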
§.§ Evaluation metrics §.§.§ Stain normalization evaluation In Experiment I, we compare two stain normalizations (i.e., SPCN and DCGMM) in intensity, hue, and color error. Normalized Median Intensity (NMI) <cit.><cit.> is a popular metric that quantifies the intensity variation of an image population, especially the color constancy of the nuclei. NMI is defined as <ref> where A(i) is the average values of Red, Green, and Blue channels at the pixel i in RGB image I, and P_95 is the 95th percentile. NMI(I) = Median_i ∈ I{A(i)}/P_95{A(i)} Normalized Median Hue (NMH) <cit.> is similar to NMI but looks at the consistency of hue and is defined by replacing the A(i) with H(i) which is the value of hue-channel at the pixel i in HSV image I. The standard deviation (SD) and the coefficient of variation (CV)—standard deviation divided by mean—of NMI and NMH were computed to indicate the relative dispersion of measures around the mean of each image population. The lower CV indicates a lower dispersion. To measure the error in the background of processed images, Absolute Mean Color Error (AMCE) <cit.> is applied on lαβ (decorrelated) color space for both α and β channels, which are given in <ref> and <ref>, respectively. Where μ is the local mean, α_i(tar) is the value of target image at local window i in α channel, α_i(proc) is the value of processed image at local window i in α channel, W is the total number of windows, and β channel for β_i(). AMCE_α = |1/W∑_i=1^Wμ(α_i(tar)) - 1/W∑_i=1^Wμ(α_i(proc))| AMCE_β = |1/W∑_i=1^Wμ(β_i(tar)) - 1/W∑_i=1^Wμ(β_i(proc))| §.§.§ Object detection evaluation In Experiment II, the performance of models is evaluated with a certain Intersection over Union (IoU) threshold. IoU can be denoted as follows: IoU_i = P_i∩ G_i/P_i∪ G_i Where P_i is the i bounding box (bbox) predicted by the model and G_i is the corresponding bbox in the ground truth. If the IoU of a predicted bbox is greater than a 0.5 IoU threshold, the predicted bbox is classified as a true positive (TP). Otherwise, the bbox is classified as a false positive (FP). If the model does not detect the region in the ground truth, the ground truth is classified as a false negative (FN). It should be noted that if more than one predicted bbox matches the same reference bbox, the predicted bbox with the highest IoU is chosen as the TP, and the others are excluded from validation. These three elements (i.e., TP, FP, and FN) allowed us to determine precision (P) and recall (R). The average precision (AP) depicts the trade-off between precision and recall at different thresholds, defined as <ref>, which is the area under the precision-recall curve at different thresholds. The mean average precision (mAP) is the average of each class-specific AP score. Since all models in this work perform single-class prediction, the AP and mAP are equivalent. AP = ∫_0^1 P(R) dR §.§ Experiment setup: All experiments were performed with an NVIDIA Tesla V100-SXM2 graphic card.  §.§.§ Experiment I: Stain Normalization Selection This experiment compares the color consistency of our dataset after applying stain normalization methods (SPCN and DCGMM). We experiment with a template image that an expert prefers from all patches. Then apply both SN methods to the remaining images. The normalized images were evaluated using metrics in Section <ref>. The best SN method will be used in the pre-processing phase of this work. 
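As a concrete reference for Experiment I, the following is a small sketch of how the NMI and NMH statistics defined above, together with their population-level SD and CV, can be computed for a set of RGB patches. The use of matplotlib's RGB-to-HSV conversion and the synthetic patches are assumptions of the sketch, not part of the original pipeline.

import numpy as np
from matplotlib.colors import rgb_to_hsv

def nmi(img):
    # Normalized Median Intensity: median of the per-pixel RGB average
    # divided by its 95th percentile.
    a = img.astype(float).mean(axis=-1).ravel()
    return np.median(a) / np.percentile(a, 95)

def nmh(img):
    # Normalized Median Hue: the same statistic computed on the hue channel.
    h = rgb_to_hsv(img.astype(float) / 255.0)[..., 0].ravel()
    return np.median(h) / (np.percentile(h, 95) + 1e-12)

# population-level dispersion (SD and coefficient of variation)
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(50, 512, 512, 3), dtype=np.uint8)
vals = np.array([nmi(p) for p in patches])
print(vals.std(), vals.std() / vals.mean())   # SD and CV of NMI over the population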
§.§.§ Experiment II: Backbone model evaluation This experiment aims to compare the performance of models from different training approaches (i.e., conventional and HNM approaches) on both object detection and image classification tasks. We use the training pipeline described in Section <ref> to obtain four models: * Original (ori) model: Conventional object detection approach with one class annotation. * Model trained with random HNM (hnm_random): Randomly add FP samples, from ori prediction on the training set, into the training set as a new class. Then train the model with the same setup as ori. * Model trained with HNM of bbox features (hnm_box): Instead of randomly sampling, this approach undersamples the FP samples using NB-Rec on the width and height of the FP bounding box, then adds them into the training set. * Model trained with HNM of morphological features (hnm_morph): Similar to hnm_box, but using NB-Rec on the first- and second-principal components of the six morphological features and the width and height of the FP bounding box. We use the metrics from Section <ref> to evaluate the performance of the models on object-level prediction. For the image classification task, the object-level prediction was mapped into an image-level classification using the following criterion: "For any patch, if a CB prediction exists in the cell-level prediction, the patch is classified as a positive patch. If not, the patch is classified as a negative patch." Since the test set of each fold contains 120-121 images with CB (i.e., positive images), we evaluate model performance on the test set of each fold together with an equal number of randomly selected negative images from our database. § RESULTS §.§ Experimental Results §.§.§ Experiment I: Stain normalization Selection Deep convolutional Gaussian mixture models (DCGMM) yielded the lowest standard deviation (SD) and coefficient of variation (CV) for both Normalized Median Intensity (NMI) and Normalized Median Hue (NMH) metrics, as indicated in <ref>. Since NMI quantifies the color consistency of the nuclei <cit.> and NMH quantifies the global color variation of an image population <cit.>, the results indicate that DCGMM provides qualitatively similar color distributions for nuclei with less color variation within the image population (see <ref>). Comparing the original and the Structure Preserving Color Normalization (SPCN), the box plots in <ref> demonstrate that DCGMM has the smallest spread of NMI and NMH values around the median (inter-quartile range) with variance statistical significance (p < 0.01). Next, we evaluate the background error of each image population. DCGMM has a significantly higher mean Absolute Mean Color Error in α space (AMCE_α) than the original image population. For β space (AMCE_β), DCGMM does not have statistical significance compared to the original. This suggests that the DCGMM-processed images contain more or equal background errors than the original images, contradicting the goal of reducing color variations. Even though SPCN does not have statistical significance with the original at AMCE_α, at AMCE_β, SPCN provides significantly less error than the original. Moreover, both values of SPCN are statistically significantly less than DCGMM. Therefore, we decided to implement SPCN in our framework pipeline, as it offers lower SD and CV in NMI and NMH values than the original and better AMCE values for both α and β spaces than DCGMM.
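Before turning to the Experiment II results, the object-detection metrics of Section <ref> used there can be made concrete with the short sketch below, which matches predicted and ground-truth boxes at an IoU threshold of 0.5 and keeps only the highest-IoU prediction per reference box. The (x1, y1, x2, y2) box format, the greedy matching order, and the toy boxes are assumptions of the sketch.

import numpy as np

def iou(a, b):
    # IoU of two boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def match(preds, gts, thr=0.5):
    # Each ground-truth box is matched to its highest-IoU unused prediction
    # above the threshold (TP); unmatched predictions are FP, unmatched
    # ground truths are FN.
    used, tp = set(), 0
    for g in gts:
        cands = [(iou(p, g), j) for j, p in enumerate(preds) if j not in used]
        best = max(cands, key=lambda t: t[0], default=(0.0, None))
        if best[0] >= thr:
            used.add(best[1])
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    return tp, fp, fn

tp, fp, fn = match([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 10, 10)])
print(tp, tp / max(tp + fp, 1), tp / max(tp + fn, 1))   # TP, precision, recall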
§.§.§ Experiment II: Backbone model evaluation This experiment benchmarked models trained using conventional and HNM approaches for object detection and image classification tasks. Before comparing the performance of the models, we first investigated the optimization process of each model during training. As indicated in <ref>, it was observed that the validation loss of original (ori) model reached the lowest value when converged. Other models (i.e., all HNM approaches) exhibited a similar pattern of converging more rapidly than the ori model, albeit with a higher loss. Despite the model trained with HNM of morphological features (hnm_morph) going with the same trend as other HNM approaches, it achieved the highest performance in terms of mean average precision (mAP@50) and accuracy, as shown in <ref> and <ref>, respectively. For the object detection task, <ref> provided an overview of model performance on each metric with a confidence interval. All models follow the same trend. The hnm_morph achieved slightly better sensitivity, but in contrast, the trade-off appears to be on precision. Nevertheless, its performance is superior to other models in mAP@50 and mAP@50:95. For the image classification task, the hnm_morph outperformed all other models, especially to the model trained with random HNM (hnm_random). Notice that the performance of hnm_random dropped from ori on all metrics. In contrast, hnm_morph, which was trained with the same approach but with a more reasonable undersampling method, improved its performance over the ori. § DISCUSSION §.§ Effect of Training with Pseudo-Negative Labels on the Model Performance Concerning the impact of training with pseudo-negative labels from the hard negative mining (HNM) technique over the conventional training approach, we demonstrated improvements in centroblast (CB) detection and patch classification. However, the validation loss of these models is higher than that of the conventional training method (<ref>). This is because the softmax function divides the total probability mass across multiple classes (i.e., CB and non-CB) rather than just one class of CB. When computing each class's confidence score in the bounding box using the softmax function, the number of divisors is increased to two, resulting in the confidence scores for each class becoming smaller on average as the number of classes increases. Finally, the class loss of the YOLO model, which is the cross-entropy loss, in models trained with HNM is higher than in the ori model. §.§ Effect of Undersampling approaches on the Model Performance Since our framework was designed to imitate the training loop with the pathologist's refined labels, which identifies CB based on cell color and morphology, we retrieved false-positive samples and fed them to the model. We obtain the following models based on three undersampling approaches: hnm_random, hnm_box, and hnm_morph. As the result of object detection and image classification tasks, the model with the morphological features (hnm_morph) performs best. Suggest that the design of the undersampling approach is essential to take advantage of the HNM technique and that pathologists' intuition still provides some information for deep learning in order to distinguish between non-CB cells and actual CB cells. 
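Returning to the softmax argument in the first part of this discussion, its effect can be checked with a tiny numerical example. The logit values below are arbitrary, and the sketch only shows the generic effect of adding a second class under a softmax; it is not the exact loss implementation of YOLOv8.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

one_class = softmax(np.array([2.5]))         # single class: probability mass is 1.0
two_class = softmax(np.array([2.5, 1.0]))    # CB vs. non-CB: the mass is split
print(one_class, two_class)
print(-np.log(one_class[0]), -np.log(two_class[0]))  # cross-entropy of the true class rises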
§.§ Guidance for Clinical Implementation and Future Works In conventional histopathological workflow, pathologists count the number of centroblasts (CB) in ten randomly selected high-power fields (HPF), leading to high inter- and intra-observer variability and being vulnerable to sampling bias. The inter- and intra-observer variability among pathologists is crucial since it directly impacts patient grading and management <cit.>. In order to reduce the variability between pathologists, a solid guideline for finding potential HPF in WSI is one solution. With PseudoCell framework, pathologists will obtain two guidelines (i) heatmap visualization for potential CB regions in WSI and (ii) CB annotations at HPF level for identifying CB. Pathologists' remaining job is to select the HPF and then accept the annotation or self-identify CB cells. Pathologists are only required to set the confidence threshold (conf_thres), which can range from zero to one, when using PseudoCell. The conf_thres parameter determines the initial confidence level of CB annotations reported to pathologists. A low conf_thres (conf_thres = 0.2) produces a dense heatmap in <ref>, whereas a high conf_thres (conf_thres = 0.8) produces a sparse heatmap that more closely resembles the expert-annotation. We will divide the histopathological process into HPF selection phase and CB identification phase. In each phase, the real-world adjustment of conf_thres to facilitate pathologist preference could take the form of the following suggestion: §.§.§ HPF selection phase with a high conf_thres, PseudoCell offers a sparse HPF that is still sufficient to grade FL, which is suitable for pathologists who wish to complete the grading task rapidly. In contrast, when conf_thres is low, PseudoCell generates a dense heatmap that identifies the region containing intensive CB and regions with less CB. This approach is suitable for pathologists who wish to determine the HPF independently. §.§.§ CB identification phase A high conf_thres is advantageous for pathologists who prefer to self-identify on CB with some framework-suggested CB annotation. In contrast, a low conf_thres will enable the framework to recommend more CB annotation, which is ideal for pathologists who wish to check off the annotation. The pathologists' workload can be reduced by PseudoCell accurately narrowing down the areas requiring their attention during examining tissue as in <ref>. From all 24,757 patches with tissue in WSI, the framework highlights 10,353 and 161 patches that contain potential CB candidates based on conf_thres with an inference time of approximately 0.03 seconds per patch. In other words, the framework can eliminate 58.18 to 99.35% of all WSI patches that do not appear to be CB candidates at the conf_thres. Pathologists can therefore focus on identifying CB on the slide. We anticipated that inter- and intra-variability of pathologists would decrease after implementing our framework in the real world. In contrast, the machine benefits from pathologists providing actual CB as refined labels. These labels can be used to improve the model's performance in the future. This cycle leads to human-machine collaboration in the real world, which is one of the objectives of this work. Pseudocell can also provide a second opinion that offers additional safety for patients and instills greater confidence in doctors, enhancing their efficiency and reducing the likelihood of errors. 
For instance, when there is a need to distinguish between an infection and follicular cell lymphoma, particularly in its early stages, Thai pathologists, who may already be handling a heavy workload, could potentially issue a false negative, especially when the pathological area is small. Integrating PseudoCell into the histopathological workflow offers several benefits. Firstly, the model assists pathologists by highlighting regions or suggesting potential CB cell candidates within the tissue, thereby narrowing the examination focus. It serves as an additional quality control mechanism, flagging areas that may contain CB cells and assisting pathologists in not overlooking significant findings, thereby reducing diagnostic errors. In addition, pathologists can use the model's predictions as a benchmark to compare and contrast their observations. This iterative process improves their skills in order to recognize centroblast cells, thereby enhancing diagnostic precision over time. Incorporating PseudoCell contributes to improved efficiency, quality control, and training and education for identifying centroblast cells in histopathology. Lastly, the implementation of the PseudoCell framework indeed represents a cost-effective investment. When compared to the recurrent expenses associated with sustaining a team of pathologists – including salaries, benefits, and training costs – the financial outlay required to incorporate and maintain this AI model is comparatively minimal. Moreover, the application of this technology can drastically boost efficiency by expediting the process of identifying centroblasts, thereby allowing pathologists to concentrate on more intricate tasks. This surge in efficiency can, in turn, decrease operational costs over time, as a speedier diagnosis may result in reduced lab usage and quicker patient turnover. In future work, if there are additional object detection models or updated versions, the PseudoCell framework permits their implementation by modifying the backbone model. Furthermore, dealing with data limitations and model transparency is crucial for pathologists to understand and have confidence in the model's decision-making. Combining weakly supervised paradigm (e.g., MIL or Attention Map) with explainability techniques (e.g., LIME, SHAP, and CAM) is a promising next step to investigate. § CONCLUSION In conclusion, our study introduces the PseudoCell framework for centroblast (CB) cell detection, which enhances the performance of the backbone model by using false-positive samples from the Hard Negative Mining (HNM) method as pseudo-negative labels. PseudoCell effectively distinguishes between actual CB and non-CB cells in patches from whole-slide images (WSI). Our experiments and evaluations demonstrate that model training from HNM on Neighborhood-based Recursive search undersampling using morphological features achieves the best results in CB detection and patch classification tasks. PseudoCell can reduce pathologists’ workload by accurately identifying tissue areas requiring attention during examination. Depending on the confidence threshold, PseudoCell can eliminate 58.18-99.35% of non-CB tissue areas on WSI. Furthermore, PseudoCell can serve as a second opinion to differentiate between infection and follicular cell lymphoma, particularly in the early stages, making it cost-efficient for quality control and educational purposes in CB recognition. 
This study presents a practical centroblast prescreening method that does not rely on pathologists' refined labels for improvement. It suggests the potential for human-machine collaboration in CB identification, alleviating the burden on clinicians by focusing their labeling efforts on regions suggested by PseudoCell, rather than manual labeling as conventionally done. § ACKNOWLEDGMENT The authors wish to thank Surat Phumphuang for her coordination in operating the research.
http://arxiv.org/abs/2307.01846v1
20230704174946
Grad-FEC: Unequal Loss Protection of Deep Features in Collaborative Intelligence
[ "Korcan Uyanik", "S. Faegheh Yeganli", "Ivan V. Bajić" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Collaborative intelligence (CI) involves dividing an artificial intelligence (AI) model into two parts: front-end, to be deployed on an edge device, and back-end, to be deployed in the cloud. The deep feature tensors produced by the front-end are transmitted to the cloud through a communication channel, which may be subject to packet loss. To address this issue, in this paper, we propose a novel approach to enhance the resilience of the CI system in the presence of packet loss through Unequal Loss Protection (ULP). The proposed ULP approach involves a feature importance estimator, which estimates the importance of feature packets produced by the front-end, and then selectively applies Forward Error Correction (FEC) codes to protect important packets. Experimental results demonstrate that the proposed approach can significantly improve the reliability and robustness of the CI system in the presence of packet loss. Collaborative intelligence, deep feature transmission, Grad-CAM, loss/error resilience § INTRODUCTION The deployment of the Internet of Things (IoT) infrastructure offers numerous opportunities for innovative applications that rely on deep neural networks (DNNs) to process the acquired sensor data. However, due to the limited computational and energy resources of edge devices, researchers are investigating ways in which DNN-based analysis of sensory signals acquired at the edge can be most effectively realized. One of the promising strategies is Collaborative Intelligence (CI) <cit.>, which utilizes both edge and cloud resources to enhance the speed and efficiency of DNN computing <cit.>. Typically, CI involves dividing the DNN between an edge device and the cloud, where the edge device runs the DNN front-end and computes features that are then sent to the cloud to be processed by the DNN back-end. Due to the imperfections of real communication channels, transmitting the intermediate feature tensor can result in bit errors at the physical layer and subsequent packet loss at the transport or application layer <cit.>. As a result, error/loss resilience strategies should be incorporated into CI system design to enable effective and accurate analysis of the sensed signals. The error/loss resilience in CI is a relatively unexplored topic. Joint source-channel coding of deep features for the simple binary symmetric channel and the binary erasure channel has been considered in <cit.>. For transmission over packet networks, several loss concealment approaches have been proposed in <cit.>. However, there is still limited understanding of the relevant trade-offs and a lack of design guidelines for error/loss resilience in CI. In this paper, we propose a novel approach to improve the resilience of a CI system to packet loss by using Unequal Loss Protection (ULP) for deep feature transmission. A crucial question to answer in this context is: how can we tell which features are more important than others? To answer this question, we employ a well-known method from the domain of explainable AI, namely Gradient-weighted Class Activation Mapping (Grad-CAM) <cit.>. For a given input image, Grad-CAM estimates the importance of each pixel according to its contribution to making the correct inference decision.
We adjust Grad-CAM to estimate the importance of features being transmitted, rather than the input. However, even such a modified Grad-CAM is not realizable in the context of CI, because the edge device – where these estimates need to be made – does not have access to the inference decision. Hence, we develop a proxy model for Grad-CAM, which can approximate this estimate by observing only the input image. In summary, our contributions are as follows: * We demonstrate that a Grad-CAM-like approach can reliably estimate the importance of features being transmitted in a CI system. * We develop a model that can approximate Grad-CAM estimates without access to the inference decision. * We show that a ULP approach based on such importance estimates is capable of providing significant loss resilience to a CI system based on ResNet-50 <cit.>. The paper is organized as follows. Section  <ref> briefly reviews the related work on loss/error resilience in CI. The proposed methods – feature importance estimation and unequal loss protection – are described in Section <ref>. The experimental results are presented in Section <ref>, followed by the conclusions in Section <ref>. § RELATED WORK Choi et al. <cit.> developed a neural network for joint source-channel coding of intermediate features for discrete channels based on the maximization of mutual information between the image source data and the noisy latent codeword. Specifically, in this approach, the binary symmetric channel (BSC) and binary erasure channel (BEC) are considered. Another work <cit.> designed an end-to-end trainable architecture called BottleNet++ for compressing and transmitting DNN features over a BEC or an Additive White Gaussian Noise (AWGN) channel. It is noted that the aforementioned approaches are targeted at improving feature transmission robustness against bit errors; in other words, they are physical-layer techniques. Often, physical-layer bit errors manifest themselves as packet loss at the application layer. In <cit.>, well-known tensor completion methods, namely simple low-rank tensor completion (SiLRTC), and highly accurate low-rank tensor completion (HaLRTC), were used to recover missing data in the deep feature tensor. Moreover, an approach called adaptive linear tensor completion (ALTeC) was developed, which was much faster and as accurate as SiLRTC and HaLRTC, but it required pre-training for a specific DNN backbone. In <cit.>, another approach called content adaptive linear tensor completion (CALTeC) was developed based on estimating a linear relationship between missing and available features, which did not require pre-training for a specific DNN backbone. Another backbone-agnostic approach for missing feature recovery was developed in <cit.> using the concept of inpainting. Our focus on this paper is on ULP which, to our knowledge, has not been studied in the context of CI. However, ULP was a popular topic in image and video communication research <cit.>. In that line of research, a core question was how to decide which parts of the image/video are more important than others, so that they could be adequately protected. In this paper, we ask the same question for deep features being transmitted, and we offer a way to answer it. § PROPOSED METHOD §.§ System overview The proposed approach is depicted in Fig. <ref>. The objective is to mitigate the impact of packet loss during feature transmission from the edge device to the cloud over a packet loss channel by using Forward Error Correction (FEC) to provide ULP. 
The edge device processes the input image X and generates a deep feature tensor χ∈ℝ^h× w × c, where h, w, and c represent the width, height, and the number of tensor channels, respectively. The deep feature tensor is then 8-bit quantized and packetized for transmission over the packet network. We adopt the packetization scheme from <cit.>, where r consecutive rows from a given tensor channel form a packet, as illustrated in Fig. <ref>. Let p_i^(j) be the set of (x,y) coordinates of the i-th packet in the j-th tensor channel. We will interchangeably refer to p_i^(j) as a packet and the packet's location in the tensor. In order to minimize the impact of packet loss, we employ an importance estimator to estimate the importance of each element in χ. The importance estimator produces a tensor ℐ of the same dimension as χ, where each element of ℐ is an estimate of the importance of the corresponding element in χ. Using ℐ, FEC is assigned to the feature packets that contain the most important features, and then the feature packets and FEC packets are sent to the cloud. Upon reception, the lost feature packets are recovered to within the erasure correction capability of the FEC code <cit.>. The data that could not be recovered is replaced by zeros. Finally, the resulting feature tensor is fed to the DNN back-end to produce the inference output. Our DNN backbone is ResNet-50 <cit.> split at layer , but the overall approach is applicable to other DNNs and split points. §.§ Feature importance estimator The crucial problem in applying ULP to feature transmission is to decide on the relative importance of features. In our case of packet-based transmission, we will have to decide on packet importance. We present two methods for doing so: modified Grad-CAM and proxy model. Modified Grad-CAM: The original Grad-CAM <cit.> produces an importance estimate for each pixel in the input image. Hence, we make a few modifications to enable it to estimate feature importance, rather than input pixel importance. Specifically, we modify Grad-CAM by omitting globally averaging pooled gradients and summation of its channel importance maps. We extract such raw importance maps as a tensor ℐ from the split-point of our DNN, such that it has the same dimension as the feature tensor χ. Importance tensor ℐ effectively provides an estimate of the importance of each feature in the feature tensor χ. From ℐ, we compute the importance score I_i^(j) of packet p_i^(j) as: I_i^(j)=∑_(x,y) ∈ p_i^(j)ℐ(x,y,j). Then the values of I_i^(j) can be used to decide which packets will be protected by FEC. While this approach provides a good estimate of the importance of each packet, as will be seen in the experiments, it is not realizable in practical CI. This is because Grad-CAM requires DNN's output, and passing gradients through DNN back-end, before ℐ can be produced. Yet, in CI, we cannot obtain the DNN output before features are transmitted. Hence, we propose another method to estimate packet importance below. Proxy model: To enable estimating feature importance in a Grad-CAM-like manner, but without access to the DNN back-end or output, we propose to train a small DNN (“proxy model”) to estimates feature importance directly from the input. We take this proxy model to have the same architecture as the DNN front-end, in our case ResNet-50 up to layer . This proxy model is trained to approximate Grad-CAM's output. Specifically, let the proxy model's output be ℐ, then the training loss function is ℒ= MSE(ℐ,ℐ), as shown in Fig. <ref>. 
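The packetization and packet-importance computation described above reduce to a few array operations. The sketch below assumes the layer-21 tensor shape of 56 x 56 x 256 and r = 7 rows per packet used in the paper, while the random tensor stands in for the importance map produced by either the modified Grad-CAM or the proxy model.

import numpy as np

def packet_scores(imp, r=7):
    # Sum the importance tensor (h, w, c) over groups of r consecutive rows
    # of each channel, giving one score I_i^(j) per packet.
    h, w, c = imp.shape
    assert h % r == 0
    # reshape to (packets_per_channel, r, w, c), then sum each packet's rows and columns
    return imp.reshape(h // r, r, w, c).sum(axis=(1, 2))   # shape: (h // r, c)

rng = np.random.default_rng(0)
importance = rng.random((56, 56, 256))          # stand-in importance tensor
scores = packet_scores(importance)              # 8 packets per channel, 256 channels
order = np.argsort(scores, axis=None)[::-1]     # packets ranked by estimated importance
print(scores.shape, np.unravel_index(order[0], scores.shape))  # most important (i, j)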
Once the proxy model is trained, the packet importance can be computed based on its output as: I_i^(j)=∑_(x,y) ∈ p_i^(j)ℐ(x,y,j). Fig. <ref>(a) shows a portion of the tiled feature tensor χ for an input image of an ostrich. Fig. <ref>(b) shows the corresponding packet importance I_i^(j) produced by Grad-CAM, where each packet occupies r=7 rows of a feature tensor's channel. Fig. <ref>(c) shows the corresponding packet importance estimate I_i^(j) produced by the proxy model. While not equal to I_i^(j), proxy's estimate correctly identifies the cluster of important packets. As will be seen in the results, the proxy model provides almost as good identification of important packets as Grad-CAM, while being fully realizable in CI. §.§ Forward Error Correction We use systematic erasure correction Reed-Solomon (RS) codes <cit.> as the Forward Error Correction (FEC) mechanism. Codewords of such codes are parametrized by two parameters, (n,k), n>k, where n is the total number of symbols and k is the number of data symbols, so that n-k is the number of redundancy symbols. Such codes have the property that the original k data symbols can be recovered from any subset k out of n symbols; in other words, n-k erasures can be corrected. By constructing codewords across packets <cit.>, one can create n-k FEC packets for a group of k data packets such that n-k lost packets from the group of n total packets can be recovered. To apply such FEC in our system, the packets of a given feature tensor are first sorted according to their estimated importance I_i^(j). Then a certain percentage of the important packets are protected by FEC. However, simply adding FEC packets would increase the amount of data to be transmitted. To address this issue, we adopt a strategy of dropping the least important packet for each added FEC packet. Specifically, A% of the least important feature packets are removed and replaced by the same number of FEC packets, which then protect the remaining B% of the feature packets. Obviously, A+B=100, and we refer to such a scheme as FEC_A_B. This scheme provides unequal loss protection (ULP) because some feature packets are protected, some are not, and some are deliberately dropped in favor of FEC packets. § EXPERIMENTS As mentioned earlier, we use the pre-trained ResNet-50 model <cit.> split at layer , such that layers prior to this one form the DNN front-end and subsequent layers form the DNN back-end. To run the experiments, we utilized the same 882 images used in <cit.>, which are a subset of 10 classes of the ImageNet <cit.> test set. The tensor χ produced by the DNN front-end had dimensions of 56 × 56 × 256 and was quantized to 8 bits per tensor element. The tensor was packetized into packets spanning r=7 rows, resulting in 8 packets per tensor channel. The proxy model was trained using the ImageNet <cit.> training set, with 100 randomly selected samples from each of the 1000 classes. The Adam optimizer <cit.> was used, with the initial learning rate of 10^-1 and decreasing to 10^-11 over 16 epochs. In the first experiment, we assess how well the modified Grad-CAM can identify important packets, and how well the proxy model can approximate Grad-CAM. The results are given in Fig. <ref>, which shows the Top-1 classification accuracy as a function of the percentage of packets lost. The dashed curves correspond to importance estimates by the modified Grad-CAM: green curve for losing the least important packets and blue curve for losing the most important ones. 
Clearly, the modified Grad-CAM is able to separate important from non-important packets; when important packets are lost, the accuracy drops quickly, while when least important packets are lost, the accuracy is maintained even with 60% lost packets. The proxy model, whose performance is indicated by solid curves, is not as good as the modified Grad-CAM, but is still able to identify important and less important packets reasonably well. The next experiment is performed with an independent and identically distributed (iid) packet loss. Specifically, we choose a packet loss probability P_L∈{0.1, 0.2, ..., 1} and run the experiment over the test set 10 times, recording the average Top-1 accuracy. Here, packet importance estimates are provided using the proxy model. In Fig. <ref> we show the results for several FEC_A_B schemes (pink curves) as well as unprotected packet transmission (red curve). We can see that as A increases, the performance improves because the increased amount of FEC protects the most important packets, while the least important feature packets are removed. Using FEC_50_50, we are able to achieve near-lossless performance even with 50% packet loss. This also indicates that ResNet-50 produces fairly redundant features, otherwise maintaining the accuracy would not be possible while losing a large fraction of them. Our proposed method correctly identifies which of the features are important and ensures that those features are protected. § CONCLUSIONS In this paper, we proposed a novel ULP-based approach that charts a new direction to improve the resilience of CI systems in the presence of packet loss. First, we demonstrated that a Grad-CAM-like approach is capable of providing reliable estimates of feature importance, which are needed for ULP. Second, we trained a proxy model for feature importance estimation, which can be deployed in a real CI system. The proxy model does not require access to the DNN output and makes its estimates solely based on the input image. Finally, we showed how the ULP mechanism and the feature importance estimator work together to prioritize the most important packets and minimize the impact of packet loss on the inference accuracy of the distributed DNN model. The resulting scheme significantly extends the range of packet loss rates over which reliable inference accuracy can be provided. IEEEbib-abbrev
http://arxiv.org/abs/2307.01047v1
20230703142404
Cross-modal Place Recognition in Image Databases using Event-based Sensors
[ "Xiang Ji", "Jiaxin Wei", "Yifu Wang", "Huiliang Shang", "Laurent Kneip" ]
cs.CV
[ "cs.CV" ]
Visual place recognition is an important problem towards global localization in many robotics tasks. One of the biggest challenges is that it may suffer from illumination or appearance changes in surrounding environments. Event cameras are interesting alternatives to frame-based sensors as their high dynamic range enables robust perception in difficult illumination conditions. However, current event-based place recognition methods only rely on event information, which restricts downstream applications of VPR. In this paper, we present the first cross-modal visual place recognition framework that is capable of retrieving regular images from a database given an event query. Our method demonstrates promising results with respect to the state-of-the-art frame-based and event-based methods on the Brisbane-Event-VPR dataset under different scenarios. We also verify the effectiveness of the combination of retrieval and classification, which can boost performance by a large margin. § INTRODUCTION Visual place recognition (VPR) with conventional cameras has long been studied in both computer vision and robotics communities. It is critical for both 6DoF localization <cit.> and loop closure detection in SLAM systems <cit.><cit.>, and thus helps to enable immersive experiences in AR/VR and autonomous driving in GPS-denied environments, respectively. VPR is usually formulated as an image retrieval problem <cit.>, a task that searches in an image database to retrieve the most similar ones to a query image. Previously seen places are stored as images and each one of them is assigned a location identifier, which can be used to localize the query place later on. However, most frame-based methods struggle in real-world environments where appearance changes are very common over any period of time (e.g. moving objects, day-night differences, weather, and seasonal variations <cit.>). The emergence of event cameras points out a promising avenue towards solving this problem. Unlike frame-based sensors producing intensity images at a fixed rate, an event camera is a novel bio-inspired vision sensor that asynchronously detects intensity changes at each pixel. Each time the intensity change surpasses a defined level, an event is fired. An event is then represented as a tuple of 2D pixel coordinates, a timestamp, and a polarity indicating the direction of the intensity change.
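To fix notation for readers unfamiliar with event cameras, the sketch below spells out this per-event representation together with a helper that gathers the events fired within a fixed temporal window (the grouping that is later accumulated into event frames). The field names and second-valued timestamps are conventions assumed here, not prescribed by the paper.

```python
from typing import List, NamedTuple

class Event(NamedTuple):
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (convention assumed here)
    polarity: int   # +1 for an intensity increase, -1 for a decrease

def events_in_window(events: List[Event], t0: float, dt: float) -> List[Event]:
    """Return the events fired within the temporal window [t0, t0 + dt)."""
    return [e for e in events if t0 <= e.t < t0 + dt]

# Tiny illustrative stream: two of the three events fall inside a 25 ms window.
stream = [Event(10, 20, 0.001, +1), Event(11, 20, 0.012, -1), Event(40, 5, 0.040, +1)]
print(events_in_window(stream, t0=0.0, dt=0.025))
```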
The invariance to scene illumination <cit.> makes them particularly suitable for the VPR task, and several event-based VPR methods <cit.> have been recently proposed to deal with challenging illumination changes. While using events leads to impressive resilience with respect to appearance changes, it is worth noting that the reference database adopted by those event-based VPR methods is only formed by event data captured under highly similar dynamics. If a place is revisited under different dynamics, fired events will change drastically, thus it is not clear whether such methods would still succeed. Moreover, purely event-based methods are of low practical value in terms of downstream applications compared with an image database. For example, large-scale Structure-from-Motion (SfM) <cit.> is often solved upon image databases thereby enabling further 6DoF pose estimation <cit.>. In this work, we bridge the gap between events and real-world images and propose the first cross-modal place recognition pipeline that robustly localizes event queries in a large database of static imagery. Our end-to-end network consists of a global feature extraction backbone to capture image-wide information, a retrieval branch to obtain initial results, and a cross-modal fusion layer followed by a simple MLP-based classifier to further predict the similarity score between two different input modalities (i.e. events and frames). Through extensive experiments on the Brisbane-Event-VPR dataset <cit.>, we demonstrate competitive performance with respect to the state-of-the-art frame-based and event-based VPR approaches. Despite the increased challenge stemming from the cross-model matching, our method still takes advantage of the illumination-resilient property of events and manages to beat frame-based methods in some difficult illumination conditions. We furthermore demonstrate the benefits of the additional cross-modal fusion and classification over the naive implementation of simply applying a retrieval network for cross-modal VPR. The main contributions of this paper are as follows: * We present the first end-to-end pipeline for event-based cross-modal place recognition in image databases. Events possess the advantageous property of being less affected by varying illumination conditions while performing in the traditional image databases easily enables downstream vision tasks such as 6DoF pose estimation. * We combine the weakly supervised representation learning and the fully supervised matching function learning with a cross-modal fusion layer, which is shown to be effective at achieving better results for the VPR task. * We conduct comprehensive experiments on the Brisbane-Event-VPR dataset <cit.> and show that our method successfully handles a variety of different scenarios and demonstrates competitive performance with respect to frame-based VPR methods. § RELATED WORK We start by reviewing some classical frame-based VPR approaches and recently proposed event-based solutions. Then, we will give a brief introduction to bilinear pooling used for cross-modal fusion. §.§ Frame-based Visual Place Recognition Place recognition is a fundamental problem in robotics and computer vision applications. The most seminal contribution to visual place recognition goes back to Video Google <cit.>. This work introduces the concept of histograms over visual words, a global image descriptor for retrieving similar images. The use of the so-called bags of keypoints or words has also been introduced by Csurka et al. 
<cit.> and extended by <cit.>. Jégou et al. <cit.> and Arandjelovic et al. <cit.> have furthermore proposed aggregation methods to compress local descriptors into compact whole image representations. The community has continued the development of learning-based alternatives to global image representations for place recognition. Examples are given by the works of Arandjelovic et al. <cit.>, Miech et al. <cit.>, and Ong et al. <cit.>, who introduce deep networks named NetVLAD, NetVLAD-CG (NetVLAD with context gating) or NetBOW, respectively. The most recent efforts in using learning-based representations for place recognition focus on coarse-to-fine approaches <cit.> or multi-scale approaches in which local and global features are fused for place recognition-related tasks <cit.>. More detailed advantages and challenges of VPR are well explained by the original work of Lowry et al. <cit.> as well as the recent survey by Garg et al. <cit.>. While a number of successful works have already been presented, it remains challenging for frame-based solutions to deal with varying illumination conditions. §.§ Event-based Visual Place Recognition Owing to the superior abilities of event-based sensors, the community has recently stepped up efforts towards visual localization and SLAM with dynamic vision sensors <cit.>. In order to gain invariance with respect to the generally unknown camera dynamics, Fischer and Milford <cit.> propose to use ensembles of temporal windows of events for place recognition. In their further work <cit.>, they analyze the number of events needed for robust place recognition which thereby implicitly adjusts the length of the temporal window according to the captured amount of information. Recently, the community has proposed an increasing number of network-based solutions for event-based VPR. The most straightforward solution consists of reconstructing images from events <cit.> and appending traditional, image-based VPR. However, this approach is prone to fail as it does not describe the input in an appearance-invariant domain. As an alternative, Lee and Kim <cit.> propose EventVLAD, a network to reconstruct gradient maps that are then fed to a NetVLAD architecture, thereby achieving invariance versus day-to-night illumination changes. However, the method uses events that are simulated using Carla, and no direct cross-modal registration ability between events and a database of real-world images was demonstrated. Hussaini et al. <cit.> propose a method based on Spiking Neural Networks (SNNs), which includes a weighting scheme that down-weights the influence of ambiguous neurons responding to multiple different reference places. While interesting, SNN approaches are currently unable to match the maturity of DNNs. Kong et al. <cit.> use EST voxel grid-based event representation <cit.> appended by ResNet and VLAD layers to generate compact descriptors for short event sequences. The network is trained end-to-end based on the triplet ranking loss. However, it again does not perform direct cross-modal registration but matches short sequences of events that are collected under highly similar dynamics. Registration with respect to a database of static imagery is not possible. Hou et al. <cit.> propose to fuse frames and events to take advantage of two modalities. Their approach consists of attention-based, parallel multi-scale feature extraction from events and images, and subsequent multi-scale fusion and descriptor aggregation. 
The method involves a heavy network architecture that is unable to run in real time. A similarly heavy network-based approach was presented by Huang et al. <cit.>, who introduce a sequential application of a cross-modality attention module, a self-attention module, and a concluding pooling layer. Their method represents the current state-of-the-art and outperforms both frame and event-based alternatives. To the best of our knowledge, we present the first cross-modal VPR framework that addresses the challenging problem of directly querying events on top of a database of static images. §.§ Bilinear Pooling The simplest approaches for merging modalities are vector concatenation and element-wise sum. To improve the fusion quality, Tenenbaum et al. propose a pooling method, called bilinear pooling <cit.>, which calculates the outer product of two vectors such that elements in each vector can have a multiplicative interaction with another vector. Lin et al. <cit.> apply bilinear pooling to the fine-grained visual recognition task in order to fuse the feature vectors from two CNNs, achieving large improvements on this task. However, the high dimensionality (i.e. n^2) of bilinear pooling limits its use in memory and time-intensive applications. Therefore, Gao et al. <cit.> propose two compact bilinear features based on polynomial kernel analysis, which preserve the same discriminative power but with significantly reduced dimensions. Fukui et al. <cit.> then extend it to the question-answering task for vision and text modalities. In this paper, we also use compact bilinear pooling to combine the feature vectors extracted from images and event frames. This fusion step followed by a binary classification can boost retrieval performance by a large margin. § METHOD We start by providing an overview of our method before going into the details of the event frame preparation and our network-based representation learning. We conclude with details on our matching function learning and the implementation of the framework. §.§ Overview The pipeline of our event-RGB cross-modal VPR framework is shown in Figure <ref>. It can be divided into three parts, namely backbone, representation learning (i.e. retrieval), and matching function learning (i.e. classification). The backbone (E_img or E_event) is shared by two following small sub-networks (E_retr and E_cls) to extract global features for retrieval and classification, respectively. Representation learning is widely used in both frame-based <cit.> and event-based <cit.> VPR methods. It is simple and effective but there is no interaction between the different modalities. To overcome this shortage, we further add matching function learning, which comprises a cross-modal fusion layer (i.e. CBP) and a multilayer perceptron (MLP) to improve on the initial retrieval results. While similar in spirit to a recommender system <cit.>, our framework represents the first VPR work adopting matching function learning to improve over simple representation learning. Next, we will describe the two key components in detail. §.§ Event Frame Generation There are several ways to convert the raw asynchronous event stream into a more suitable representation for later processing, such as event frames <cit.>, voxel grid <cit.> and event spike tensor <cit.>. Given that we aim at cross-modal retrieval in an image database, we choose the most related representation of event frames <cit.> for our events. 
Specifically, events within a fixed period of time are accumulated into a frame to form an event image, where the intensity value is represented by the spatial distance between the pixel and its nearest event. Such representation reduces the motion dependency of the event and preserves the dense texture information of the scenario. Furthermore, denoising and filling techniques are applied to the event image to enhance the quality and stability of the representation. §.§ Network Backbone Our network is built upon VGG16 <cit.>, a deep convolutional neural network architecture for large-scale image classification tasks. Here we use it as our backbone to extract high-level information from both event frames and RGB images. The output feature maps are then fed into two small networks to generate global feature vectors for subsequent retrieval and classification, respectively. Let f_θ denote the backbone. We have F_i = f_θ(I_i), where I_i is either an event frame or an input RGB image. §.§ Representation Learning We map the input images and events into the representation space, where they can be directly compared using distance metrics. Global Feature Extraction for Retrieval. We put a NetVLAD layer <cit.> after a small sub-network E_retr such that it aggregates mid-level feature maps into a powerful feature vector encoding the global image-wide information. This sub-network only consists of three convolutional layers, and we denote it f_ϕ. The output for retrieval is then given by F_i^retr = NetVLAD(f_ϕ(F_i)) ∈ℝ^(K× D)× 1, where K is the VLAD parameter indicating the number of cluster centers, and D is the dimension of local features extracted by CNNs. Triplet Loss. Towards the goal of matching query events in image databases, a training triplet {I_a, I_p, I_n} containing an anchor event frame I_a, a positive image I_p, and a negative image I_n, is processed by the retrieval branches in a single forward pass. Thus, the resulting features are as follows {F_a^retr, F_p^retr, F_n^retr}. The triplet loss is then given by L_triplet = max(d(F_a^retr, F_p^retr) - d(F_a^retr, F_n^retr) + α, 0) where d(·) is the Euclidean distance and α is a margin between positive and negative pairs. §.§ Matching Function Learning We fuse the triplet features for classification in a pair-wise manner by means of Compact Bilinear Pooling (CBP) <cit.>, and then successively feed the fused features into an MLP-based classifier to predict the similarity score. Thus, a complex matching function is learned by the MLP under strong supervision signals. Intermediate Feature Extraction for Classification. The sub-network architecture for classification is quite similar to the one for retrieval. The difference is that E_cls has a fully connected layer following the CNNs to produce a lower-dimensional feature vector. Let f_ω denote the sub-network, then we have F_i^cls = f_ω(F_i). Cross-modal Fusion. The CBP algorithm (see Figure <ref>) first projects the feature vector F_i^cls∈ℝ^n to F̂_i^cls∈ℝ^m using the Count Sketch projection function Ψ <cit.>, i.e. F̂_i^cls = Ψ(F_i^cls). Then, the fusion of two feature vectors can be formulated as F̂_ij^cls = FFT^-1(FFT(F̂_i^cls) ⊙FFT(F̂_j^cls)), where ⊙ refers to element-wise product, and FFT and FFT^-1 represent the Fast Fourier Transform and its inverse, respectively. 
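The Count Sketch projection Ψ and the FFT-based fusion above amount to a circular convolution of the two sketched vectors. The following is a minimal NumPy sketch of compact bilinear pooling; the input dimension of 4096 and output dimension of 1024 match the figures quoted in the implementation details below, while the fixed random hashes and the absence of any learning are simplifications relative to the actual network.

```python
import numpy as np

def make_sketch_params(n, m, seed):
    """Random hash h: [n] -> [m] and signs s in {-1, +1}^n defining Psi."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, m, size=n), rng.choice([-1.0, 1.0], size=n)

def count_sketch(v, h, s, m):
    """Project an n-dim vector to m dims by scatter-adding its signed entries."""
    out = np.zeros(m)
    np.add.at(out, h, s * v)
    return out

def compact_bilinear_pooling(f_a, f_b, m=1024):
    """CBP(x, y) = IFFT(FFT(Psi(x)) * FFT(Psi(y))), as in the formula above."""
    n = f_a.size
    h1, s1 = make_sketch_params(n, m, seed=0)   # independent hashes per branch
    h2, s2 = make_sketch_params(n, m, seed=1)
    x = count_sketch(f_a, h1, s1, m)
    y = count_sketch(f_b, h2, s2, m)
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

rng = np.random.default_rng(42)
fused = compact_bilinear_pooling(rng.standard_normal(4096), rng.standard_normal(4096))
print(fused.shape)   # (1024,)
```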
As for the training triplet, we construct the anchor-positive pair {F_a^cls, F_p^cls} and the anchor-negative pair {F_a^cls, F_n^cls}, which are passed through the CBP module to get the corresponding fused features F̂_ap^cls and F̂_an^cls. Similarity Classification. To determine whether or not the two inputs are similar to each other, we use a 3-layer MLP, denoted as g_τ, to approximate the complex matching function. Hence, S_ap = g_τ(F̂_ap^cls)  and  S_an = g_τ(F̂_an^cls) are the two predicted matching scores. The parameters of MLP are optimized by minimizing a binary cross-entropy loss. Note that it is easy to get the ground truth labels for classification since we know the relations among the training triplet. Therefore, the loss function for similarity classification is L_cls = BCE(S_ap, 1) + BCE(S_an, 0). §.§ Implementation Details Loss Function. The overall loss function for end-to-end training is the sum of the triplet and the classification loss L = L_triplet + L_cls. Network Training. For event frame generation, we use a temporal window of size Δ T = 25ms and follow the steps described in <cit.>. The image resolution of DAVIS346 is 346 × 260, but the resolution of the RGB images from the consumer camera is 640 × 480. Therefore, we resize all the images with higher resolution to be consistent with event frames and DAVIS RGB images. The number of cluster centers K in the NetVLAD layer is set to 64 and the dimension of local features D is 512, resulting in a (64 × 512)-dim feature vector for retrieval. The intermediate feature for classification is 4096-dim. It will be fused with another feature vector by the CBP module to obtain a 1024-dim feature for final similarity classification. The margin α in the triplet loss is set to 0.1, and we use a SGD optimizer with a learning rate of 0.1 to train the whole network in an end-to-end manner. Inference. During inference, the workflow is slightly different from training. The image database is pre-processed offline such that each RGB image has two feature vectors associated with it, one for retrieval and the other one for classification. Given an event query (i.e. event frame), we first perform image retrieval in the database to obtain the initial result which is sorted in descending order according to the feature distances. After narrowing down the search scope, we then re-rank the initial result by computing the similarity scores between the query and each image candidate. Therefore, the final result is several retrieval candidates sorted in descending order according to their similarity scores. § EXPERIMENTS In this section, we first conduct experiments on an event-based VPR dataset and compare our method with frame-based and event-based approaches quantitatively to demonstrate the performance under different scenarios. Next, we verify the effectiveness of cross-modal fusion and classification by comparing it with the naive implementation of simple retrieval. Finally, we discuss the limitations of our method and point out future directions to further improve it. §.§ Dataset The Brisbane-Event-VPR dataset <cit.> provides a benchmark for evaluating the performance of event-based VPR algorithms. It is captured in an outdoor environment and covers a distance of approximately 8 km along a route captured under 6 different scenarios with changing illumination conditions (i.e. daytime, sunset, sunrise, morning, and night). 
We divide each traverse into 3 splits that are geographically non-overlapping and use them for training, validation, and testing, respectively. The specific configuration is listed in Table <ref>. Except for the event data, this dataset also contains RGB images collected by an event camera (DAVIS346) and a consumer camera. Therefore, each data sample comprises a converted event frame, a DAVIS RGB image, a high-quality RGB image, and a GPS coordinate. Some examples are shown in Figure <ref>. §.§ Comparisons Given that our cross-modal method naturally includes a frame branch and an event branch for retrieval, these two branches can work independently and serve as our frame-based and event-based references, respectively. Although these two branches both possess a network architecture similar to NetVLAD <cit.>, they are trained with different data formats (i.e. event frames and RGB images). Here we carry out experiments on both high-quality RGB images and DAVIS RGB images in order to fully demonstrate the performance of the different methods. The database for retrieval is built using the test set from the daytime, and we use test samples from all 6 scenarios as queries to search the database. §.§ Metrics Following prior work <cit.>, we use Recall@N to evaluate the performance of different methods. In our experiments, we choose N={1, 5, 10, 15, 20, 30}. A match for the given query is considered as positive if the geographic distance is less than 35m. §.§ Performance in Different Scenarios The comparison results on the Brisbane-Event-VPR dataset under different scenarios are shown in Table <ref>, which is color coded to better visualize the data (red, green, blue, and purple indicate values in descending order). As can be observed, frame-based and event-based methods can achieve a 100% recall rate in the daytime setting given the query and database samples come from the same source. On contrary, obtaining similarly excellent performance in the cross-modal setting is unrealistic given the discrepancy between event queries and image samples. From data examples shown in Figure <ref>, we can discover that in most cases the image quality of a consumer camera is better than a DAVIS camera. Also, the converted event frame is capable of reserving most of the important information in a scene with less interference from illumination conditions. These properties are again proven in Table <ref>, where the frame-based method using high-quality images outperforms that of using DAVIS images in most of the scenarios, and the event-based method performs better than frame-based methods in poor illumination conditions. Though the cross-modal task is harder than the traditional VPR task, our method still benefits from some of the good properties of events and beats the frame-based methods in some challenging scenarios (see blue areas for our method). Further qualitative results of our method are indicated in Figure <ref>. §.§ Ablation Study To demonstrate the effectiveness of the proposed cross-modal fusion and classification for re-ranking, we compare our method with a naive alternative, that is, a simple pipeline only applies the retrieval function. The comparison results are shown in Table <ref>. As can be observed, the combination of retrieval and classification can boost performance by a large margin. §.§ Limitations Although our hybrid cross-modal pipeline has shown some promising results in the VPR task, there is still a gap in overall performance compared to frame-based and event-based methods. 
However, our approach provides a new perspective on utilizing event-based and frame-based sensors together. Our purpose is to find an alternative to images in poor illumination conditions (e.g. night), not to replace mature frame-based sensors in all circumstances. Therefore, we will focus on optimizing the network architecture in future work to further improve retrieval performance, and we will also explore the possibility of 6DoF pose estimation given an event query. § CONCLUSIONS We have proposed the first cross-modal VPR framework that directly retrieves regular images for event queries. Strong benefits over a plain NetVLAD-style retrieval architecture were obtained by adding a re-ranking module comprising a compact bilinear pooling layer and a similarity classification network. Although the cross-modal nature of the retrieval task remains challenging, our method still achieves competitive results compared to traditional frame-based alternatives in difficult illumination conditions. We believe that our method offers a new way to combine the advantages of the two modalities and makes an important contribution towards the practical viability of event-based VPR.
http://arxiv.org/abs/2307.01258v1
20230703180001
The Atacama Cosmology Telescope: High-resolution component-separated maps across one-third of the sky
[ "William R. Coulton", "Mathew S. Madhavacheril", "Adriaan J. Duivenvoorden", "J. Colin Hill", "Irene Abril-Cabezas", "Peter A. R. Ade", "Simone Aiola", "Tommy Alford", "Mandana Amiri", "Stefania Amodeo", "Rui An", "Zachary Atkins", "Jason E. Austermann", "Nicholas Battaglia", "Elia Stefano Battistelli", "James A. Beall", "Rachel Bean", "Benjamin Beringue", "Tanay Bhandarkar", "Emily Biermann", "Boris Bolliet", "J Richard Bond", "Hongbo Cai", "Erminia Calabrese", "Victoria Calafut", "Valentina Capalbo", "Felipe Carrero", "Grace E. Chesmore", "Hsiao-mei Cho", "Steve K. Choi", "Susan E. Clark", "Rodrigo Córdova Rosado", "Nicholas F. Cothard", "Kevin Coughlin", "Kevin T. Crowley", "Mark J. Devlin", "Simon Dicker", "Peter Doze", "Cody J. Duell", "Shannon M. Duff", "Jo Dunkley", "Rolando Dünner", "Valentina Fanfani", "Max Fankhanel", "Gerrit Farren", "Simone Ferraro", "Rodrigo Freundt", "Brittany Fuzia", "Patricio A. Gallardo", "Xavier Garrido", "Jahmour Givans", "Vera Gluscevic", "Joseph E. Golec", "Yilun Guan", "Mark Halpern", "Dongwon Han", "Matthew Hasselfield", "Erin Healy", "Shawn Henderson", "Brandon Hensley", "Carlos Hervías-Caimapo", "Gene C. Hilton", "Matt Hilton", "Adam D. Hincks", "Renée Hložek", "Shuay-Pwu Patty Ho", "Zachary B. Huber", "Johannes Hubmayr", "Kevin M. Huffenberger", "John P. Hughes", "Kent Irwin", "Giovanni Isopi", "Hidde T. Jense", "Ben Keller", "Joshua Kim", "Kenda Knowles", "Brian J. Koopman", "Arthur Kosowsky", "Darby Kramer", "Aleksandra Kusiak", "Adrien La Posta", "Victoria Lakey", "Eunseong Lee", "Zack Li", "Yaqiong Li", "Michele Limon", "Martine Lokken", "Thibaut Louis", "Marius Lungu", "Niall MacCrann", "Amanda MacInnis", "Diego Maldonado", "Felipe Maldonado", "Maya Mallaby-Kay", "Gabriela A. Marques", "Joshiwa van Marrewijk", "Fiona McCarthy", "Jeff McMahon", "Yogesh Mehta", "Felipe Menanteau", "Kavilan Moodley", "Thomas W. Morris", "Tony Mroczkowski", "Sigurd Naess", "Toshiya Namikawa", "Federico Nati", "Laura Newburgh", "Andrina Nicola", "Michael D. Niemack", "Michael R. Nolta", "John Orlowski-Scherer", "Lyman A. Page", "Shivam Pandey", "Bruce Partridge", "Heather Prince", "Roberto Puddu", "Frank J. Qu", "Federico Radiconi", "Naomi Robertson", "Felipe Rojas", "Tai Sakuma", "Maria Salatino", "Emmanuel Schaan", "Benjamin L. Schmitt", "Neelima Sehgal", "Shabbir Shaikh", "Blake D. Sherwin", "Carlos Sierra", "Jon Sievers", "Cristóbal Sifón", "Sara Simon", "Rita Sonka", "David N. Spergel", "Suzanne T. Staggs", "Emilie Storer", "Eric R. Switzer", "Niklas Tampier", "Robert Thornton", "Hy Trac", "Jesse Treu", "Carole Tucker", "Joel Ullom", "Leila R. Vale", "Alexander Van Engelen", "Jeff Van Lanen", "Cristian Vargas", "Eve M. Vavagiakis", "Kasey Wagoner", "Yuhan Wang", "Lukas Wenzl", "Edward J. Wollack", "Zhilei Xu", "Fernando Zago", "Kaiwen Zheng" ]
astro-ph.CO
[ "astro-ph.CO" ]
http://arxiv.org/abs/2307.02468v1
20230705174351
An agile radio-frequency source using internal linear sweeps of a direct digital synthesizer
[ "Ethan Huegler", "Joshua C Hill", "David H Meyer" ]
physics.ins-det
[ "physics.ins-det" ]
Department of Computer Science, University of Maryland, College Park, MD 20742, USA DEVCOM Army Research Laboratory, 2800 Powder Mill Rd, Adelphi, MD 20783, USA david.h.meyer3.civ@army.mil DEVCOM Army Research Laboratory, 2800 Powder Mill Rd, Adelphi, MD 20783, USA Agile rf sources are a common requirement for control systems in quantum science and technology platforms. The direct digital synthesizer (DDS) often fills this role by allowing programmable control of the rf signals. Due to limitations of the DDS architecture, implementing an agile rf source requires rapid and precisely-timed programming of discrete updates that restrict the source's agility. Here, we describe a microcontoller-based interface that exploits the DDS's internal linear sweep accumulator to perform both sequential linear sweeps, and standard discrete updates, at the ∼10 µs scale. This allows updates to the swept parameter as fast as every 8 ns with greatly reduced communication and memory overhead. We demonstrate the utility of this system by using it as the reference to an optical phase-locked-loop to implement rapid, adjustable laser frequency sweeps in a Rydberg Electromagnetically Induced Transparency spectroscopy measurement. An agile radio-frequency source using internal linear sweeps of a direct digital synthesizer David H. Meyer August 1, 2023 ============================================================================================ § INTRODUCTION Quantum information science & technology requires agile radio-frequency (rf) sources to satisfy the many demands of quantum control: for direct application to quantum systems, as drivers for acousto-optic modulators (AOMs), electro-optic modulators (EOMs), or inputs to various phase-locked-loops (PLLs). These basic tools find applications in many different quantum platforms, including trapped ions,<cit.> neutral-atom Bose-Einstein condensates,<cit.>, superconducting qubits,<cit.> solid-state defect centers<cit.> and atomic clocks.<cit.> As control systems scale up to satisfy more challenging applications, the required agile rf sources also scale in number, making cost and availability important metrics along with their performance. Direct digital synthesizers (DDSs) are commonly used in these roles as they satisfy many of the desired characteristics.<cit.> For example, the Analog Devices AD9959<cit.> is a DDS that is widely used thanks to its four phase-synchronous outputs, where each has individually programmable amplitude, phase, and frequency.<cit.> Moreover, it incorporates an internal linear sweep capability that can dynamically vary an output parameter. These features allow for a highly flexible rf source that scales well with control system size. Agile waveforms often required in experimental systems are typically more difficult to implement with a DDS than other purpose-built technologies such as arbitrary waveform generators. The agility required (namely changes to rf amplitude, phase, and/or frequency on a timescale faster than relevant dynamics being investigated) must remain synchronous with a larger experimental control system. Because the DDS's local memory only stores the current and next instruction, changing the output rapidly and synchronously with a larger control system requires frequent, precise, fast communications. 
Current methods to solve this challenge use microcontollers or field-programmable-gate-arrays (FPGAs) to rapidly program the DDS outputs point-by-point from a stored memory of instructions based on external triggers.<cit.> This method allows for arbitrarily varying amplitude, frequency, and phase waveforms. However, it has limited time resolution because the finite communication speeds involved result in each update requiring ≳1 µs. Here, we demonstrate an alternative method of controlling an AD9959 DDS that circumvents this limitation, leading to a more agile rf source. Similar to the existing methods, we employ a microcontroller to program the DDS. However, instead of programming successive static outputs individually, we can also employ the linear sweep functionality of the DDS itself. By successively programming new linear sweeps, we can generate an agile waveform that is synchronous with external triggers. Thanks to reduced communication overhead to the DDS, this technique allows for much finer time resolution (down to ∼8 ns) when sweeping a single parameter. The technique maintains the ability to statically adjust all parameters at the DDS programming timescale (∼ 10 µs). Though this is insufficient for truly arbitrary waveform generation, it significantly broadens the range of applications to which the DDS can be applied. Furthermore, additional control can be obtained via augmentation with external voltage-controlled attenuators or phase shifters. This work describes the hardware and firmware necessary for implementing what we have dubbed the DDS Sweeper. We first provide a brief overview of DDS operational principles and limitations. We then introduce the hardware used to implement the DDS Sweeper followed by an overview of the custom microcontroller firmware that provides an interface between a computer and the DDS. We demonstrate the operation of the DDS Sweeper via simultaneous measurement of the amplitude, phase, and frequency of the outputs for various test waveform patterns. Finally, we provide an example of the DDS sweeper's utility by using it as the frequency reference to an optical PLL in a Rydberg Electromagnetically Induced Transparency (EIT) spectroscopy measurement. We use the sweeper to enable rapid successive optical frequency sweeps of varying rates within a single measurement. § DDS OPERATION A DDS produces an output through the use of a phase accumulator and functions as a programmable fractional-frequency divider of an externally provided reference clock frequency.<cit.> A phase accumulator stores two values, the current phase and the phase increment value. On each clock cycle (derived from the external reference), the phase increment is added to the current phase value. The phase and increment values are stored in 32 bit registers, and the operation depends on the modulo 2^32 overflowing when adding 32 bit integers. The current phase value is passed through a phase to amplitude converter (typically a sine lookup table) and that amplitude is used as the input for a digital to analog converter (DAC). The filtered output of the DAC will be a sine wave of the frequency which has been programmed into the DDS. To program a frequency into the DDS, that phase increment – or frequency tuning word (FTW) for the AD9959 – must be provided. The FTW can be calculated with FTW = f_out· 2^32/f_sys where f_out is the desired output frequency and f_sys is the system clock frequency of the DDS. 
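The accumulator arithmetic is easy to reproduce in a few lines. The sketch below computes a frequency tuning word from the formula above and simulates the 32-bit phase accumulator; the sine call idealizes the DDS's phase-to-amplitude lookup table and DAC, so it only approximates the real output path.

```python
import numpy as np

F_SYS = 500e6   # AD9959 system clock in Hz

def frequency_tuning_word(f_out, f_sys=F_SYS):
    """FTW = f_out * 2^32 / f_sys, rounded to the nearest 32-bit integer."""
    return int(round(f_out * 2**32 / f_sys)) & 0xFFFFFFFF

def dds_samples(ftw, n_samples, f_sys=F_SYS):
    """Phase accumulator model: add FTW once per clock cycle, modulo 2^32,
    then map phase to amplitude (idealized sine instead of a lookup table)."""
    phase = np.cumsum(np.full(n_samples, ftw, dtype=np.uint64)) % 2**32
    return np.sin(2 * np.pi * phase / 2**32)

ftw = frequency_tuning_word(100e6)          # request a 100 MHz output
samples = dds_samples(ftw, n_samples=1000)
# The realized frequency deviates from the request by at most f_sys / 2^32 (~0.116 Hz).
print(hex(ftw), ftw * F_SYS / 2**32)
```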
By nature of its operation, the DDS system clock frequency should be as high as possible to prevent discretization error. It should also have low phase noise to limit the rf output phase noise. For the AD9959, the maximum system clock frequency is 500 MHz, which can either be provided directly, or via a programmable PLL multiplier from a lower frequency at the expense of higher output phase noise. The DDS also has the ability to control the phase and amplitude. There is a 14-bit phase offset register which stores a phase offset word (POW) which is added to the phase accumulator value before the phase to amplitude converter, allowing offsets spanning 0 to 360^∘. POW can be found by POW = ϕ· 2^14/360 where ϕ is the desired phase offset in degrees. The amplitude is controlled by a 10-bit amplitude scale factor (ASF) which sets the ratio of maximum possible output current to the desired output current. It can be found with ASF = r_I · 2^10 where r_I is the desired ratio of output to maximum current. The AD9959 DDS has additional functionality for performing a linear sweep of the output frequency, phase, or amplitude through the use of a sweep accumulator. The sweep accumulator is made up of a current value register and an 8-bit ramp rate counter register. When linear sweep mode is active, the current value register is added to the FTW, POW, or ASF, depending on the parameter being swept. Fig. <ref> shows the four parameters that define such a sweep: start point, end point, sweep delta DW, and ramp rate RR. The start and end points set the limits of the sweep while the sweep delta and ramp rate control the magnitude and duration, respectively, for each update of the accumulator. The start point, end point, and sweep delta register values are all calculated identically to the tuning word of the parameter being swept (i.e. FTW, POW, or ASF). The ramp rate is an 8-bit integer that is defined as RR = Δ t f_sync, where f_sync is the DDS sync clock which runs at one quarter of the system clock. The AD9959 also allows for distinct values of sweep delta and ramp rate depending on the direction of the sweep where going from start to end points is defined as rising, the opposite direction as falling. A dedicated digital input (Profile Pin) controls the sweep direction. While in linear sweep mode, the DDS decrements the ramp rate counter register on every cycle of the sync clock. When the ramp rate counter reaches zero, the profile pin is checked. If the profile pin is logic high (low), the rising (falling) delta word DW is added to the current value register and the ramp rate counter register is reset to the rising (falling) ramp rate RR. Once the output of the phase accumulator and the sweep accumulator add up to either the start or end point the output is held constant. All of the control values – FTW, POW, ASF, and other configuration options – are stored in registers on the DDS which can be written to via a serial interface. The DDS is equipped with an input buffer which stores all the writes it receives over the serial interface until an IO update signal is received. Upon receiving an IO update signal (IOUD) all of the registers are updated at once with their new values. Using a microcontroller to quickly write to this buffer and trigger updates allows the DDS to produce an agile waveform. As mentioned in the Introduction, there are existing solutions which implement agile waveforms via rapid static updates (i.e. 
only changing FTW, POW, and ASF).<cit.> These solutions do not utilize the DDS's linear sweep capability, largely because this mode can linearly sweep only a single parameter and is more challenging to implement than static updates. However, it is still desirable to use the native linear sweeps of the DDS in many applications. Figure <ref> highlights the differences between a static update sweep (orange) and a native linear sweep (blue) when generating a linearly ramping frequency that then resets to its initial value. Using the native linear sweep, the DDS is only programmed twice (denoted by the corresponding IOUD trigger pulses), once for the sweep and once for the reset. The static update sweep requires programming of the DDS at each step. Since the programming time is finite, the sweep itself is discretized. By using the native linear sweep capability of the DDS, we can obtain higher quality linear sweeps with significantly lowered communication overhead between the microcontroller and the DDS itself. Furthermore, since communication rates between a host computer and the microcontroller are often limited as well, the large table of pre-programmed values necessary for the static update sweep leads to long programming times (∼ 1 s). This hinders rapid iteration of sweep parameters. The reduced overhead of native linear sweeps can therefore improve overall experiment control sequence programming time. § HARDWARE The Sweeper is implemented using an Analog Devices 9959 (AD9959) evaluation board, and a Raspberry Pi Pico microcontroller to interface with a control system,<cit.> as seen in Fig. <ref>. The microcontroller is soldered to a custom interfacing PCB which provides the necessary routing and slots in the header pins of the DDS evaluation board. The DDS evaluation board also requires 3.3 V and 1.8 V power inputs, which the microcontroller can supply with the help of a linear regulator. The DDS outputs are programmed by sending serial commands to the microcontroller via a USB interface. The microcontroller interprets these commands and programs the appropriate registers of the DDS via a Serial Peripheral Interface (SPI) at 62.5 Mbits/s.<cit.> The Raspberry Pi Pico was chosen as the microcontroller for its Direct Memory Access (DMA) channels and Programmable Input Output (PIO) cores. DMA channels are dedicated hardware for moving memory without utilizing the main core. We use the DMA to send the instruction table stored in memory to the SPI controller as quickly as possible while allowing the main core to simultaneously prepare the next instruction. The PIO cores are independent processor cores with limited functionality but direct access to the GPIO pins. The Pico's PIO cores allow the Sweeper to precisely time multiple aspects of its operation, including: programming of the registers via an SPI interface, the IO Update (IOUD) triggers, and the profile pins that control sweep directions. In the default configuration, the microcontroller also provides its 125 MHz system clock to the DDS as a reference clock. With the default PLL multiplier of 4 this gives the DDS a system clock of 500 MHz, which is the maximum system clock supported by the AD9959. The Sweeper can also be configured to allow an external reference clock for the DDS. § FIRMWARE The Sweeper operates in two modes: manual mode or buffered execution mode. 
Buffered mode has three sub-modes: single steps, linear sweeps, or a combination of both In manual mode, instructions sent to the Sweeper over the USB interface update the outputs of the DDS in real time. In buffered execution mode, the Sweeper accepts a sequence of instructions which it stores in memory. At a later point the Sweeper can recall the sequence and successively program the instructions into the DDS. To keep the outputs synchronized with a larger system, the buffered execution process can be triggered from an external source for each step of the sequence, or the microcontroller can time itself based on the Pico's clock. The microcontroller writes each instruction of a sequence to the input buffer of the DDS immediately after the previous IO update has completed. When the time for updated output arrives, the IO Update signal is sent to the DDS and the next instruction is written to the input buffer of the DDS. This minimizes the delay between triggers and output updates. When receiving external trigger signals, the Sweeper has a pipeline delay of 4 clock cycles, at 125 MHz that will be 32 ns ± 8 ns, since the microcontroller buffers GPIO inputs to occur on clock cycle edges. A buffered sequence can be programmed in one of three ways: as single steps, a sequence of linear sweeps, or a combination of both. Single stepping allows discrete changes to frequency, amplitude, and phase parameters simultaneously, replicating the mode of operation that others have implemented. The sweep and combination modes utilize the DDS linear sweep functionality, but require setting more registers of the DDS and therefore require a longer dwell period in between instructions to send all the required bytes, as seen in Tab. <ref>. Similarly, the instructions for sweep and combined modes are longer so fewer of them can be stored in memory. Instructions are only stored for the channels being utilized, and the maximum number of instructions for each mode of operation can be seen in Tab. <ref>. Once the sequence is programmed, sending a start command to the Sweeper will begin sending the instructions stored in memory. Since system memory will not persist through a power cycle, there is functionality to store and recover instruction sequences from non-volatile storage on the microcontroller. Users send instructions over USB to the microcontroller with the desired outputs in units of Hertz, Degrees, and percentages. The microcontroller calculates the tuning words from those values and translates them into the expected bit alignment for the DDS. The tuning resolutions for frequency, phase, and amplitude are 0.022^∘, 0.116 Hz, and 0.1% of maximum output current, respectively. If it is desired for the Sweeper to time itself, an additional parameter can be sent with the number of clock cycles the instruction should take. At runtime, these wait lengths are sent to a PIO core through DMA to handle the timing, based on the Prawnblaster Psuedoclock project.<cit.> §.§ Linear Sweep Control Performing arbitrary sweeps on the DDS as part of a sequence requires great consideration of the internal operational details of the AD9959 DDS. A reliable rising sweep depends upon the sweep accumulator being at zero when the sweep begins. If one sweep follows another, the sweep accumulator will not generally be at zero. The AD9959 does have an “autoclear sweep accumulator” functionality, which, when enabled, will reset the sweep accumulator to zero upon an IO Update signal. This does allow for running successive upward sweeps. 
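The role of the autoclear feature is easiest to see with a toy model of the sweep accumulator. The snippet below tracks only the accumulator value, ignoring the ramp-rate counter timing and register widths, so it is a conceptual illustration of the behaviour described above rather than a model of the silicon.

```python
def run_sweep(acc, delta, steps, span, profile_high=True):
    """Toy sweep accumulator: add (profile pin high) or subtract (pin low) the
    sweep delta on each ramp-rate tick, clamped to the range [0, span]."""
    for _ in range(steps):
        acc = min(acc + delta, span) if profile_high else max(acc - delta, 0)
    return acc

SPAN = 1000                                   # end point minus start point

acc = run_sweep(0, delta=10, steps=200, span=SPAN)           # rising sweep
print(acc)   # 1000: the accumulator is left "full" once the sweep finishes

# Without autoclear, the next rising sweep would start from this full value and
# produce no ramp; autoclear zeroes the accumulator on the next IO update, so
# starting again from 0 yields another complete rising sweep.
print(run_sweep(0, delta=10, steps=200, span=SPAN))          # 1000 again

# Driving the profile pin low drains the accumulator back toward the start
# point, which is the basis of the falling-sweep handling discussed next.
print(run_sweep(acc, delta=10, steps=200, span=SPAN, profile_high=False))   # 0
```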
The AD9959 implementation of linear sweeps does not inherently support arbitrary falling sweeps. This is the because the sweep accumulator is an unsigned register that is always interpreted as positive and therefore added to the start point value: there is no way to subtract the sweep accumulation from the start point. Since the sweep delta is not applied if the current value is greater than the end point, the combined result is if the start point is larger than the end point the frequency output will remain constant at the start point value. To run a falling sweep the start point must be programmed as the lowest output desired from the sweep, and the end point programmed as the highest output desired from the sweep. Then the sweep accumulator must first be filled up by a rising sweep so that when the profile pin is set to low the falling sweep delta can be subtracted from the sweep accumulator until the sweep accumulator holds a value of zero and the DDS output is equal to the value programmed into start point. Since an arbitrary sequence cannot guarantee that all falling sweeps will be preceded by a rising sweep, we implement a hidden rising sweep before a falling sweep. The effect of these hidden sweeps is minimized by setting the rising sweep delta to its maximum value, then the sweep accumulator will be filled up by just one instance of the rising sweep delta being applied. If the rising ramp rate is set to 1, then it will only take one cycle of the sync clock (4 cycles of the system clock) to fill up the sweep accumulator (with the default 500 MHz system clock this is 8 ns). This works with the limitation that the falling ramp rate must be set to 1 to match the rising ramp rate that filled the accumulator. This limitation was determined empirically as the AD9959 behavior in this circumstance is not well documented. With this limitation in mind, the minimum sweep rates differ between rising and falling sweeps. The maximum frequency sweep rate is ±62.5 GHz/s while the minimum sweep rates are 57 kHz/s and -14.5 MHz/s. For the amplitude scale, the maximum rate is ±100%/8 ns with minimum rates of 100%/2.1 ms and -100%/8.2 µs. The phase maximum rate is ±360^∘/2.1 µs with minimum rates of 360^∘/33.43 ms and -360^∘/133.3 µs. An alternative solution to running downward sweeps is to turn the autoclear sweep accumulator functionality off for downward sweeps. Somehow this allows successive downward sweeps, but it makes the range of downward sweeps dependent on preceding upward sweep. If an upward sweep only fills the sweep accumulator half way, the next downward sweep will not be able to drain more than the half of the sweep accumulator, even if it was programmed to sweep over the full range of the accumulator. § EXAMPLE OUTPUTS To confirm functionality of the sweeper, we implement independent measures of the amplitude, frequency, and phase of the output(s). Due to the short timescales of the dynamic features of the DDS Sweeper, these measurements must be able to resolve similar timescales. To measure the amplitude, we use an oscilloscope with a 1 GHz bandwidth. The relative phase between two channels of the DDS (one serving as a fixed reference) is measured using a phase frequency detector provided by a HMC439 evaluation board.<cit.> The outputs of the detector are recorded by the same oscilloscope. The frequency is measured using a delayed self-homodyne measurement. The output of the recombining mixer is low-pass filtered and measured directly by the same oscilloscope. 
Delay line length and operating frequency shown in the plots were chosen to ensure the homodyne output was centered within the linear regime. By appropriate power splitting and amplification, all three measurements could be performed simultaneously on a single output. In Fig. <ref> we show demonstrative successive sweeps of the individual parameters. Each sub-figure represents a single sweep on a distinct parameter. The sub-figures also show the update pulses that mark when the microcontroller sent a new instruction to the DDS. For all of these sequences, the DDS Sweeper uses its own internal timing to determine the start of each instruction. Fig. <ref>(a) shows a measurement of a constant 100 MHz output from the DDS as the amplitude scale factor is swept. In Fig. <ref>(b), two outputs of the DDS are kept at a constant 100 MHz. The output of channel 0 is held at a constant phase offset while the channel 1 output is run through a sequence of phase changes spanning 0 to 2π. Fig. <ref>(c) shows a measurement of the output frequency. Note that this sequence includes successive upward and downward linear sweeps with different sweep rates. These sequences were chosen to have a mixture of linear sweeps and discrete jumps of the varying parameter in order to demonstrate the flexibility of the DDS Sweeper. Fig. <ref> shows the Sweeper operating in Sweep and Step mode, where a single parameter can employ linear sweeps and the other parameter can be discretely changed at each update. Here the frequency is the parameter being swept (c) while amplitude (a) and phase offset (b) are stepped simultaneously. Part (d) shows the the update pulses from the microcontroller. This dynamic waveform only sends 9 instructions to the DDS, greatly reducing communication overhead. In its default configuration, the DDS Sweeper derives the DDS reference clock from the Pico's on-board crystal oscillator. Because this reference is used directly to produce the outputs, it's phase noise will have a strong impact on the phase noise of the outputs. Using a Berkeley Nucleonics 7340 phase noise tester,<cit.> we measured the phase noise of the DDS when clocked directly from the Pico at three frequencies: 100.3, 75.1, and 40.1 MHz, as shown in Fig. <ref>. The overall noise floor is approximately 20 dB higher than the minimum noise of the DDS (when using the on-board PLL and multiplier) and the large peak around 200 kHz from the PLL is more pronounced. This is to be expected as the Pico's crystal oscillator is not designed to be highly performant. If improved phase noise is required for a given application, the DDS Sweeper can be configured to allow for an externally-provided reference of the DDS. § EXAMPLE USAGE Finally, we demonstrate the sweeper's utility by performing a rapid series of spectroscopy scans of varying duration. This is accomplished by using a channel from the Sweeper as the reference for an optical Phase-Locked-Loop. This method of stabilization compares the rf beatnote between two lasers. A first (reference) laser is independently stabilized to atomic spectroscopy. The second (controlled) laser receives feedback from the optical PLL to stabilize the beatnote to the frequency set by the rf reference (DDS sweeper). As the rf reference is swept, the controlled laser will be swept. Using an agile rf reference allows for agile, precise changes in the control laser's frequency. 
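Before looking at the demonstrative waveforms, it is worth noting that the minimum sweep rates quoted in the previous section follow directly from the register arithmetic. The helper below picks a sweep delta and ramp rate for a requested frequency sweep rate, forcing RR = 1 for falling sweeps as the hidden-rising-sweep workaround requires; it is a back-of-the-envelope sketch, not the firmware's exact parameter selection.

```python
F_SYS = 500e6
F_SYNC = F_SYS / 4            # the sweep logic is clocked at one quarter of f_sys
FREQ_LSB = F_SYS / 2**32      # smallest frequency step (~0.116 Hz)

def sweep_parameters(rate_hz_per_s, falling=False):
    """Approximate a desired |sweep rate| with a (delta word, ramp rate) pair.

    The achieved rate is DW * FREQ_LSB * F_SYNC / RR. Falling sweeps must use
    RR = 1 (see the preceding section), so only DW is free; rising sweeps may
    stretch RR up to 255 to slow down.
    """
    rr = 1 if falling else min(255, max(1, int(FREQ_LSB * F_SYNC / rate_hz_per_s)))
    dw = max(1, int(round(rate_hz_per_s * rr / F_SYNC / FREQ_LSB)))
    achieved = dw * FREQ_LSB * F_SYNC / rr
    return dw, rr, achieved

# Slowest possible sweeps (one LSB per update) reproduce the quoted minima:
print(FREQ_LSB * F_SYNC / 255)   # ~5.7e4  Hz/s  -> minimum rising rate (~57 kHz/s)
print(FREQ_LSB * F_SYNC / 1)     # ~1.45e7 Hz/s  -> minimum falling rate (~14.5 MHz/s)

# Example: parameters for a 1 GHz/s sweep in each direction.
print(sweep_parameters(1e9))
print(sweep_parameters(1e9, falling=True))
```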
Our demonstration measures Rydberg Electromagnetically-Induced-Transparency (EIT) spectroscopy, using the experimental apparatus and technique described in Ref.  meyer_assessment_2020. This measurement involves a 780 nm probe laser and a 480 nm coupling laser that counter-propagate through a room-temperature vapor cell filled with natural isotopic abundance rubidium. The probe laser nominally couples the |5S_1/2⟩ ground state with the ^85Rb |5P_3/2⟩ first excited state. Its frequency difference (detuning) relative to the non-Doppler-shifted atomic resonance at δ_p=0 is controlled via the optical PLL described previously. The coupling laser couples |5P_3/2⟩ to the |56D_5/2⟩ Rydberg state and is kept resonant with this transition. When both lasers are resonant, EIT is established between the ground and Rydberg state, leading to reduced atomic absorption of the probe light. See Figure <ref>(a) for the level diagram. By using an optical homodyne measurement of the transmitted probe light, we measure the corresponding phase shift of the probe field. The photodiode output is recorded using a 50 Ω terminated oscilloscope. The goal of this demonstration is to empirically determine, in single continuous measurements, how quickly the probe laser could be swept through resonance without introducing errors or distortion to the dispersive EIT signal. To this end, we program the Sweeper to linearly sweep in frequency between two fixed points at variable rate, reset to the sweep start point, then dwell for 1 ms to allow the probe laser frequency to settle before the next sweep. Figure <ref>(c) shows a single output spectroscopic signal trace as the sweep rate was increased from 100 µs to 600 ms at multiples of 1, 3, and 6. The lightly shaded regions denote the 1 ms reset window between each sweep where the reverse EIT signal can be seen as the optical PLL moves the probe frequency back to the sweep start at a finite rate due to the instantaneous frequency change. From this measurement, it is clear that the critical sweep rate for which slower sweeps do not distort the signal is between 100 and 300 µs. A second timetrace is shown in Figure <ref>(d), with sweep times spanning 250 to 300 µs in steps of 10 µs. These sweeps show the critical sweep rate corresponds to approximately 270 µs. Figure <ref>(b) shows the sweeps with 600, 30, 1, 0.3, 0.27, and 0.25 ms periods versus probe detuning δ_p. The apparent wider trace widths as the sweep time is increased is an artifact of each sweep being sampled at the same rate, which makes the longer sweeps more susceptible to electronic noise. However, there is also evident a slight broadening of the dispersive feature as the sweep time is increased, attributable to systematic errors in the measurement at longer timescales (such as finite laser frequency stability or varying background electric fields). As the sweep time is decreased, the optical PLL bandwidth is reached. This results in the EIT feature dragging away from resonance and observable local ringing in the frequency, as seen in the 270 and 250 µs sweep traces of Figure <ref>(b). § CONCLUSION We have described a custom firmware for the Raspberry Pi Pico microcontroller that can control an Analog Devices AD9959 four-channel DDS. By exploiting the inherent linear sweep accumulators of the DDS, our implementation (the Sweeper) achieves agile rf outputs and fast updates that are synchronous with an external control system. We also demonstrated various types of waveforms that can be produced. 
Moreover, by referencing an optical phase-locked loop to the Sweeper, we highlighted an example application by performing spectroscopic sweeps of a Rydberg EIT signal. The Sweeper is capable of filling an important role in the many applications where simple, agile rf sources are required. Quantum science experiments are a noteworthy example, where demands for increased scale could make it advantageous compared to contemporary alternatives. Beyond its performance, the device consists of readily available, cost-effective components. These features make the Sweeper a candidate for replacing or supplementing more expensive rf sources used in many experiments. There are multiple potential means of enhancing the DDS Sweeper firmware. The Sweeper currently uses the standard SPI protocol for communications between the Pico and the AD9959. This protocol could be modified to use a custom multi-channel SPI version (supported by the DDS), allowing up to a factor-of-four decrease in the time required to program the DDS at each update. The serial interface between the control computer and the Pico could also be improved by using a more efficient character encoding than the standard ASCII we have implemented. This could decrease the time required to send instructions to the DDS Sweeper by approximately a factor of two. Finally, the flash memory of the Pico could be leveraged to increase the total number of instructions that can be programmed in a single sequence. We wish to acknowledge Kevin Cox for helpful discussions. EH acknowledges financial support from the National Security Scholars Summer Internship Program (NSSSIP). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. §.§ Author Contributions E. Huegler: conceptualization (supporting); software (lead); investigation (lead); visualization (equal); writing – original draft (equal); writing – review and editing (equal) J. C. Hill: investigation (supporting); writing – review and editing (equal) D. H. Meyer: conceptualization (lead); investigation (supporting); visualization (equal); writing – original draft (equal); writing – review and editing (equal) § DATA AVAILABILITY STATEMENT The data presented are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2307.00933v1
20230703111542
Data-Driven Information Extraction and Enrichment of Molecular Profiling Data for Cancer Cell Lines
[ "Ellery Smith", "Rahel Paloots", "Dimitris Giagkos", "Michael Baudis", "Kurt Stockinger" ]
cs.CL
[ "cs.CL", "cs.CE", "cs.DB" ]
Data-Driven Information Extraction and Enrichment of Molecular Profiling Data for Cancer Cell Lines ^1Zurich University of Applied Sciences, Switzerland, ^2University of Zurich, Switzerland, ^3Swiss Institute of Bioinformatics, Switzerland, ^4Infili Technologies, Greece Motivation: With the proliferation of research means and computational methodologies, published biomedical literature is growing exponentially in number and volume (). As a consequence, in the fields of biological, medical and clinical research, domain experts have to sift through massive amounts of scientific text to find relevant information. However, this process is extremely tedious and slow when performed by humans. Hence, novel computational information extraction and correlation mechanisms are required to boost meaningful knowledge extraction. Results: In this work, we present the design, implementation and application of a novel data extraction and exploration system. This system extracts deep semantic relations between textual entities from scientific literature to enrich existing structured clinical data in the domain of cancer cell lines. We introduce a new public data exploration portal, which enables automatic linking of genomic copy number variant plots with ranked, related entities such as affected genes. Each relation is accompanied by literature-derived evidence, allowing for deep, yet rapid, literature search, using existing structured data as a springboard. Availability and Implementation: Our system is publicly available on the web at <https://cancercelllines.org>. Contact: The authors can be contacted at ellery.smith@zhaw.ch or rahel.paloots@uzh.ch. Ellery Smith^∗^1, Rahel Paloots^∗^2,3, Dimitris Giagkos^4, Michael Baudis^2,3, Kurt Stockinger^1 August 1, 2023 ^∗To whom correspondence should be addressed. § INTRODUCTION Cancer research is one of the most challenging and promising biomedical areas, as reflected in the amount of attention it receives (). Cancer cell lines are important models for the study of cancer-related pathophysiological mechanisms as well as for pharmacological development and testing procedures. Cell lines are obtained from patient-derived malignant tissue and are cultivated in vitro, potentially in an "immortal" way. Cancer cell lines are assumed to retain most of the genetic properties of the originating cancer (), including genomic modifications that are characteristic of the respective disease's pathology and are absent in normal tissues. A class of mutations ubiquitous in primary tumors and derived cell lines is genomic copy number variants (CNVs), which represent structural genome variations in which genomic segments of varying sizes have been duplicated or deleted from one or both alleles. The set of CNVs observed in a given tumor (“CNV profile”) frequently includes one or multiple changes characteristic of a given tumor type. For instance, while many colorectal carcinomas display duplications of chromosome 13 (), neuroepithelial tumors frequently show small, often bi-allelic deletions involving the CDKN2A gene locus on the short arm of chromosome 9 (). Recurring CNV events are thought to be driven by their selective advantage for cancer cells, i.e.
recurrently duplicated regions predominately will affect genes favorable for a clonal expansion (“oncogenes”) and, conversely, deleted regions will frequently contain growth-limiting (“tumor-suppressor”) genes (). The collection and comparative analysis of cancer and cancer cell line CNV data is important for the understanding of disease mechanisms as well as the discovery of potential therapeutics. Progenetix () is a knowledge resource for oncogenomic variants, mainly focusing on representing cancer CNVs. A recent spin-off from the Progenetix resource is cancercelllines.org - a database dedicated to genomic variations in cancer cell lines. In addition to CNVs, cancercelllines.org also includes information about sequence variations such as single nucleotide variants (SNVs), assembled from the aggregation of genomic analysis data of cell line instances. Currently over 16,000 cell lines from over 400 different cancer diagnoses are represented in this resource. Natural language processing (NLP) has proven to be a game-changer in the field of clinical information processing for attaining pivotal knowledge in the healthcare domain (). In fact, numerous studies have been undertaken in exploring indirect relations between drugs, diseases, proteins and genes from unstructured text provided in literature resources. One among many is (), where the authors systematically design an NLP pipeline for drug re-purposing via evidence extraction from PubMed abstracts. Even though such studies exhibit some promising performance, neither ground truth is considered for further relevance evaluation of discovered drug-cancer therapeutic associations, nor visualization of results is provided. Additionally, SimText (), a text mining toolset built for visualization of similarities among biomedical entities, manages to extract and display knowledge interconnections from user-selected literature text. However, no quantitative metrics were presented for evaluating the efficiency of the utilized NLP methods. In this paper we study how to use state-of-the-art information extraction algorithms such as LILLIE () to identify known mutated genes and find out which genes are most likely affected in certain CNV regions. As a result, we introduce a novel data exploration system, allowing for the dynamic visualization and exploration of previously orthogonal data models by extracting and enriching information from both structured and unstructured data. The user will be able to visualize gene information extracted by our algorithm on the CNV profiles of cancer cell lines. The source code for our system is available to the public on GitHub[<https://github.com/progenetix/cancercelllines-web>]. § METHODS AND MATERIALS §.§ Proposed Method In this paper we propose a novel end-to-end methodology that combines information extracted from unstructured text (i.e., publication abstracts from PubMed) with structured knowledge resources (i.e., Progenetix and cancercelllines.org) in order to construct an interface for exploratory analysis of positionally mapped genomic variations based on literature evidence. Our work mainly consists of two parts: i. fine-tuning LILLIE (), a state-of-the-art information extraction tool, in the cancer cell lines context and ii. development of a portal that serves as the interface for linking various genomic CNV findings with evidence extracted from literature text. More specifically, we use cell lines as a jumping-off point to provide our literature extraction results. 
For each cell line, we visualize a corresponding CNV plot, which is annotated by selected extracted genes, and a categorized, ranked list of related entities, as shown in Figure <ref> and on the results page of our system[<https://cancercelllines.org/cellline/?id=cellosaurus:CVCL_0312>]. We provide the most relevant evidence for the given result alongside the title of each paper, allowing the user to easily check the validity of the result, and a toggle to expand each result, revealing the full annotated abstract text, as shown in Figure <ref>. §.§ Information Extraction from Unstructured Text While there are many existing systems which focus on either the topic of biomedical text extraction () or the creation of knowledge graphs from text (), the main challenge of our approach was to merge these two concepts with an existing structured database, such that both can be explored in parallel, and provide complementary information in a streamlined fashion. Rather than using a known benchmarking dataset for either information extraction or knowledge graph creation, as for example explored in , we designed our system using an existing live knowledge base, with a focus on pragmatic data exploration of real-world data, rather than test-set performance. In biomedical text extraction, particularly when the domain is narrow, rule-based and dictionary-based approaches for entity-relation extraction have been shown to give comparable performance to learning-based methods (). As such, we use an enhanced version of the LILLIE triple extractor (), where the learning-based component has been removed, and the rules have been fine-tuned for this specific use case. The result is an automatic information extractor with high precision and significant performance increase over equivalent methods (see Section <ref>). §.§.§ Triple Extraction Rather than other recent work, which focuses on recognizing a predecided set of relations () we use the open information extraction paradigm () to extract any potential relationship between entities in the text, namely, as natural language subject-predicate-object triples. The use of this model allows a researcher to explore richer and more descriptive relations between entities than if they were mapped to discrete categories, and takes into account the fact that relationships in oncogenomics are often complex and subtle. Thus, open information extraction methods, coupled with domain expertise, was determined to be optimal for this use case. The format of these triples is described in detail in . We firstly run the LILLIE system on the abstracts of all research articles in the Progenetix corpus, then we use dictionary-based methods () to match the subject and object with their corresponding entities in a custom ontology. An sample of this ontology is shown in Figure <ref>, which depicts a portion of the resultant subgraph for the cell line HeLa, derived from Cellosaurus. We used the following data sources to construct the graph ontology (graph metrics are shown in Section <ref>): * The cancer section of the NCIt thesaurus () * The UBERON anatomical ontology () * The Cellosaurus cell-line index () * Cytogenetic mapping information from Progenetix () * The HUGO gene nomenclature () We then place these triples in a graph database, as shown in Figure <ref>. Unlike in other works on biomedical knowledge graph building (), we do not infer any relations using the graph itself. Only relations directly implied by the text are present in the graph, as shown in Figure <ref>. 
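As an illustration of the dictionary-based matching step described above, the sketch below grounds the subject and object of an extracted triple in a small synonym lexicon. It is a simplification rather than the deployed implementation; the lexicon contents and normalization rules are placeholder assumptions.

```python
# Illustrative sketch of dictionary-based entity linking (not the actual
# implementation); the tiny synonym lexicon below is a placeholder.
import re

SYNONYM_LEXICON = {
    "hela": ("cellosaurus:CVCL_0030", "cell_line"),
    "detroit 562": ("cellosaurus:CVCL_1171", "cell_line"),
    "egfr": ("HGNC:3236", "gene"),
    "tp53": ("HGNC:11998", "gene"),
}

def normalize(mention: str) -> str:
    """Lowercase and collapse whitespace/hyphens for lexicon lookup."""
    return re.sub(r"[\s\-_]+", " ", mention.lower()).strip()

def link_triple(subj: str, pred: str, obj: str):
    """Map the subject and object of an extracted triple onto ontology IDs."""
    s = SYNONYM_LEXICON.get(normalize(subj))
    o = SYNONYM_LEXICON.get(normalize(obj))
    if s and o:
        return {"subject": s, "predicate": pred, "object": o}
    return None  # discard triples that cannot be grounded in the ontology

print(link_triple("HeLa", "overexpresses", "EGFR"))
```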
By contrast, other approaches attempt to synthesize whether a relationship exists between two nodes based on existing relationships, using a knowledge graph embedding approach. Our aim here is to provide a link between existing evidence (between natural language and structured data), rather than to synthesize new knowledge using machine learning methods. §.§.§ Pair Extraction While triple extraction can expose deep semantic relations between entities, this approach does not necessarily provide a complete representation of all relationships within the text, as it only extracts predicates that are directly expressed as singular verb phrases. An example of a strong sentential relationship extracted by triples is shown in the abstract in Figure <ref>, whereas long-distance relationships, as shown in Figure <ref>, are not currently reliably extractable using similar methods. This is a known shortcoming of current information extraction techniques, and recent efforts such as BioRED () have attempted to mitigate this deficiency by providing a corpus of long-distance relations that may span an entire document. However, the BioRED corpus is limited both in the number of relationship annotations and in the fact that no specific annotations for cell lines are provided. As such, we augment our high-precision triples with an additional high-recall method to capture long-distance relations, using simple information retrieval techniques such as term distance. This method can perform well on small text documents such as abstracts, as shown in Figure <ref>, where a complex relationship between Detroit 562 and TP53 is extracted using simple term-distance metrics, but is given a lower weighting than a triple-based relationship because it constitutes a weaker inference. This is similar to a question-answering task on small text documents, and metrics such as this have been shown () to give comparable or superior performance to more complex semantic analysis methods, even on more involved relationships. Naturally, if two entities are present in the same textual snippet, they are likely related in some manner, though this is not easily represented in the standard subject-predicate-object model. As such, we augment our triple extraction with what we term pair extraction, where we extract subject-object pairs but leave the relation expressed as a simple numerical quantity. We combine these pair extractions with the triples using a simple linear weighting system to produce a more representative ranking in the final output. Examples of the different extraction methods can be seen by comparing Figures <ref> and <ref>: the cell line HeLa is explicitly linked with EGFR through a triple relation in Figure <ref>, showing strong evidence, while a weaker long-distance relationship exists in Figure <ref> between Detroit 562 and TP53. State-of-the-art information extraction techniques are not capable of finding such a link; but by highlighting a potential relationship through an interface, a researcher can make a judgement on it or use it as a jumping-off point for discovering potentially new information. § RESULTS In this section we will first apply our information extraction system to analyze various cancer types. Afterwards, we will evaluate the performance of our automatic information extraction pipeline.
In particular, we want to address the following two research questions: * Research question 1: How well does our information extraction pipeline work for studying cancer cell lines and for exploring potentially new information? * Research question 2: What is the performance of our automatic information extraction algorithm for combining structured and unstructured data, i.e., a database of cancer cell lines and research abstracts from PubMed? §.§ Example Use Cases To validate the efficacy of our approach, we analyzed the results of our novel information extraction pipeline and how the extracted data correspond to cell line CNV profiles. We will now illustrate how to analyze two different cancer types using our approach with the help of two example use cases. §.§.§ Head and Neck Squamous Cell Carcinomas - Cell Line Detroit 562 Figure <ref> depicts the CNV profile for Detroit 562 - a pharyngeal squamous cell carcinoma cell line (NCIT code C102872). Pharyngeal squamous cell carcinoma is part of the head and neck squamous cell carcinomas and is often related to smoking. The results of our information extraction pipeline for genes AURKA and WEE1 indicate that these genes are highly expressed and down-regulated[These results can be reconstructed here: <https://cancercelllines.org/cellline/?id=cellosaurus:CVCL_1171>], respectively, in cancers, see (). This information is confirmed on the CNV profile, where AURKA is duplicated and WEE1 is deleted. Similarly, the MYC gene is brought forward as a possible target due to high expression, and the region is duplicated on the CNV profile as well. Figure <ref> also indicates TP53, a tumor-suppressor gene involved in the control of cell division, located on the short arm of chromosome 17. Due to its inhibitory role on cellular expansion, it is a frequent target of genomic deletions in a variety of cancers. However, TP53 can also acquire gain-of-function mutations that contribute, e.g., to radio-resistance, thus explaining the duplication in this region in the case of a mutant allele (). Conversely, NGF - a gene that is reported to be expressed in Detroit 562 - exhibits allelic deletion in our CNV data (), pointing towards alternative mechanisms responsible for its transcriptional activation. §.§.§ Breast Carcinomas - Cell Line MDA-MB-453 Breast cancer is the most common cancer type in women, affecting more than 250,000 women in the US alone, see (). In breast cancer, several clinico-pathological parameters have been recognized. One of the rare but clinically especially aggressive variants is the “triple-negative” subtype, i.e., where the tumor cells do not express the three receptors commonly targeted in hormonal and immunotherapy: the estrogen receptor, the progesterone receptor and the ERBB2 (HER2) receptor. Cell line MDA-MB-453 is a breast cancer cell line commonly used to represent the triple-negative expression profile[<https://www.cellosaurus.org/CVCL_0418>]. However, using our information extraction pipeline we could match this cell line to a publication that claimed it expresses ERBB2[These results can be reconstructed here: <https://cancercelllines.org/cellline/?id=cellosaurus:CVCL_0419>], see (). Indeed, in our CNV data from 16 instances of MDA-MB-453 we can observe genomic duplications involving the ERBB2 locus on 17q (see Figure <ref>). While another paper claims PTEN to be expressed in MDA-MB-453 (), the CNV profile does not indicate a genomic duplication event as causative, therefore pointing to transcriptional de-regulation.
We also matched this cell line to 2 papers where mutation in KRAS was confirmed by our SNV data. Moreover, the expression of PIK3CA was confirmed by the duplication on the CNV profile as well as the mutation of the same gene was detected in the SNV data, see (). Thus, to answer Research Question 1: We show here that our novel information extraction feature facilitates further research into cancer cell lines. We were able to prove some known gene expression levels for cell lines Detroit 562 and for MDA-MB-453. Moreover, we could discover some new or conflicting information about some other genes. More insights about how to reconstruct the exploration of these use cases with our system can be found at <https://docs.cancercelllines.org/literature-data/>. §.§ Information Extraction After applying our information extraction pipeline for studying various cancer types, we will now evaluate the performance in terms of accuracy and processing time of our system. §.§.§ Data Exploration The Progenetix and cancercelllines.org resources provide PubMed identifiers for articles with a direct relation to genomic analyses in cancer cell lines. Crawling the PubMed database from these identifiers resulted in a corpus of 52,412 textual abstracts, which were used by our system to generate our graph database. As shown in Table <ref>, we find 770,230 total entity matches, leading to a total of 12,139 distinct nodes in our graph. §.§.§ Information Extraction Performance We evaluated our system on the BioRED NER benchmark () to gain an approximate idea of the performance of our system. While the BioRED evaluation metrics differ somewhat to the task we are performing (since we do not include entity types such as species or chemical, and our entity spans differ to their model), we were able to evaluate the match accuracy on a per-paper level. In this case, our system achieves an accuracy of 91.8% on the test set when identifying whether genes and cell lines are relevant to a paper, which is comparable to 93.5% for PubMedBERT, as reported by . However, PubMedBERT was trained specifically on the BioRED corpus, unlike ours. Currently, no exact benchmark exists for extracting and weighting relations between genes and cell lines; however, we provide instead a qualitative analysis in Section <ref>, demonstrating the usage of our system on real-world data to discover new knowledge. For this work, we used an enhanced version of the LILLIE triple extractor system, tailored for merging natural language and structured data. The primary improvement was in customizing the parameters and modifying the rule table of the rule-based component to suit the linguistic patterns used to describe cell lines, genes and cytobands, and in trimming down our system to only the high-precision rule-based component. As shown in , this increases the precision of the extracted triples, and removes the need for additional training data and reduces the human effort in developing a custom validation set. By leveraging the flexibility of our rule-based component, we could adjust the precision-recall balance directly, based on qualitative output, to produce the desired results, without the need for costly and open-domain deep-learning approaches. These modifications demonstrate the strength and flexibility of the LILLIE extraction approach, where the output could be adjusted easily on-the-fly based on the current use-case requirements. 
The average end-to-end time taken to add new abstracts to our database is 3.56 seconds per paper, measured on a system with the following specifications: Intel Xeon W-11855M CPU @ 3.20GHz, 64GB RAM, NVIDIA RTX A4000 GPU. Our system includes a mechanism for adding new entries to the database from a provided list of PubMed IDs, and this time includes crawling, information extraction, pair indexing and database construction (as described in Section <ref>). We believe that these results, in combination with Table <ref> and our performance analysis on the BioRED benchmark, answer Research Question 2 posed at the beginning of this section. § DISCUSSION Starting from a domain-specific resource for curated genomic and associated data in cancer cell lines, we extended its "classical" online database paradigm towards a knowledge exploration resource through the implementation of our novel literature information extraction algorithms. This change enables researchers to use the existing data - such as annotated genomic variations, visual indication of structural variation events and disease-related annotations - to gain context-specific insights into molecular mechanisms through exploration of the added literature-derived information, either directly or to prioritize follow-up analyses. We showed that the extracted results can easily be related to the resource's hallmark CNV profiles, and this combination opens possibilities for knowledge expansion, including the critical evaluation of pre-existing annotations, which may be affected by the fast-mutating nature of cancer cell lines. In our information extraction implementation we have shown that an interactive bimodal exploration model can be achieved in a streamlined manner, even if one data source comprises unstructured information. The ubiquitous application of high-throughput molecular analyses, as well as their interpretation in an ever-increasing number of publications, drives a “data deluge” in biomedical research. Our work demonstrates an application of information extraction techniques to add a knowledge exploration dimension to a genomic data resource. By doing so, we provide a tool to increase the speed and depth of scientific research using computational linguistic methods. For this work, we provide an evaluation on a subset of the BioRED corpus (benchmarking of the underlying information extractor is provided in ()); however, no benchmark currently exists for extracting relations specifically between genes and cell lines. Instead, we provide a qualitative analysis in Section <ref>, demonstrating that our system can be applied to real-world data to discover new knowledge. We envision our system as a tool to dynamically discover novel data in tandem with a domain expert, rather than a traditional approach that can be directly evaluated using an existing benchmark. § ACKNOWLEDGMENTS This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 863410. MB receives support from the ELIXIR European bioinformatics organization for work related to the development of the GA4GH beacon protocol. Conflict of Interest: None declared. § DATA AVAILABILITY The data underlying this article are available at <https://github.com/progenetix/cancercelllines-web> and <https://pubmed.ncbi.nlm.nih.gov/>.
http://arxiv.org/abs/2307.05385v1
20230706181158
Learned Kernels for Interpretable and Efficient PPG Signal Quality Assessment and Artifact Segmentation
[ "Sully F. Chen", "Zhicheng Guo", "Cheng Ding", "Xiao Hu", "Cynthia Rudin" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.LG" ]
Photoplethysmography (PPG) provides a low-cost, non-invasive method to continuously monitor various cardiovascular parameters. PPG signals are generated by wearable devices and frequently contain large artifacts caused by external factors, such as motion of the human subject. In order to ensure robust and accurate extraction of physiological parameters, corrupted areas of the signal need to be identified and handled appropriately. Previous methodology relied either on handcrafted feature detectors or signal metrics, which yield sub-optimal performance, or on machine learning techniques such as deep neural networks (DNNs), which lack interpretability and are computationally and memory intensive. In this work, we present a novel method to learn a small set of interpretable convolutional kernels that has performance similar to – and often better than – the state-of-the-art DNN approach with several orders of magnitude fewer parameters. This work allows for efficient, robust, and interpretable signal quality assessment and artifact segmentation on low-power devices. § INTRODUCTION Photoplethysmography (PPG) is a non-invasive optical technique to measure blood volume changes in tissue by measuring changes in light absorption. It is commonly used to infer various cardiovascular parameters, such as blood oxygenation, heart rate, heart rate variability, and other related quantities <cit.>. It has become abundant in wearable consumer devices (e.g., smart watches), and there is a continuing effort to develop algorithms that extract meaningful data from PPG signals obtained from these devices. A notable limitation of wearable PPG devices is their sensitivity to motion-induced artifacts, which leads to corruption in the collected signals. This challenge is further exacerbated by the ambulatory nature of most users, as the PPG signals are collected during daily activities involving movement. Motion artifacts often harm measurement quality, and considerable effort is required to either identify and discard noisy data or reconstruct the corrupted segments. Specifically, the identification of motion-induced artifacts in PPG signals is surprisingly non-trivial. Previous methods relied on additional sensors (e.g., accelerometers) <cit.>; statistical methods such as standard deviation, skew, and kurtosis, or wavelet-based motion artifact reduction algorithms <cit.>; more sophisticated handcrafted feature detectors <cit.>; machine-learning techniques such as frequency-domain feature extraction with an ensemble of decision trees <cit.>; or, more recently, computationally expensive deep neural networks <cit.>. The aforementioned hand-crafted feature detectors and statistical methods provide clear indications and reasoning for why a segment is identified as an artifact, but often underperform compared to other methods, such as deep neural networks <cit.>.
However, deep neural nets contain millions of parameters, which require a significant amount of memory and computational resources, thus making them incompatible with small, low-power wearable devices. Furthermore, deep neural networks lack interpretability; it is difficult, if not impossible, to determine a rationale behind the identification of an artifact in the PPG signal. For the same reason, they are difficult to troubleshoot. In this work, we merge the best of all worlds: performance, interpretability, and computational efficiency. Specifically, we obtain state-of-the-art (SotA) results in artifact segmentation, and we have interpretability, all in a model with several orders of magnitude fewer parameters than the current SotA. We accomplish this by learning a set of convolution kernels that, when applied to the PPG signal and summed following a floor function, directly produces a measure of PPG signal quality. This measure can then be thresholded to segment and identify artifacts. Notably, this approach differs from deep convolutional neural networks in that there are no "layers" – only a single set of convolutions is applied to the signal. Furthermore, the mixing is "nearly linear," in the sense that the only non-linearity is a floor function setting negative values to zero. Thus, we can directly and precisely measure the contribution of any individual convolution to the predicted signal quality, and positive contributions are directly proportional to the output signal. Furthermore, our approach is similar to feature-detection-based approaches in that we learn a set of kernels that mimic handcrafted features. In our approach, for our smallest model, we learn so few kernels (12 total) that we can inspect these features by eye and observe the waveforms learned by our approach at a glance. This contrasts with deep neural networks, where there are so many parameters one cannot feasibly inspect the inner workings of the model, and even if one took the time to organize these features, it would still be unclear how each feature individually contributes to the output due to the built-in non-linearity. Additionally, our model outputs a real-valued signal that can be interpreted as a likelihood that the signal is an artifact, yielding a continuous value of signal quality. In our work, we threshold this value to segment artifacts, but one could conceivably use this signal to segment PPG signals into gradations of quality. § RELATED WORK §.§ Utilization of Extra Sensors Some approaches for motion artifact detection in PPG signals involve using additional sensors to provide supplementary information about the wearer's activity or motion. Several studies use accelerometers to detect artifacts <cit.>, or even a second PPG sensor <cit.>. By analyzing the data from these sensors, it is possible to identify periods of high motion and correlate them with artifacts in the PPG signal. The fusion of data from multiple sensors can improve the performance of artifact detection algorithms by providing a more complete picture of the wearer's activity. Although this approach can improve the performance of motion artifact detection, it also increases the complexity of the system and may require additional power and processing resources. Furthermore, these studies primarily focus on motion artifacts and cannot detect other sources of noise, such as noise introduced by external light sources, signal interference, or when sensors have poor skin contact.
§.§ Statistical Techniques Statistical and machine learning techniques involve applying various algorithms to extract features and build classifiers for detecting motion artifacts in PPG signals. Several studies have used least-squares-based methods, such as X-LMS <cit.> or adaptive filters <cit.>. Other methods have used statistical parameters like kurtosis or Shannon entropy to detect artifacts <cit.>. Additionally, machine learning techniques, such as support-vector machines, random forests, and naïve Bayes have also been employed <cit.>. These methods are desirable as they are simple, explainable, and low-compute. However, they often under-perform compared to deep learning-based approaches and typically focus on classification of discrete time-chunks rather than segmentation. §.§ Deep Learning Approaches Deep learning approaches have gained popularity in recent years due to their ability to automatically learn features and patterns from large datasets. Methods such as 1D convolutional neural networks <cit.>, 2D convolutional neural networks <cit.>, and U-Net type architectures <cit.> have been implemented with excellent results. These methods can achieve high performance but at the cost of increased computational complexity and lack of interpretability. Furthermore, many of these architectures, such as 1D and 2D convolutional neural networks are not inherently designed for segmentation tasks, and must be adapted to segmentation tasks via methods like GradCAM <cit.> or SHAP <cit.>, which adds another layer of complexity, and these methods are not always reliable. An excellent review of artifact-removal techniques has been provided by Pollreisz & TaheriNejad for further reference <cit.>. § DATASETS We evaluate our approach against the same datasets used by the current state-of-the-art algorithms <cit.>, namely PPG-DaLiA <cit.>, WESAD <cit.>, and TROIKA <cit.>. PPG-DaLiA is a multimodal dataset composed of recordings from 15 subjects performing real-world tasks. The data include electrocardiogram (ECG), accelerometer, and electrodermal recordings for various settings/activities, such as sitting still, walking up/down stairs, playing sports (soccer, cycling, walking), and driving. The WESAD data was recorded from wrist and chest PPGs from 15 subjects aged 21-55 (median 28) primarily focusing on varying the emotional state of the subject (neutral, stressed, amused). ECG, accelerometer, electrodermal, and wrist PPG signals were recorded. Lastly, the TROIKA data was recorded from subjects running on a treadmill (ages 18-35). Accelerometer, ECG, and PPG signals were recorded. This dataset represents the most challenging dataset for generalization since the subjects were under intense physical demands with poor signal quality owing to the large amounts of movement. We follow the same preprocessing procedure as Guo et al. <cit.>, namely applying a bandpass filter with cutoffs of 0.9 Hz and 5 Hz, splitting the data into 30-second chunks, and resampling to 64 Hz. However, instead of normalizing the chunks between [0, 1] as Guo et al. did, we instead opt to normalize each chunk to a unit normal distribution. We used Guo et al.'s publicly available artifact segmentation labels, which were created via a web-annotation tool and a group of skilled annotators <cit.>. § METHODS §.§ Overview In this study, we framed the task of PPG quality classification as a feature-detection task. 
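Before turning to that feature-detection model, the preprocessing applied to all three datasets (described in the previous section) can be sketched as follows. This is an illustrative reimplementation with an assumed filter order, not the authors' released code.

```python
# Illustrative preprocessing, mirroring the description above (bandpass
# 0.9-5 Hz, resample to 64 Hz, 30 s chunks, per-chunk standardization).
# This is a sketch, not the authors' released code.
import numpy as np
from scipy.signal import butter, filtfilt, resample

def preprocess(ppg: np.ndarray, fs_in: float, fs_out: float = 64.0):
    # Band-pass filter (assumed 4th-order Butterworth; the paper does not
    # state the filter order).
    b, a = butter(4, [0.9, 5.0], btype="bandpass", fs=fs_in)
    filtered = filtfilt(b, a, ppg)

    # Resample to 64 Hz.
    n_out = int(round(len(filtered) * fs_out / fs_in))
    resampled = resample(filtered, n_out)

    # Split into non-overlapping 30 s chunks and standardize each chunk
    # (zero mean, unit variance; the small epsilon guards against division by zero).
    chunk_len = int(30 * fs_out)  # 1920 samples
    chunks = []
    for start in range(0, len(resampled) - chunk_len + 1, chunk_len):
        chunk = resampled[start:start + chunk_len]
        chunks.append((chunk - chunk.mean()) / (chunk.std() + 1e-8))
    return np.stack(chunks) if chunks else np.empty((0, chunk_len))

demo = preprocess(np.random.randn(128 * 120), fs_in=128.0)  # 2 min of fake data
print(demo.shape)  # -> (4, 1920)
```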
We believed that clean PPG signals could be identified by certain common features, and PPG signals with artifacts could be identified by either the lack of these recurring features or by features shared by artifacts caused by factors such as movement. Our PPG signals are one-dimensional time-series signals, and we learned a set of kernels to identify these features. To achieve this, we learn a set of M convolution kernels of varying numbers to evaluate parameter scaling laws: M = 12 (small), 72 (medium), 384 (large). The M kernels are divided into sets of M/3 kernels of short (1.0 seconds), moderate (1.5 seconds), and long (3.0 seconds) sizes, with the goal of learning features of different lengths to improve classification quality. The kernels are convolved with the input PPG signal and the values are floored via max(0, x) to obtain a non-negative signal quality indicator. Additionally, a bias is added to each convolution channel. We compute a weighted sum of the convolutions to produce a single channel that represents a measure of signal quality, which is then transformed to range [0.0, 1.0] via the sigmoid function. We apply a 3rd-order Savitzky-Golay filter (∼0.8-second filter window) to smooth the output, then apply thresholding at a value of 0.5 to produce a binary output segmentation. This pipeline is shown in Figure <ref>. More formally, our model consists of M kernels, M scalar biases, and M scalar weights. Given an input signal, 𝐱, our learned kernels model (LK) is defined as follows: LK(𝐱) = σ(∑^M_m=0max(0, 𝐱*𝐤_𝐦 + b_m) · w_m ) where k_m, b_m, w_m are the m^th convolution kernel, bias, and signal weight respectively, * represents the convolution operation, · is scalar-vector multiplication, + is the addition of a single scalar to all elements of a vector, and σ is the element-wise sigmoid function, (1+e^-𝐱)^-1. In essence, our model is equivalent to a linear combination of filtered signals generated by convolving the input with a learnable kernel, mapped to range (0, 1). For the segmentation task, the output can be further post-processed via smoothing and thresholding to yield a segmentation map. §.§ Data Pre-processing Data was divided into 30-second chunks (1920 data points at 64 Hz) and normalized at the chunk level by subtracting the mean and dividing by the standard deviation. Normalization was performed at the chunk-level rather than the dataset-level to ensure that signal intensities remained relatively consistent regardless of the chunk being processed, improving generalization to distributions of PPG signals that may have different global parameters or greater inter-sample variability. §.§ Training To learn the kernels, we used stochastic gradient descent with Adam optimization <cit.> (β_1=0.9, β_2 = 0.999, ϵ=10^-8, weight decay =10^-4), and a linear learning rate decay schedule (decaying from 0.01 to 0.002 learning rate on the final iteration), with a binary cross-entropy error objective function. Since our dataset is on the order of 10^8 bytes and our model sizes are on the order of 10^3-10^5 bytes, we can compute the gradient over the entire dataset in a single pass to obtain the true gradient (as opposed to an estimate via a mini-batch). For the largest model sizes, we use gradient accumulation due to memory constraints. Thus, we train for 512 iterations computing the gradient over the entire training set. We find that this yields empirically better performance than stochastic mini-batches. 
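The model and training procedure just described can be condensed into the following sketch. It is a best-effort reconstruction from the text (kernel lengths of 64, 96, and 192 samples at 64 Hz; a max(0, ·) clamp; a weighted sum passed through a sigmoid; Adam with linear learning-rate decay and binary cross-entropy over the full training set), not the authors' code; padding choices and initialization are assumptions, and the Savitzky-Golay smoothing and 0.5 threshold used for segmentation are omitted.

```python
# Sketch of the learned-kernels (LK) model and full-batch training, as
# reconstructed from the description above; padding and initialization are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedKernels(nn.Module):
    def __init__(self, kernels_per_group: int = 4):
        super().__init__()
        # Short (1.0 s), moderate (1.5 s), long (3.0 s) kernels at 64 Hz.
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, kernels_per_group, k, padding=k // 2)
             for k in (64, 96, 192)]
        )
        self.weights = nn.Parameter(torch.randn(3 * kernels_per_group) * 0.01)

    def forward(self, x):                     # x: (batch, 1, 1920)
        outs = [F.relu(conv(x)) for conv in self.convs]          # max(0, x*k + b)
        feats = torch.cat([o[..., : x.shape[-1]] for o in outs], dim=1)
        quality = (feats * self.weights[None, :, None]).sum(dim=1)
        return torch.sigmoid(quality)         # (batch, 1920), values in (0, 1)

def train_full_batch(model, x, y, iters=512, lr0=0.01, lr1=0.002):
    opt = torch.optim.Adam(model.parameters(), lr=lr0,
                           betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.LinearLR(
        opt, start_factor=1.0, end_factor=lr1 / lr0, total_iters=iters)
    for _ in range(iters):                    # true gradient over the full set
        opt.zero_grad()
        loss = F.binary_cross_entropy(model(x), y)
        loss.backward()
        opt.step()
        sched.step()
    return model

model = train_full_batch(LearnedKernels(),
                         torch.randn(8, 1, 1920), torch.rand(8, 1920))
```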
We also opt for such an extreme coverage of the training set (in terms of epoch count), as we find it empirically difficult to overfit the training data given our parameter count and iteration count, especially at smaller kernel numbers. §.§ Parameter Reduction We take several steps to further reduce parameter count to optimize the compute-to-performance ratio. These steps take place after the training procedure outlined in Figure <ref> and are used for deployment. §.§.§ Weight Absorption We can reduce the parameter count of our model by absorbing the weighting factor into the kernels themselves, due to the "almost linear" property of the model. Let x be the input PPG signal, which is a vector of length 1920. Let k_m, b_m, w_m be the m^th convolution kernel, bias, and signal weight respectively. The kernels are vectors of length 192, 96, or 64 depending on the length class (long, moderate, or short), while the biases and weights are scalars. We have (the outer sigmoid is omitted here, as it is unaffected by this reparametrization): LK(𝐱) = ∑^M_m=0max(0, 𝐱*𝐤_𝐦 + b_m) · w_m = ∑^M_m=0max(0, (𝐱*𝐤_𝐦 + b_m) · |w_m|) ·sgn(w_m) = ∑^M_m=0max(0, (𝐱*𝐤_𝐦)·|w_m| + b_m|w_m|) ·sgn(w_m) = ∑^M_m=0max(0, [𝐱*(𝐤_𝐦·|w_m|)] + b_m|w_m|) ·sgn(w_m), where |·| denotes the element-wise absolute value. The quantity 𝐤_𝐦·|w_m| becomes the new kernel, and the quantity b_m|w_m| becomes the new bias. We reduce parameter count as we can now store the kernels themselves along with a corresponding binary value indicating whether they are positive or negative kernels. Furthermore, each kernel can be neatly categorized into "contributing to good signal quality" (positive kernels) or "contributing to poor signal quality" (negative kernels). This process is performed after learning the kernels, with the goal of reducing the memory footprint and the number of scalar operations for deployed models. §.§.§ Pruning Driven by the observation that some learned kernels look quite similar, we hypothesized that we could prune similar kernels while maintaining good performance. To measure the similarity between kernels, we use the cosine similarity metric. Given two vectors v_1 and v_2, the cosine similarity is defined as: similarity(v_1, v_2) = (v_1 · v_2) / (‖v_1‖_2 ‖v_2‖_2). Cosine similarity measures the cosine of the angle between two vectors, resulting in a value between -1 and 1. A value of 1 indicates that the vectors are perfectly aligned, while a value of -1 indicates that they point in opposite directions. Our kernel pruning method consists of the following steps: * For each convolutional layer, compute the cosine similarity between all pairs of kernels. * Sort the kernel pairs by their cosine similarity (in order of descending similarity), and select the top pa pairs to prune, where pa is a hyperparameter. * For each selected pair to prune, determine which of the two kernels to remove and which to keep based on their effective contribution to the signal (described below). To determine which kernel to prune, we compute the "effective contribution" of the kernel as: eff_j = w_j · m_j, eff_k = w_k · m_k where w_j and w_k are the weights of the two kernels in a pair and m_j and m_k are the mean absolute values of the corresponding kernels, respectively. We keep the kernel with the higher effective contribution and prune the other kernel. When pruning a kernel, we need to update the weight and bias of the remaining kernel to correct for the removed kernel's contribution. Without loss of generality, assume that eff_j > eff_k. We update the weight and bias of the remaining kernel j as follows: w_j' = (eff_j + eff_k) / m_j, b_j' = b_j + b_k.
where b_j and b_k are the biases for weight w_j and w_k respectively. § EXPERIMENTS The following experiments aim to compare our methodology to several common baselines, as well as a state-of-the-art multi-million parameter architecture based on U-Net. We show that our segmentation algorithm matches or exceeds the state-of-the-art in several benchmark PPG tasks. §.§ Evaluation To evaluate the performance of the baselines and learned kernels for artifact detection, we used the DICE score, defined as DICE(A, B) = 2|A ∩ B|/|A| + |B| where A and B are the binary segmentation maps of the predicted and ground truth, respectively, and |·| denotes the cardinality of a set. We employed this score on a test set of PPG signals, with each 30-second chunk of PPG signal normalized via the method described earlier. We conducted 10-fold cross-validation using 10 different models trained from different starting seeds. §.§ Baselines We compare to the same baselines tested in Guo et al. <cit.>. Namely, a convolutional neural network sliding window approach, a template-matching approach, and a ResNet-34-based classifier with segmentation performed via GradCAM <cit.> or SHAP <cit.>. §.§.§ Convolutional Sliding Window Our first baseline is a 1D-convolutional network trained to classify 3-second windows of PPG signal as either “artifact” or “clean.” The network consists simply of 3 blocks of convolution-ReLU-BatchNorm-MaxPool. The 3 blocks consist of kernel sizes 10, 5, and 3, and the channel numbers are 64, 64, and 128. The model was trained from 5000 three-second windows that were randomly selected from the PPG-DaLiA training set. Training consisted of 200 epochs of minimizing cross-entropy loss with Adam. Testing was done by segmenting the 30-second PPG signal into 3 second chunks with the trained convolutional neural network classifier, and a sliding window with a step size of 1 second was used for testing. Since the trained convolutional network has an input size of 3-second chunks, each second of PPG signal receives 3 classification outputs. A 1-second chunk of PPG signal is classified as an artifact if any of the three classification outputs are classified as an artifact. §.§.§ Template-Matching We follow the same approach as Guo et al. <cit.>, which was inspired by Lim et al.<cit.>. Signals were first divided from the PPG-DaLiA training set into separate pulses via peak detection. 10 clean (artifact-free) pulses were then selected to serve as our standard templates for comparison with test pulses. Each pulse from the test data was compared to these 10 templates, using the dynamic time warping (DTW) distance metric to measure similarity. The smallest DTW distance (out of the 10 comparisons with the templates) was then identified. The range of the DTW distance function is [0, ∞), but empirically we find that the DTW distances between our templates and the PPG signals fall mostly within the range [0, 10]. We created a binary predicted label by labeling a segment as an artifact if the minimum DTW distance is at least 1. Other samples were given “non-artifact” predicted labels. The threshold of DTW was chosen by testing thresholds between 0 and 10 over the entire train set and choosing the threshold that produced the highest DICE score (which was 1). To summarize, if the minimum DTW distance α exceeds our threshold, we classify the entire pulse (and all its timesteps) as an artifact. 
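The DICE score used for all of these comparisons is straightforward to compute from binary masks; a minimal sketch follows (the convention of returning 1.0 when both masks are empty is our own choice, not taken from the paper).

```python
# Minimal DICE score between binary segmentation masks, as defined above.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """2|A ∩ B| / (|A| + |B|) for boolean arrays; returns 1.0 if both are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

print(dice(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])))  # -> 0.666...
```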
§.§.§ GradCAM & SHAP In this baseline experiment, the Resnet-34 architecture proposed by Dai et al (2016) was employed for 1D binary classification (identifying `clean' or `artifact'). This architecture, traditionally used for image classification, was repurposed for time series analysis via GradCAM <cit.>. Considering the relatively small size of the training dataset, transfer learning was performed on a pre-trained Resnet34-1D PPG signal quality classifier, as described by Zhang et al (2021) <cit.>. This pre-trained model was originally trained on the UCSF PPG dataset according to Pereira et al (2019) <cit.>. The last two residual blocks, global average pooling, and the final dense layer were subsequently retrained by Guo et al. <cit.>. To train a classification model, ground truth labels were created such that any signal that contained artifact timesteps was labeled as an artifact, while all other signals were labeled as non-artifacts. After this labeling process, the training set consisted of 175 clean signals and 3261 artifact signals. The model was trained using the Adam optimizer <cit.>, using the binary cross-entropy loss. An initial learning rate of 10^-5 was set, which was scheduled to reduce to 5 × 10^-6 after 10 epochs, and further decrease to 1 × 10^-6 after 50 epochs. The maximum number of training epochs was capped at 100. While the Resnet34-1D network is fundamentally a classifier, our goal is instead to obtain segmentation labels. Assuming that the model would focus on artifacts when tasked with predicting an artifact signal, we employed two post-hoc explanatory methods to generate segmentation masks: SHAP <cit.> and GradCAM <cit.>. SHAP is a unified measure of feature importance that assigns each feature an importance value for a particular prediction. It is based on game theory and computes Shapley values. The Shapley value of a feature represents the average marginal contribution of that feature across all possible feature subsets. GradCAM utilizes the gradients of any target concept flowing into the final convolutional layer of a CNN to produce a coarse localization map highlighting the important regions in the image (or in this case, a time series) for predicting the concept. This is done by first computing the gradient of the output category with respect to feature maps of the last convolutional layer, then applying a global average pooling over the gradients to obtain weights for the feature maps. These weights are multiplied with the feature maps and summed over all maps to generate the final output. This output can be used as a heatmap, indicating which parts of the input were important in making the model's prediction. GradCAM can be applied to any CNN-based network without requiring architectural changes or re-training. For the SHAP values, the computed values were first smoothed with a Gaussian filter. A binary segmentation mask was then created by choosing the class with the higher SHAP value at each timestep. §.§ Segade Finally, we compare to the current state-of-the-art, Segmentation-based Artifact Detection (Segade) by Guo et al. <cit.>. Segade is a 1D segmentation network similar in architecture to UNet, with a few changes; notably, the convolutions are replaced with residual convolutions for better gradient propagation and the convolutions are one-dimensional as the data type is one-dimensional. We refer to <cit.> for a detailed discussion of the architecture. 
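For the SHAP-based baseline, the final post-processing step (smoothing the per-class attributions and taking the larger class at each timestep) can be sketched as below; the Gaussian smoothing width is an assumed value, since it is not specified in the text.

```python
# Sketch of the attribution-to-mask post-processing described above:
# Gaussian-smooth per-class SHAP values, then pick the higher class per timestep.
# The smoothing sigma is an assumed value.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def shap_to_mask(shap_clean: np.ndarray, shap_artifact: np.ndarray,
                 sigma: float = 8.0) -> np.ndarray:
    smoothed_clean = gaussian_filter1d(shap_clean, sigma)
    smoothed_artifact = gaussian_filter1d(shap_artifact, sigma)
    return (smoothed_artifact > smoothed_clean).astype(np.uint8)  # 1 = artifact

t = np.linspace(0, 30, 1920)
mask = shap_to_mask(np.cos(t), np.sin(t))
print(mask.shape, mask.sum())
```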
§ RESULTS §.§ Test Set Performance We find that our largest model significantly exceeds the performance (measured in DICE score) of all baselines except the current state-of-the-art (Segade), for which it achieves >99% of its DICE-score on DaLiA and WESAD, with less than 2% the parameter count of the state-of-the-art model. Our largest model even manages to significantly outperform all other baselines, including Segade, on the most challenging and least in-distribution dataset (TROIKA). Our medium model, at 0.4% of the parameter count of Segade, achieves at least 98% of Segade's DICE score on all datasets and matches Segade's performance on TROIKA. Lastly, our smallest model, at 0.06% of the parameter count of Segade, reaches 94% of the DICE score of Segade on DaLiA and WESAD, though struggles on TROIKA, reaching 86% of the DICE score of Segade. Detailed results and parameter counts are in Table <ref>. §.§ Scaling Laws In this section, we explore the scaling behavior of our approach by gradually increasing the number of kernels and evaluating the performance on the test set. Our main objective is to understand the relationship between the parameter count and test-set performance, which will help us identify the potential limits and robustness of our approach. To investigate the scaling behavior, we first conduct a series of experiments with different numbers of kernels and measure the corresponding test-set performance using the DICE score. As anticipated, we observe an asymptotic relationship between the parameter count and the test-set performance, as shown in Figure <ref>. Our analysis reveals that the test-set performance plateaus and remains roughly monotonically increasing as the parameter count increases, without exhibiting signs of overfitting. This finding suggests that our approach is robust even at extremely high parameter counts. In summary, our exploration of scaling laws demonstrates an asymptotic relationship between parameter count and test-set performance, providing insights into the limits and robustness of our approach. The absence of overfitting at high parameter counts attests to the stability of our architecture. We note that it is possible that a theoretical performance limit exists due to mislabelings in the test set; this is discussed further in the Discussion section. §.§ Interpretability A considerable benefit of our approach is clarity of the signal processing of the PPG, unlike in traditional deep neural networks. Since our model consists of a single layer of convolutions of varying kernel sizes, we can simply peer inside the model and determine how much each set of convolutions (and even each individual kernel) contributes to the overall classification signal. To begin our exploration, we look to quantify the effect each kernel has on the output signal when presented with a perfect match to its signal. To compute the importance of the m^th kernel, we calculate the following expression: kernel importance_m = (∑_kk^2_m,k + b_m ) ·w_m where ∑_kk^2_m,k is the sum of each squared component of the m^th kernel, b_m is the m^th kernel's bias, and w_m is the m^th kernel's weight. This, by definition, computes the value added to the final signal by that kernel when the kernel perfectly overlaps with an exact template match. Furthermore, we also note that in the case of a normalized kernel and bias (i.e. ∑_kk^2_m,k + b_m = 1), the kernel importance directly equals the weighting of that kernel. 
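Translating the kernel importance definition above into code is direct; the sketch below assumes the learned parameters are available as plain arrays.

```python
# Kernel importance as defined above: (sum of squared kernel entries + bias) * weight.
# Sketch only; `kernels` is a list of 1-D arrays of differing lengths.
import numpy as np

def kernel_importance(kernels, biases, weights):
    return np.array([(np.sum(k ** 2) + b) * w
                     for k, b, w in zip(kernels, biases, weights)])

# Toy example with three kernels of different lengths.
ks = [np.ones(64) * 0.1, np.ones(96) * 0.05, np.ones(192) * 0.02]
print(kernel_importance(ks, biases=[0.0, -0.1, 0.2], weights=[-1.0, 0.5, 2.0]))
```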
We run this kernel analysis across all 10 cross-validation models for the small, medium, and large model sizes. We also compute and record the output values of each kernel group over the whole test dataset. Essentially, we perform the convolution and weighting operation for each set of kernels grouped by size (long, moderate, or short), along the entire test dataset, concatenating the output values to one array. We further segregate the output values depending on whether they were generated by PPG signals labeled as artifact or clean. In the framework of our model, this is equivalent to computing: ∑_𝐤, b, w ∈𝐊, B, Wmax(0, 𝐱 * 𝐤 + b) · w where 𝐊, B, and W are the set of kernels, biases, and weights associated with the kernel group in question (i.e., the long, moderate, or short kernels). We notice a clear pattern between kernel size and kernel importance. We find that for the medium and large models, the long kernels have significantly more positive (p < 10^-13) mean importance than the short and moderate kernels, with significance computed via two-tailed independent samples t-tests. For the small model, the long and moderate kernels have a mean importance significantly more positive than the small kernels (p < 10^-6) (see Figure <ref>). These results indicate that long kernels generally learn to recognize poor-quality signal features, while medium and short kernels learn to recognize clean PPG features. We find this to be empirically true, with long kernels producing positive values most of the time, moderate kernels producing mostly negative or mixed positive/negative values (as seen in the medium-sized model), and short kernels producing mostly negative values (Figure <ref>). We demonstrate one such 30-second example of this in Figure <ref>. The agreement between our theoretical exploration of kernel importance and our empirical exploration of the values produced by various kernel groups over the test sets validates the interpretability of our model. These results are shown in Figure <ref> by comparing the top and bottom rows. Next, we note that because our smallest model consists of only 12 kernels, we can easily visually inspect each kernel. They are all shown in Figure <ref>. This is in stark contrast to compared to most deep-learning models, where the parameters are typically so numerous that it is intractable to manually inspect each parameter; here, all of them can be shown on one page. Furthermore, parameters in deep neural networks are typically uninterpretable, whereas our model represents a template-matching paradigm with a waveform. §.§ Pruning Experiments We find that applying our pruning procedure on long kernels provides the greatest parameter reduction with the least performance trade-off. Pruning moderate and/or short kernels impacts performance more heavily and ultimately does not prune as many parameters as pruning long kernels, since each long kernel contains more parameters by definition. Ultimately, the choice of prune-to-performance ratio is up to the user and specific use case, though we provide an example pruning scheme in Table <ref> which attempts to maintain performance around 96% of the mean original performance. Notably, as the model size increases, we are able to remove progressively more parameters with less of a decrease in performance. We hypothesize this may be due to increased capacity of larger models and perhaps greater learned redundancy. 
Furthermore, because our model is a well-behaved “almost linear” system, model weights can be naïvely quantized with virtually no loss in performance. In fact, a simple cast from float32 to float16 yielded indistinguishable results. In contrast, the quantization process for a deep neural network is non-trivial, as error accumulates and propagates through the network. §.§ Data Limitations The extrapolated performance ceiling for our approach prompted an inspection of the test set. We computed the accuracy of our approach on each 30-second sample in the test set, then sorted the test set by accuracy. Upon inspection of the test set samples on which our approach performed most poorly, we find that the source of this disagreement corresponds to suboptimal labeling rather than inaccurate classification by our approach. In other words, our method was correct and the labels were wrong. Several such inaccuracies are shown in Figure <ref>. Many of these inaccuracies in labeling seem to stem from PPG signals that have abnormally low amplitude compared to other signals in the dataset. Although this is unusual, we do not consider it an artifact, since the PPG signal retains its structure and signal-to-noise ratio. Furthermore, a PPG signal could have diminished amplitude due to factors like light travel distance, the brightness of the source light, lessened perfusion of the appendage (e.g., if the appendage is cold), etc. § DISCUSSION We presented a novel, simple, and interpretable method for signal quality assessment using learned convolutions. Our approach achieved near state-of-the-art performance on three benchmark datasets with several orders of magnitude fewer parameters than deep learning methods, while showing better generalization to out-of-distribution data, as evidenced by the markedly improved performance on the most out-of-distribution task, the TROIKA dataset. This method's simplicity and interpretability allow for the inspection of the model's inner workings, providing insights into the contribution of each kernel to the overall signal quality assessment. Furthermore, our method demonstrates extremely low compute and memory demands, setting it apart from deep-learning-based approaches. This enables high-quality signal assessment on low-power wearable devices. Our results demonstrated that our learned kernels can effectively capture relevant features for identifying good and poor-quality PPG signals. The scaling behavior of our approach showed an asymptotic relationship between parameter count and test-set performance. Our analysis of the kernel importance revealed that larger kernels primarily contribute to the identification of poor-quality signal features, while smaller and medium-sized kernels capture normal PPG features. Importantly, our work demonstrates that for some complex tasks, simple and interpretable models can perform just as well as deep neural networks and may even be preferable. Furthermore, we successfully applied parameter reduction techniques to improve the compute-to-performance ratio, keeping in mind the low-power limitations of wearable devices. We demonstrated that we can remove over 20% of parameters in large models and maintain above 97% of the original performance. We also demonstrated that, owing to the simplicity and near linearity of our model, we can naïvely quantize the model weights with no impact on performance.
This is in stark contrast to deep neural networks, where the chaotic properties of the system can amplify and propagate error due to quantization, leading to performance degradation. Quantization is also often a non-trivial process in deep neural networks as a result of dynamic range issues, again owing to the chaotic nature of neural networks. Our proposed method suffers from none of the aforementioned complexities. All in all, our largest model can run in under 100 KB of memory when cast to float16 and can run in under 75 KB after weight pruning. Our smallest model runs using under 3 KB of memory cast to float16. Computing a raw (i.e., without post-process smoothing) signal score is extremely computationally efficient, with our largest model requiring on the order of 10 MFlops/second of signal processing. This memory-efficient and compute-efficient approach means that designers of low-power, wearable devices can now achieve state-of-the-art PPG signal quality assessment on extremely stringent compute budgets. This starkly contrasts with deep learning methods which are memory and compute-intensive. Additionally, implementation of our architecture is simple and straightforward to adapt to lower-level programming languages, whereas deep learning-based methods require the time-intensive re-implementation of complex deep learning operations into lower-level devices. We also highlighted potential limitations in the datasets used for evaluation, as some labeling inaccuracies and discrepancies were observed in the test set samples with the poorest performance. This suggests that the practical maximum achievable performance for PPG artifact detection on these test sets may be limited by the quality of the available ground truth labels. In conclusion, our method provides a robust, interpretable, and efficient approach for PPG signal quality assessment, making it competitive with deep learning approaches and suitable for deployment on low-power devices. Future work could focus on investigating the patterns observed in the kernel importances, further optimizing the model for improved performance, and addressing potential limitations in the training data to ensure more accurate and reliable results. Additionally, the application of our method to other physiological signals and tasks could be explored, as well as the integration of our approach with other wearable devices for real-time signal quality assessment. 9 ray2021 Daniel Ray, Tim Collins, Sandra Woolley, et al. “A Review of Wearable Multi-wavelength Photoplethysmography”. IEEE Reviews in Biomedical Engineering, 2021. hoogantink2021 Christoph Hoog Antink, Yen Mai, Mikko Peltokangas, et al. “Accuracy of heart rate variability estimated with reflective wrist-PPG in elderly vascular patients”. Scientific Reports, vol. 11, no. 1, 2021. ding2021 Xiaorong Ding, Wenjin Wang, Yifan Chen, et al. “Feasibility Study of Pulse Width at Half Amplitude of Camera PPG for Contactless Blood Pressure Estimation”. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2021. spierer2015 David K. Spierer, Zohn Rosen, Leib L. Litman, et al. “Validation of photoplethysmography as a method to detect heart rate during rest and exercise”. Journal of Medical Engineering & Technology, vol. 39, no. 5, 2015. nabeel2017 P. M. Nabeel, J. Jayaraj, S. Mohanasankar. “Single-source PPG-based local pulse wave velocity measurement: a potential cuffless blood pressure estimation technique”. Physiological Measurement, vol. 
38, no. 12, 2017. nilsson2013 Lena M. Nilsson. “Respiration signals from photoplethysmography”. Anesthesia and Analgesia, vol. 117, no. 4, 2013. castaneda D. Castaneda, A. Esparza, M. Ghamari, C. Soltanpur, and H. Nazeran. “A review on wearable photoplethysmography sensors and their potential future applications in health care”. Int J Biosens Bioelectron, 2018. accel_PPG M. Pflugradt and R. Orglmeister. “Improved signal quality indication for photoplethysmographic signals incorporating motion artifact detection”. 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 2014, pp. 1872-1875 handcrafted_1 S. Cherif, D. Pastor, Q. -T. Nguyen and E. L'Her. “Detection of artifacts on photoplethysmography signals using random distortion testing”. 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 2016, pp. 6214-6217. handcrafted_2 J. X. Sun, A. T. Reisner and R. G. Mark. “A signal abnormality index for arterial blood pressure waveforms”. 2006 Computers in Cardiology, Valencia, Spain, 2006, pp. 13-16. ML_1 E. Lutin, D. Biswas, N. Simoes-Capela, C. Van Hoof, and N. Van Helleputte. “Learning based quality indicator aiding heart rate estimation in wrist-worn PPG”. Annu Int Conf IEEE Eng Med Biol Soc, November 2021, pp. 7063-7067. stat_1 A. M. Tăuţan, A. Young, E. Wentink, and F. Wieringa. “Characterization and reduction of motion artifacts in photoplethysmographic signals from a wrist-worn device”. Annu Int Conf IEEE Eng Med Biol Soc, 2015, pp. 6146-6149. stat_2 M. Elgendi. “Optimal Signal Quality Index for Photoplethysmogram Signals”. Bioengineering (Basel), vol. 3, no. 4, pp. 21, Sep. 2016. DL_1 J. Chen, K. Sun, Y. Sun and X. Li. “Signal Quality Assessment of PPG Signals using STFT Time-Frequency Spectra and Deep Learning Approaches”. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Mexico, 2021, pp. 1153-1156. DL_2 T. Edinburgh, P. Smielewski, M. Czosnyka, M. Cabeleira, S.J. Eglen, and A. Ercole. “DeepClean: Self-Supervised Artefact Rejection for Intensive Care Waveform Data Using Deep Generative Learning”. In: B. Depreitere, G. Meyfroidt, and F. Güiza (eds) Intracranial Pressure and Neuromonitoring XVII. Acta Neurochirurgica Supplement, vol 131. Springer, Cham, 2021. bashar S. K. Bashar, D. Han, S. Hajeb-Mohammadalipour, E. Ding, C. Whitcomb, D. D. McManus & K. H. Chon. “Atrial Fibrillation Detection from Wrist Photoplethysmography Signals Using Smartwatches”. Sci Rep, vol. 9, 15054, 2019. wood L. B. Wood and H. H. Asada, “Noise Cancellation Model Validation for Reduced Motion Artifact Wearable PPG Sensors Using MEMS Accelerometers”. 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 2006, pp. 3525-3528. puranik S. Puranik and A. W. Morales. “Heart Rate Estimation of PPG Signals With Simultaneous Accelerometry Using Adaptive Neural Network Filtering”. in IEEE Transactions on Consumer Electronics, vol. 66, no. 1, pp. 69-76, Feb. 2020. hara S. Hara et al, “Parameter optimization of motion artifact canceling PPG-based heart rate sensor by means of cross validation”. 2017 11th International Symposium on Medical Information and Communication Technology (ISMICT), pp. 73-76, 2017. tanweer K. T. Tanweer, S. R. Hasan and A. M. Kamboh. “Motion artifact reduction from PPG signals during intense exercise using filtered X-LMS”. 
2017 IEEE International Symposium on Circuits and Systems (ISCAS), Baltimore, MD, USA, 2017. wu C. -C. Wu, I. -W. Chen and W. -C. Fang. “An implementation of motion artifacts elimination for PPG signal processing based on recursive least squares adaptive filter”. 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), Turin, Italy, 2017, pp. 1-4. hanyu S. Hanyu and C. Xiaohui. “Motion artifact detection and reduction in PPG signals based on statistics analysis”. 2017 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 2017, pp. 3114-3119. selvaraj N. Selvaraj, Y. Mendelson, K. H. Shelley, D. G. Silverman and K. H. Chon. “Statistical approach for the detection of motion/noise artifacts in Photoplethysmogram”. 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 2011, pp. 4972-4975. lin W. -J. Lin and H. -P. Ma. “A physiological information extraction method based on wearable PPG sensors with motion artifact removal”. 2016 IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 2016, pp. 1-6. athaya T. Athaya and S. Choi. “An Efficient Fingertip Photoplethysmographic Signal Artifact Detection Method: A Machine Learning Approach”. Journal of Sensors, vol. 2021, Article ID 9925033, 18 pages, 2021. goh C. -H. Goh, L. K. Tan, N. H. Lovell, S. -C. Ng, M. P. Tan, and E. Lim. “Robust PPG motion artifact detection using a 1-D convolution neural network”. Computer Methods and Programs in Biomedicine, Volume 196, 2020, 105596. liu X. Liu, Q. Hu, H. Yuan and C. Yang. “Motion Artifact Detection in PPG Signals Based on Gramian Angular Field and 2-D-CNN”. 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Chengdu, China, 2020, pp. 743-747. sota Zhicheng Guo, Cheng Ding, Xiao Hu, and Cynthia Rudin. “A supervised machine learning semantic segmentation approach for detecting artifacts in plethysmography signals from wearables”. Physiological Measurement, vol. 42, no. 12, p. 125003, 2021. gradcam R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh and D. Batra. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization”. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 618-626. shap H. Chen, S. M. Lundberg, and S. I. Lee. “Explaining a series of models by propagating Shapley values”. Nat Commun, 13, 4512 (2022). pollreisz D. Pollreisz and N. TaheriNejad. “Detection and Removal of Motion Artifacts in PPG Signals”. Mobile Netw Appl, vol. 27, pp. 728-738, 2022. dalia A. Reiss, I. Indlekofer, P. Schmidt, and K. Van Laerhoven. “Deep PPG: Large-Scale Heart Rate Estimation with Convolutional Neural Networks”. Sensors (Basel), vol. 19, no. 14, pp. 3079, Jul. 2019. wesad Philip Schmidt, Attila Reiss, Robert Duerichen, Claus Marberger, and Kristof Van Laerhoven. “Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection”. In Proceedings of the 20th ACM International Conference on Multimodal Interaction (ICMI '18), New York, NY, USA, 2018, pp. 400-408. troika Z. Zhang, Z. Pi and B. Liu. “TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type Photoplethysmographic Signals During Intensive Physical Exercise”. IEEE Transactions on Biomedical Engineering, vol. 62, no. 2, pp. 522-531, Feb. 2015. lim P. K. Lim, S. C. Ng, N. H. Lovell, Y. P. Yu, M. P. Tan, D. McCombie, E. Lim, and S. J. Redmond. 
“Adaptive template matching of photoplethysmogram pulses to detect motion artefact”. Physiol. Meas., vol. 39, no. 10, 105005, 2018. kingma D. Kingma and J. Ba. “Adam: A Method for Stochastic Optimization”. Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). zhang O. Zhang, C. Ding, T. Pereira, R. Xiao, K. Gadhoumi, K. Meisel, R. J. Lee, Y. Chen and X. Hu. “Explainability metrics of deep convolutional networks for photoplethysmography quality assessment”. IEEE Access, vol. 9, 2021, pp. 29736-29745. pereira T. Pereira, N. Tran, K. Gadhoumi, M. M. Pelter, D. H. Do, R. J. Lee, R. Colorado, K. Meisel, and X. Hu. “Photoplethysmography based atrial fibrillation detection: a review”. Npj Digital Med., vol. 3, no. 3, 2020.
http://arxiv.org/abs/2307.02700v1
20230706002341
Generation of robust temporal soliton trains by the multiple-temporal-compression (MTC) method
[ "André C. A. Siqueira", "Guillermo Palacios", "Albert S. Reyna", "Boris A. Malomed", "Edilson L. Falcão-Filho", "Cid B. de Araújo" ]
physics.optics
[ "physics.optics", "nlin.PS" ]
André C. A. Siqueira (andrechaves.physics@gmail.com)^1, Guillermo Palacios^2, Albert S. Reyna^3, Boris A. Malomed^4,5, Edilson L. Falcão-Filho^1, and Cid B. de Araújo^1. ^1 Departamento de Física, Universidade Federal de Pernambuco, Recife 50670-901, Pernambuco, Brazil. ^2 Comissão Nacional de Energia Nuclear, Centro Regional de Ciências Nucleares do Nordeste – CNEN/CRCN-NE, Recife 50740-545, Pernambuco, Brazil. ^3 Programa de Pós-Graduação em Engenharia Física, Unidade Acadêmica do Cabo de Santo Agostinho, Universidade Federal Rural de Pernambuco, Cabo de Santo Agostinho 54518-430, Pernambuco, Brazil. ^4 Department of Physical Electronics, School of Electrical Engineering, Faculty of Engineering, Tel Aviv 69978, Israel. ^5 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica, Chile. We report results of systematic numerical analysis for multiple soliton generation by means of the recently reported multiple temporal compression (MTC) method, and compare its efficiency with conventional methods based on the use of photonic crystal fibers (PCFs) and fused silica waveguides (FSWs). The results show that the MTC method is more efficient in controlling the soliton fission, giving rise to a larger number of fundamental solitons with high powers that remain nearly constant over long propagation distances. The high efficiency of the MTC method is demonstrated, in particular, in terms of multiple soliton collisions and the Newton's-cradle phenomenology. Keywords: Multiple solitons; Ultrashort pulses; Supercontinuum generation; Newton's cradle. § INTRODUCTION In the course of the last decades, the investigation of temporal solitons has been widely developed for the study of various ultrafast optical phenomena, from the viewpoint of both fundamental research and applications <cit.>. Temporal solitons have been studied in the context of optical communications, where they may be employed as data bits <cit.>. The generation of self-confined solitary waves is commonly achieved by balancing the group-velocity dispersion (GVD) and self-phase modulation (SPM) in nonlinear (NL) media. For this purpose, the power of the incident beam must exceed the self-confinement critical level (P_crit), so that the nonlinearity allows the input pulse to maintain its temporal envelope during the propagation <cit.>. Thus, if the incident power is higher than P_crit, the input pulse evolves into a fundamental soliton with the remaining energy shed off as dispersive waves. If the incident power is much higher than P_crit, multiple fundamental solitons may be generated, under specific conditions <cit.>. One widely used method for generating temporal solitons is based on dispersion and nonlinearity engineering in photonic crystal fibers (PCFs) <cit.>. This methodology has been useful for investigating the generation of temporal solitons in the near-infrared and visible ranges, due to the ability of PCFs to shift the anomalous dispersion region towards much lower wavelengths than those attainable with standard fibers. It is relevant to stress that, due to the high nonlinearity, low confinement loss and tunable chromatic dispersion that PCFs exhibit, they have been commonly used to investigate the evolution of fundamental and higher-order solitons.
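As a quantitative aside, whether an input pulse launches a single fundamental soliton or a higher-order soliton that later undergoes fission is commonly gauged by the textbook soliton order N, with N^2 = γ P_0 T_0^2/|β_2| for a sech pulse of peak power P_0 and width T_0. This relation does not appear explicitly in the text, and the short Python sketch below evaluates it only for parameter values of the same order as those quoted later in Section 2 (PCF case), purely for illustration.

import numpy as np

# Standard estimate N^2 = gamma * P0 * T0^2 / |beta2| for a sech input pulse,
# using parameter values of the order quoted in Section 2 for the PCF case.
gamma = 0.045              # W^-1 m^-1
beta2 = -1.27e-2 * 1e-24   # ps^2/m converted to s^2/m
T0 = 90e-15 / 1.763        # s; T_FWHM = 90 fs for a sech pulse

for P0 in (2.5e3, 5e3, 25e3, 30e3):  # W
    N = np.sqrt(gamma * P0 * T0**2 / abs(beta2))
    print(f"P0 = {P0/1e3:5.1f} kW  ->  soliton order N ~ {N:4.1f}")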
The first-order (fundamental) solitons are the only ones that strictly maintain a constant profile during the propagation, while higher-order solitons show a periodically varying shape. Under the action of perturbations, higher-order solitons split into several fundamental solitons, each centered on a different wavelength, in the process called soliton fission <cit.>. Although soliton fission works well in PCF, its efficiency in producing multiple-soliton trains is poor. This occurs because, during the fission, strong intrapulse Raman scattering (IRS) is induced as a result of significant temporal compression around the central region of the input pulse, caused by the generation of the first soliton exhibiting a very broad supercontinuum spectrum. The dissipative effect of the IRS limits the soliton propagation over long distances, and reduces the number of generated solitons, as a large portion of the energy remains confined in the first soliton. Recently, we reported a new methodology, called multiple temporal compression (MTC), to generate multiple temporal ultrashort solitons in a more efficient way <cit.>. The MTC method is based on a specific version of the general scheme of the nonlinearity management <cit.>, viz., the propagation of a laser pulse through a composite medium made up of alternating self-focusing and self-defocusing segments, both with normal group-velocity dispersion. During propagation in the first segment, the input pulse accumulates positive linear (ϕ_disp) and NL (ϕ_nl) phases due to the GVD and self-focusing effects, respectively. Consequently, the pulse is positively chirped. Subsequently, a partial compensation of ϕ_nl occurs, due to the self-defocusing effect, when the pulse propagates in the second segment. This compensation is not complete owing to the accumulation of ϕ_disp that occurs in both segments due to the action of the normal dispersion. Thus, instead of compensating the positive chirp, the propagation in the second segment generates pairs of temporal solitons that are temporally symmetric with respect to the pulse's center. Furthermore, in the course of the longer propagation, more temporal solitons are generated from the leading and trailing edges, and eventually from the central region of the pulse. In this work, we numerically solve the generalized NL Schrödinger equation to compare the efficiency in generating multiple temporal solitons by using the MTC method <cit.> and previous methods based on the use of photonic crystal fibers (PCFs) and fused silica waveguides (FSWs). The efficiency of the methods is compared according to the number of temporal solitons formed from the same input conditions, as well as considering the energy distribution in the multiple-soliton train during the fission and the ability of each soliton to maintain its properties unchanged in the course of the subsequent propagation. § NUMERICAL ANALYSIS OF THE MULTIPLE TEMPORAL SOLITONS PROCESS The generation and propagation of temporal solitons is studied in the framework of the generalized NL Schrödinger equation (NLSE), written as <cit.>: ∂ A/∂ z - (∑_n⩾ 2 β_n i^n+1/n! ∂^n A/∂ T^n) = iγ_0(1 + i/ω_0 ∂/∂ T)( (1- f_R)A|A|^2 + f_R A∫_0^∞ h_R(τ)|A(z,T- τ)|^2 dτ), where the term in the parentheses on the left-hand side of Eq. (1) describes the high-order dispersion, while the right-hand side of Eq. (1) describes the third-order NL optical effects, with γ_0 = ω_0 n_2(ω_0)/(c A_eff) being the third-order NL coefficient.
The temporal derivative in the first parenthesized factor on the right-hand side represents effects such as the self-steepening and optical shock formation. In the second factor, the fractional contribution of the delayed Raman response to the NL polarization is represented by f_R, with the first term representing the temporal SPM and instantaneous Raman contribution. The second term represents the convolution integral describing the delayed Raman response, which leads to effects such as the IRS and the soliton self-frequency shift. To perform the numerical solutions of Eq. (1), we use the fourth-order Runge-Kutta in the interaction picture (RK4IP) method, which was reported to be more accurate compared to other methods such as the conventional split-step Fourier method <cit.>. The simulations were started for input pulses with the hyperbolic secant shape, pulse duration time T_FWHM = 90 fs, and the input peak power varying from P_0 = 2.5 kW to 30 kW. For the purpose of the comparison the central pulse’s wavelength (λ_0) is chosen in such a way that the materials available for the realization of the MTC, PCF and FSW methods exhibit close absolute values of the GVDs with opposite signs. The NL parameters of the medium used in each procedure are described below. The generation and propagation of temporal solitons using the PCF, which implies the pulse propagation in the single medium (with length L =150 mm), was considered in the framework of the well-established model <cit.>. In this case, for the central pulse wavelength 850 nm, we take GVD coefficients in Eq. (1) as β_2 = -1.27·10^-2 ps^2m^-1 (the anomalous-dispersion regime), β_3 = 8.11·10^-5 ps^3m^-1, β_4 = -1.32·10^-7 ps^4m^-1, β_5 = 3.03 ·10^-10ps^5m^-1, β_6 = - 4.19·10^-13 ps^6m^-1, and β_7 = 2.57·10^-16 ps^7m^-1. The MTC method was also analyzed considering the same central pulse wavelength, λ_0 = 850 nm, but in the normal-dispersion regime, by using fused silica as the matrix for the two segments of the NL medium. By using pure fused silica in the first segment, a positive NL refractive index is obtained, with the GVD coefficients β_2 = + 3.22·10^-2ps^2m^-1, β_3 = 2.94·10^-5ps^3m^-1, β_4 = -1.67·10^-8ps^4m^-1, β_5 = 4.61·10^-11ps^5m^-1, β_6 = - 1.28·10^-13ps^6m^-1, and β_7 = 4.31·10^-16ps^7m^-1, which were calculated from the derivation of the Sellmeier expression <cit.>. On the other hand, the second segment of the NL medium includes metal nanoparticles in fused silica to achieve the self-defocusing behavior <cit.>, while maintaining the medium in the normal-dispersion regime. Another strategy may use birefringent NL crystals, such as LiNbO_3, BBO, and KTP, in the second segment of the NL medium, as negative second-order cascade effects are strong enough to compensate for excessive positive Kerr nonlinearity. In both cases, the second segment with an effective defocusing nonlinearity, n_2,eff < 0, and normal dispersion is used <cit.>. For completeness, we also consider the temporal soliton generation procedure that uses a 150 mm length FSW. In this case, it was not possible to obtain a normal-dispersion regime. However, for comparison with the MTC method, the central pulse wavelength of 1585 nm is chosen for use in the framework of the FSW method so that the absolute value of GVD is the same considered in both methods. 
Thus, from the derivation of the Sellmeier expression, for fused silica at 1585 nm, we obtain: β_2 = -3.22·10^-2ps^2m^-1, β_3 = 16.56·10^-5ps^3m^-1, β_4 = - 5.63·10^-7ps^4m^-1, β_5 = 2.73·10^-9ps^5m^-1, β_6 = - 1.61·10^-11ps^6m^-1, and β_7 = 1.11·10^-13ps^7m^-1 <cit.>. For correct comparison between the three methods, |γ_0|= 0.045 W^-1 m^-1 is adopted as the magnitude of the NL coefficient of all involved media, which corresponds to the reference value reported for a PCF <cit.>. In the PCF- and FSW-based methods, one has γ_0 > 0 and the NL medium length L = 150 mm. However, in the framework of the MTC method, we use the first segment of length 10 mm with the NL coefficient γ_0^(1) = |γ_0|, and the second segment of length 140 mm with γ_0^(2) = -|γ_0| . The Raman parameters used to model the influence of the IRS were obtained from Refs. <cit.>, as all the NL media considered in this work are based on fused silica. Thus, the fractional contribution of the delayed Raman response to the nonlinear polarization is taken as f_R = 0.18, and the Raman response function is h_R(τ) = τ_1 (τ_1^-2 + τ_2^-2) exp(-τ/τ_2)sin(τ/τ_1), where τ_1 = 12.2 fs and τ_2 = 32.0 fs. § RESULTS AND DISCUSSIONS Figure 1 shows the generation and propagation of solitons produced by the temporal evolution of the pulse intensity, as obtained from simulations of Eq. (1) adjusted to modeling the PCF and FSW methods. Figures 1(a) and 1(c), obtained for input peak powers 5 kW, show that both methods generate the first soliton after the incident pulse has traveled the distance ≈ 20 mm (≈ 25 mm for PCF, once we have |β_2^(PCF)|<|β_2^(FSW)| ). It is clearly seen that the first soliton carries a large part of the input beam power, with the slight excessive power carried by dispersive waves ejected around the soliton. For higher input powers, the energy remaining after the creation of the first soliton drives the generation of secondary solitons, i.e., the higher-order soliton breaks up into fundamental solitons through fission. For instance, Figs. 1(b) and 1(d) show that, respectively, five or three temporal solitons are generated by the PCF or FSW method, for input power 25 kW. Although the PCF method has been shown to be more efficient than the FSW, both demonstrate low peak powers of the secondary solitons, as well as power loss in the course of propagation under the action of the strong IRS. Figure 2 shows a pronounced trend for all solitons to lose their peak powers in the course of propagation over long distances. By analyzing the soliton propagation between 50 mm and 150 mm, it is observed that the peak power of the first soliton is reduced by ∼ 39 % (∼ 41 %), while the second soliton experiences the decrease in the peak power of ∼ 51 % (∼ 37 %), when generated from the input power of 25 kW, by using the PCF (FSW) method. Due to the mechanism by which multiple-soliton trains are generated, the first soliton tends to appear in the temporally isolated form, being subject to power loss via IRS due to the generation of optical phonons, while the central wavelength of the soliton features a red shift <cit.>. This loss mechanism tends to limit the propagation of robust solitons over long distances, as the NL effects is weakening. Secondary solitons are also influenced by the IRS. 
However, as the secondary solitons are generated in the region where the energy remaining after the creation of the first soliton is concentrated, these solitons are more likely to undergo collisions with each other, giving rise to energy transfer between them. Contrary to the results produced by the PCF and FSW methods, the solution of the generalized NLSE corresponding to the MTC method provides interesting results for the generation and propagation of temporal solitons over long distances. As shown in Fig. 3, the MTC method begins by generating a pair of solitons at edges of the pulse, labeled as the leading-edge soliton (S_LE) and one at the trailing edge (S_TE). Subsequently, new pairs of secondary solitons are generated closer to the central region of the pulse. As expected, the number of generated temporal solitons increases with the increase of the input peak power. However, remarkably, the secondary solitons show powers similar to that of the first soliton, indicating more efficient energy distribution during the soliton fission. While the MTC method is efficient in generating a large number of high-power temporal solitons, we focus here on the second (2_nd) and third (3_rd) S_LE and S_TE. A notable difference between the three methods studied here is the dispersion regime in which solitons are generated. In the MTC method, the normal-dispersion regime provides acceleration of each soliton, as IRS causes the red shift of its central wavelength. On the contrary, the anomalous-dispersion regime in the PCF and FSW causes deceleration of the soliton in the course of propagation. This difference is the reason why the solitons in Figs. 1 and 3 propagate in opposite temporal directions. Furthermore, the soliton acceleration induced by the MTC method creates a propitious configuration to observe phenomena ranging from soliton collisions to the Newton’s cradle (NC) <cit.>. The latter phenomenon can be clearly seen in Fig. 3(b), where the second S_TE collides with the first S_TE, and then collides again with the newly formed solitons in the central region of the pulse. Finally, as a result of the multiple inelastic collisions, the NC soliton (S_nc) is ejected with a higher peak power. A similar dynamical behavior, but with a larger number of collisions, is observed in the case of formation of the NC soliton at higher input powers, see Fig. 3(c). Interestingly, in Fig.3(c) we observe the NC effect in the direction opposite to that in Fig.3(b). The central region of the pulse generates a soliton which slows down and collides, consecutively, with the third, second, and first TE solitons, and then escapes. The dependence of the soliton peak power on the propagation distance, shown in Figs. 3(d,e), also shows advantages offered by the MTC method, in comparison to the PCF and FSW. In the case of the MTC method, the initial energy distribution is not only more efficient to generate multiple-soliton trains with high powers during the soliton fission, but it is also observed that each soliton has the ability to maintain its peak power during the propagation. This happens due to the cooperative process of energy exchange with dispersive waves generated during the soliton fission, or by collisions with other neighboring solitons, that allows the temporal solitons to travel longer distances compared to the case of the PCF and FSW. For example, Fig. 3(d) shows that the first S_TE starts its propagation with a lower peak power, compared to the pair of the first S_LE. 
However, as the first S_TE accelerates, the central region of the pulse transfers energy to it (see Fig. 3(a)), making its peak power comparable to that of the first S_LE. On the other hand, for higher input peak powers, the energy exchange through multiple collisions of the first S_TE with the secondary S_TE (Fig. 3(b)), or with the soliton NC (Fig. 3(b,c)) allows its peak power to increase or maintain its value over long propagation distances. These energy exchange and feedback processes are responsible for the fact that S_TE demonstrates higher power than S_LE for longer propagation distances, despite the fact that the initial energy distribution was opposite. In addition, the efficiency of the PCF, FSW and MTC methods can be compared by examining the number of solitons generated with the increase of the input peak power, as shown in Fig. 4. For the same input pulse, the MTC method shows the ability to generate a larger number of temporal solitons for all peak powers, while the PCF and FSW methods exhibit poorer results. Note that, for the input peak power higher than 15 kW, the number of solitons generated by the MTC method is almost double compared to the PCF and FSW. Thus, the MTC method emerges as a powerful tool to generate multiple temporal solitons, with high peak power, which is highly desired for long-propagation distances. However, the efficiency of the MTC declines considerably when the goal is the generation of multiple solitons in the visible regime, as the method seems to be ineffective for generating supercontinuum spectra that reach the visible range. As shown in Fig. 5, the PCF method, widely used for soliton generation in the visible regime <cit.>, provides a supercontinuum spectrum ranging from 480 nm to 1360 nm, while the MTC and FSW are limited to the spectral region from 720 nm to 2600 nm. Thus, the spectral region where the MTC method starts to work efficiently corresponds to the near-infrared, taking into regard that the IRS contribution shifts the central wavelength of the solitons up to 1200 nm. Another interesting feature of the spectral analysis shown in Fig. 5 (c) is that the observed frequency shift in the region between 1200 nm and 2600 nm is not due to the IRS. Such giant near-infrared emission (> 1200 nm) seems to be correlated with the accumulation of soliton collisions, similarly to Refs. <cit.>. Further studies are underway to better understand the present setup in this spectral regime. § CONCLUSIONS In summary, we demonstrate, through systematic numerical calculations of the generalized nonlinear Schrödinger equation, the high efficiency that the multiple temporal compression (MTC) method provides for the generation of multiple temporal solitons from a single optical pulse, in comparison to the well-known PCF and FSW methods. The numerical results show that the MTC method allows to generate twice as many solitons as the conventional ones, for the input peak powers studied here, with each soliton exhibiting a high power that remains approximately constant during the propagation. While the efficiency offered by the MTC technique decreases in the visible region, it appears as a useful tool for the studies of multiple soliton collisions in the near-infrared region, as well as for demonstrating the realization of the multi-soliton Newton’s cradle. 
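For readers who wish to reproduce qualitative features of these simulations, a minimal sketch of one RK4IP propagation step is given below for a stripped-down version of Eq. (1) that keeps only second-order dispersion and the instantaneous Kerr term; the higher-order dispersion, self-steepening, and delayed Raman response of the full model are omitted, the Fourier-sign convention follows numpy's fft, and all grid and parameter values are illustrative rather than those used for the figures.

import numpy as np

def rk4ip_step(A, h, w, beta2, gamma):
    # One RK4IP step (after Hult, J. Lightwave Technol. 25, 3770) for the
    # reduced equation dA/dz = -i*(beta2/2)*d2A/dT2 + i*gamma*|A|^2*A.
    D = np.exp(1j * (beta2 / 2) * w**2 * h / 2)      # half-step dispersion factor
    disp = lambda u: np.fft.ifft(D * np.fft.fft(u))  # apply exp(h/2 * D_hat)
    N = lambda u: 1j * gamma * np.abs(u)**2 * u * h  # nonlinear term times h
    A_I = disp(A)
    k1 = disp(N(A))
    k2 = N(A_I + k1 / 2)
    k3 = N(A_I + k2 / 2)
    k4 = N(disp(A_I + k3))
    return disp(A_I + k1 / 6 + k2 / 3 + k3 / 3) + k4 / 6

# Illustrative propagation of a 90-fs sech pulse over 150 mm in 0.1-mm steps,
# with gamma and beta2 of the same order as the PCF case of Section 2.
Nt, Twin = 2**13, 8e-12
T = (np.arange(Nt) - Nt // 2) * (Twin / Nt)
w = 2 * np.pi * np.fft.fftfreq(Nt, Twin / Nt)
T0, P0 = 90e-15 / 1.763, 5e3
A = np.sqrt(P0) / np.cosh(T / T0)
for _ in range(1500):
    A = rk4ip_step(A, 1e-4, w, beta2=-1.27e-26, gamma=0.045)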
§ ACKNOWLEDGMENTS This work was supported by the Brazilian agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Grant: 431162/2018-2) and the National Institute of Photonics (INCT program) - Grant: 465.763/2014, Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco (FACEPE), and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). The doctoral scholarship of A.C.A. Siqueira was provided by the CNPq. G. Palacios thanks a fellowship from CNPq Process No. 381191/2022-2. The work of B.A. Malomed was supported, in part, by the Israel Science Foundation through grant No. 1695/22. 00 sysoliatin2007solitonA.A. Sysoliatin, A.K. Senatorov, A.I. Konyukhov, L.A. Melnikov, V.A. Stasyuk, Soliton fission management by dispersion oscillating fiber, Opt. Express 15 (2007) 16302-16307. https://doi.org/10.1364/OE.15.016302. tai1988fissionK. Tai, A. Hasegawa, N. Bekki, Fission of optical solitons induced by stimulated Raman effect, Opt. Lett. 13 (1988) 392-394. https://doi.org/10.1364/OL.13.000392. driben2013newtonR. Driben, B.A. Malomed, A.V. Yulin, D.V. Skryabin, Newton's cradles in optics: From N-soliton fission to soliton chains, Phys. Rev. A 87 (2013) 063808. https://doi.org/10.1103/PhysRevA.87.063808. braud2016solitonizationF. Braud, M. Conforti, A. Cassez, A. Mussot, A. Kudlinski, Solitonization of a dispersive wave, Opt. Lett. 41 (2016) 1412-1415. https://doi.org/10.1364/OL.41.001412. gordon1986theoryJ.P. Gordon, Theory of the soliton self-frequency shift, Opt. Lett. 11 (1986) 662-664. https://doi.org/10.1364/OL.11.000662. mitschke1986discoveryF.M. Mitschke, L.F. Mollenauer, Discovery of the soliton self-frequency shift, Opt. Lett. 11 (1986) 659-661. https://doi.org/10.1364/OL.11.000659. xiang2011controllableY. Xiang, X. Dai, S. Wen, J. Guo, D. Fan, Controllable Raman soliton self-frequency shift in nonlinear metamaterials, Phys. Rev. A. 84 (2011) 033815. https://doi.org/10.1103/PhysRevA.84.033815. skryabin2003solitonD.V. Skryabin, F. Luan, J.C. Knight, P.St.J. Russell, Soliton self-frequency shift cancellation in photonic crystal fibers, Science 301 (2003) 1705-1708. https://doi.org/10.1126/science.1088516. knight1996allJ.C. Knight, T.A. Birks, P.St.J. Russell, D.M. Atkin, All-silica single-mode optical fiber with photonic crystal cladding, Opt. Lett. 21 (1996) 1547-1549. https://doi.org/10.1364/OL.21.001547. birks1997endlesslyT.A. Birks, J.C. Knight, P.St.J. Russell, Endlessly single-mode photonic crystal fiber, Opt. Lett. 22 (1997) 961-963. https://doi.org/10.1364/OL.22.000961. russell2003photonicP. Russell, Photonic crystal fibers, Science 299 (2003) 358-362. https://doi.org/10.1126/science.1079280. liu2012allL. Liu, Q. Tian, M. Liao, D. Zhao, G. Qin, Y. Ohishi, W. Qin, All-optical control of group velocity dispersion in tellurite photonic crystal fibers, Opt. Lett. 37 (2012) 5124-5126. https://doi.org/10.1364/OL.37.005124. dudley2002numericalJ.M. Dudley, S. Coen, Numerical simulations and coherence properties of supercontinuum generation in photonic crystal and tapered optical fibers, IEEE J. Sel. Top. Quantum Electron. 8 (2002) 651-659. https://doi.org/10.1109/JSTQE.2002.1016369. hult2007fourthJ. Hult, A fourth-order Runge–Kutta in the interaction picture method for simulating supercontinuum generation in optical fibers, J. Lightwave Technol. 25 (2007) 3770-3775. https://doi.org/10.1109/JLT.2007.909373. agrawal2013nonlinearG.P. Agrawal, Nonlinear Fiber Optics, fifth ed., Academic Press, Elsevier, 2013. dudley2006supercontinuumJ.M. Dudley, G. 
Genty, S. Coen, Supercontinuum generation in photonic crystal fiber, Rev. Mod. Phys. 78 (2006) 1135. https://doi.org/10.1103/RevModPhys.78.1135. hasegawa1973transmissionA. Hasegawa, F. Tappert, Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers. I. Anomalous dispersion, Appl. Phys. Lett. 23 (1973) 142-144. https://doi.org/10.1063/1.1654836. mollenauer1988demonstrationL.F. Mollenauer, K. Smith, Demonstration of soliton transmission over more than 4000 km in fiber with loss periodically compensated by Raman gain, Opt. Lett. 13 (1988) 675-677. https://doi.org/10.1364/OL.13.000675. hasegawa1995solitonsA. Hasegawa, Y. Kodama, Solitons in optical communications, Oxford University Press on Demand, 1995. agrawal2012fiberG.P. Agrawal, Fiber-optic communication systems, fourth ed., John Wiley & Sons, 2012. Iannone1998nonlinearE. Iannone, F. Matera, A. Mecozzi, M. Settembre, Nonlinear Optical Communication Networks, Wiley, 1998. agrawal2001applicationsG.P. Agrawal, Applications of nonlinear fiber optics, Elsevier, 2001. mollenauer2006solitonsL.F. Mollenauer, J.P. Gordon, Solitons in optical fibers: fundamentals and applications, Elsevier, 2006. antikainen2012phaseA. Antikainen, M. Erkintalo, J.M. Dudley, G. Genty, On the phase-dependent manifestation of optical rogue waves, Nonlinearity 25 (2012) R73. https://doi.org/10.1088/0951-7715/25/7/R73. erkintalo2010giantM. Erkintalo, G. Genty, J.M. Dudley, Giant dispersive wave generation through soliton collision, Opt. Lett. 35 (2010) 658-660. https://doi.org/10.1364/OL.35.000658. husakou2001supercontinuumA.V. Husakou, J. Herrmann, Supercontinuum generation of higher-order solitons by fission in photonic crystal fibers, Phys. Rev. Lett. 87 (2001) 203901. https://doi.org/10.1103/PhysRevLett.87.203901. bose2016implicationsS. Bose, A. Sahoo, R. Chattopadhyay, S. Roy, S.K. Bhadra, G.P. Agrawal, Implications of a zero-nonlinearity wavelength in photonic crystal fibers doped with silver nanoparticles, Phys. Rev. A 94 (2016) 043835. https://doi.org/10.1103/PhysRevA.94.043835. arteaga2018solitonF.R. Arteaga-Sierra, A. Antikainen, G.P. Agrawal, Soliton dynamics in photonic-crystal fibers with frequency-dependent Kerr nonlinearity, Phys. Rev. A 98 (2018) 013830. https://doi.org/10.1103/PhysRevA.98.013830. bose2016studyS. Bose, R. Chattopadhyay, S. Roy, S. K. Bhadra, Study of nonlinear dynamics in silver-nanoparticle-doped photonic crystal fiber, J. Opt. Soc. Am. B 33 (2016) 1014-1021. https://doi.org/10.1364/JOSAB.33.001014. bose2018dispersiveS. Bose, R. Chattopadhyay, S.K. Bhadra, Dispersive shock mediated resonant radiations in defocused nonlinear medium, Opt. Commun. 412 (2018) 226-229. https://doi.org/10.1016/j.optcom.2017.12.016. driben2010solitaryR. Driben, J. Herrmann, Solitary pulse propagation and soliton-induced supercontinuum generation in silica glasses containing silver nanoparticles, Opt. Lett. 35 (2010) 2529-2531. https://doi.org/10.1364/OL.35.002529. zhao2022effectsS. Zhao, R. Guo, Y. Zeng, Effects of frequency-dependent Kerr nonlinearity on higher-order soliton evolution in a photonic crystal fiber with one zero-dispersion wavelength, Phys. Rev. A 106 (2022) 033516. https://doi.org/10.1103/PhysRevA.106.033516. skryabin2010colloquiumSkryabin, D. & Gorbach, A. Colloquium: Looking at a soliton through the prism of optical supercontinuum. Rev. Mod. Phys. 82 (2010) 1287. https://doi.org/10.1103/RevModPhys.82.1287. siqueira2023generationA.C.A. Siqueira, E.L Falcão-Filho, B.A Malomed, & C.B de Araújo. 
Generation of multiple ultrashort temporal solitons in a third-order nonlinear composite medium with self-focusing and self-defocusing nonlinearities. Physical Review A. 107, 063519 (2023). https://doi.org/10.1103/PhysRevA.107.063519 siqueira2022generationA.C.A. Siqueira, B.A. Malomed, C.B. de Araújo, E.L. Falcão-Filho, Generation of multiple ultrashort solitons in heterogeneous medium with self-focusing and defocusing nonlinearities, Latin America Optics And Photonics Conference, pp. M4A-3 (2022). https://doi.org/10.1364/LAOP.2022.M4A.3. malomed2006solitonB.A. Malomed, Soliton management in periodic systems, Springer, 2006. https://doi.org/10.1007/0-387-29334-5 malitson1965interspecimenI.H. Malitson, Interspecimen comparison of the refractive index of fused silica, J. Opt. Soc. Am. 55 (1965) 1205-1209. https://doi.org/10.1364/JOSA.55.001205. zhang2017nonlinearY. Zhang, Y. Wang, Nonlinear optical properties of metal nanoparticles: a review, RSC Adv. 7 (2017) 45129-45144. https://doi.org/10.1039/C7RA07551K. reyna2017highA.S. Reyna, C.B. de Araújo, High-order optical nonlinearities in plasmonic nanocomposites—a review, Adv. Opt. Phot. 9 (2017) 720-774. https://doi.org/10.1364/AOP.9.000720. kassab2018metalL.R.P. Kassab, C.B. de Araújo, Metal nanostructures for photonics, Elsevier, 2018. reyna2022beyondA.S. Reyna, C.B. de Araújo, Beyond third-order optical nonlinearities in liquid suspensions of metal-nanoparticles and metal-nanoclusters. J. Opt. 24 (2022) 104006. https://doi.org/10.1088/2040-8986/ac8b94. desalvo1992selfR. DeSalvo, D.J. Hagan, M. Sheik-Bahae, G. Stegeman, E.W.V. Stryland, H. Vanherzeele, Self-focusing and self-defocusing by cascaded second-order effects in KTP, Opt. Lett. 17 (1992) 28-30. https://doi.org/10.1364/OL.17.000028. ashihara2002solitonS. Ashihara, J. Nishina, T. Shimura, K. Kuroda, Soliton compression of femtosecond pulses in quadratic media, J. Opt. Soc. Am. B 19 (2002) 2505-2510. https://doi.org/10.1364/JOSAB.19.002505. bache2008limitsM. Bache, O. Bang, W. Krolikowski, J. Moses, F.W. Wise, Limits to compression with cascaded quadratic soliton compressors. Opt. Express 16 (2008) 3273-3287. https://doi.org/10.1364/OE.16.003273. guo2014fewH. Guo, X. Zeng, B. Zhou, M. Bache, Few-cycle solitons and supercontinuum generation with cascaded quadratic nonlinearities in unpoled lithium niobate ridge waveguides, Opt. Lett. 39 (2014) 1105-1108. https://doi.org/10.1364/OL.39.001105. vsuminas2017secondR. Šuminas, G. Tamošauskas, V. Jukna, A. Couairon, A. Dubietis, Second-order cascading-assisted filamentation and controllable supercontinuum generation in birefringent crystals, Opt. Express 25 (2017) 6746-6756. https://doi.org/10.1364/OE.25.006746. conforti2013extremeM. Conforti, F. Baronio, Extreme high-intensity and ultrabroadband interactions in anisotropic β-BaB_2O_4 crystals, J. Opt. Soc. Am. B 30 (2013) 1041-1047. https://doi.org/10.1364/JOSAB.30.001041. erkintalo2010experimentalM. Erkintalo, G. Genty, J.M. Dudley, Experimental signatures of dispersive waves emitted during soliton collisions, Opt. Express 18 (2010) 13379-13384. https://doi.org/10.1364/OE.18.013379. deng2016trappingZ. Deng, X. Fu, J. Liu, C. Zhao, S. Wen, Trapping and controlling the dispersive wave within a solitonic well, Opt. Express 24 (2016) 10302-10312. https://doi.org/10.1364/OE.24.010302
http://arxiv.org/abs/2307.00349v2
20230701142344
Unbalanced Growth, Elasticity of Substitution, and Land Overvaluation
[ "Tomohiro Hirano", "Alexis Akira Toda" ]
econ.TH
[ "econ.TH" ]
We study the long-run behavior of land prices when land plays the dual role of factor of production and store of value. In modern economies where technological progress is faster in non-land sectors, when the elasticity of substitution in production exceeds 1 at high input levels (which always holds if non-land factors do not fully depreciate), unbalanced growth occurs and land becomes overvalued on the long-run trend relative to the fundamental value defined by the present value of land rents. Around the trend, land prices exhibit recurrent stochastic fluctuations, with expansions and contractions in the size of land overvaluation. Keywords: asset price, elasticity of substitution, land, unbalanced growth. JEL codes: D53, G12, O41. § INTRODUCTION As economies develop and per capita incomes rise, the importance of land as a factor of production diminishes.[<cit.> documents that the GDP share of agriculture is lower in countries with higher per capita incomes. She also notes that the employment share of agriculture decreases with incomes, both across countries and time. <cit.> documents that the employment share of agriculture in U.S. has declined from about 80% to below 5% over the past 200 years.] This is partly because people face biological constraints regarding the amount of food they can consume (where land produces agricultural products) or the amount of leisure time they can spend (where land produces amenities like tennis courts and national parks). Although people living in modern capitalistic societies have tremendously benefited from technological progress over the past decades such as the development of computers, Internet, smartphones, and electric vehicles, introspection suggests that our dining and outdoor experiences—the quality of “land-intensive products”—have not changed much. On the other hand, land also has an important role as a scarce means of savings and has significant value as a financial asset.[According to <cit.>, among 29 OECD countries, real estate (owner-occupied housing and secondary real estate) comprises more than 50% of household wealth in 27 countries. See also <https://www.oecd.org/housing/policy-toolkit/data-dashboard/wealth-distribution/>.] Land has a few characteristics that make it suitable as a store of value compared to other means of savings such as gold or cryptocurrency. First, unlike cryptocurrency, land has an intrinsic value because it can be used as a factor of production in agriculture, construction, housing, and leisure. Second, unlike gold (which is chemically homogeneous), each land parcel is immobile and unique and hence property rights are well-defined, which makes it difficult to steal. Third, relative to durable goods such as vehicles, land is more durable as it cannot be destroyed absent natural disasters, sea level rise, and pollution. This paper theoretically studies the long-run behavior of land prices in modern economies where the importance of land as a factor of production diminishes, yet, land remains to play a significant role as a store of value.
In a plausible economic model with land and aggregate risk, we establish a theorem—Land Overvaluation Theorem—showing the tight link between unbalanced productivity growth, elasticity of substitution between production factors, and overvaluation of land, meaning that the equilibrium land price exceeds its fundamental value defined by the present value of land rents. To illustrate the key mechanism of how land overvaluation emerges on the long-run trend, we first present two example models. Throughout this paper, we employ a standard two-period overlapping generations (OLG) model with land, where land plays the dual role of factor of production and store of value. The two-period OLG model is the simplest model with heterogeneous agents—there are just the young and the old. It illustrates speculative behaviour of heterogeneous agents trading assets with each other—individuals buying an asset largely on the basis of beliefs (here assumed to be rational) of what they can sell it for. The basis of heterogeneity is that the young buy land from the old, in anticipation of receiving rents and selling land to the next generation before exiting the market. In the first example, we consider a two-sector growth economy where one (“tech”) sector uses skilled labor (human capital) as the primary input for production such as technology, finance, and information and communication, while the other (“land” sector) uses unskilled labor and land as the primary inputs such as agriculture and construction. Specifically, the production function is linear in the tech sector and Cobb-Douglas in the land sector. There may or may not be labor mobility between different sectors and we examine both cases. Importantly and realistically, productivity growth rates are different across the two sectors. Unlike standard models with balanced growth, we suppose that the tech sector will eventually grow faster, exhibiting unbalanced growth. In this setting, land prices will grow, pulled by the savings motive of the young and the productivity growth in the tech sector. On the other hand, land rents will not grow as fast because productivity growth is lower in the land sector. This implies that the land price will eventually exceed the present value of land rents (its fundamental value), generating land overvaluation, and a backward induction argument shows that land is always overvalued. The second example is a one-sector growth economy. To illustrate the role of the elasticity of substitution, we employ a standard constant elasticity of substitution (CES) production function where labor and land are used as inputs, with factor-augmenting technological progress. As in the first example, we assume that labor productivity grows faster than land productivity, which generates land overvaluation. The intuition is as follows. Along the equilibrium path, the land price increases together with wages, whose growth rate will be the same as labor productivity growth. On the other hand, the growth rate of land rents will be suppressed if the elasticity of substitution between land and labor exceeds 1, in which case the price-rent ratio will rise and land will be overvalued. To be precise, the ratio increases over time depending on whether the elasticity of substitution exceeds 1 at high input levels, not necessarily globally. Motivated by these examples, we consider an abstract stochastic overlapping generations model with land and establish the Land Overvaluation Theorem. 
We identify economic conditions under which land overvaluation will necessarily emerge in equilibrium. Let us denote the labor and land productivities at time t by A_Ht and A_Xt, respectively. Let us also denote by σ (a lower bound of) the elasticity of substitution between land and labor at sufficiently high input levels and assume σ>1. The main result of this paper, Theorem <ref>, shows that if 𝔼_0∑_t=0^∞ (A_Ht/A_Xt)^1/σ-1<∞, where 𝔼_0 denotes the expectation conditional on time-0 information, then land is overvalued in equilibrium. Noting that σ>1 and hence 1/σ-1<0, this land overvaluation condition holds whenever labor productivity A_Ht grows faster than land productivity A_Xt in the long run, i.e., unbalanced growth occurs. The intuition is the same as that for the two examples just described, which are special cases of this theorem. In Section <ref>, we justify our assumption of σ>1 in several ways based on both empirical and theoretical grounds. To the best of our knowledge, this theorem is the first that proves land overvaluation in an economy with aggregate uncertainty. There are three important implications to be drawn from our Land Overvaluation Theorem. First, our analysis illustrates the key mechanism of how land overvaluation emerges, where unbalanced growth and elasticity of substitution play a crucial role. To our knowledge, the link between the elasticity of substitution and asset overvaluation is new.[The importance of the intertemporal elasticity of substitution in macro-finance models is well known <cit.>. The analogy here is only superficial because * the relevant elasticity of substitution in our model is between production factors in the production function, not between consumption in different periods in the utility function, and * macro-finance models typically assume outright that the asset price equals its fundamental value. ] Second, unlike the usual perspective on land overvaluation (sometimes called land bubbles) as short-run phenomena with boom-bust cycles, our analysis shows land overvaluation on the long-run trend in the process of economic development. In reality, as economies develop, structural transformation occurs from the land-intensive agricultural economy to the labor- or knowledge-intensive economy. During this transition, while the importance of land as a factor of production would diminish, as long as land remains important as a store of value, land necessarily becomes overvalued. Third, our theorem in an economy with aggregate risk also provides a new insight into short-term fluctuations that deviate from the long-run trend. When productivities of the economy swing up and down, the land price also fluctuates. In standard asset pricing models, these valuations and fluctuations always reflect fundamentals. In contrast, our analysis shows that land is always overvalued, associated with expansions and contractions in the size of overvaluation that may appear to be the emergence and collapse of large land bubbles. Our model provides a theoretical foundation for recurrent stochastic bubbles. §.§ Related literature As in <cit.>, <cit.>, <cit.>, and <cit.>, we employ a standard two-period OLG model with land where land plays the dual role of factor of production and store of value. Our paper is different because we focus on asset pricing, unbalanced growth, and land overvaluation. As in <cit.> and the large subsequent literature, we study asset pricing in an economy with aggregate uncertainty.
In this literature, it is well known that there is a fundamental difficulty in generating asset overvaluation (sometimes called asset bubbles) in dividend-paying assets including the <cit.> tree model, even if dividends are slightly positive.[See, for instance, <cit.> for details.] Since land in our paper is used as a factor of production yielding positive rents, land may be interpreted as a variant of the Lucas tree. In this sense, our paper identifies conditions under which the tree is necessarily overvalued as the unique equilibrium outcome.[Of course, it is well known since <cit.> and <cit.> that for zero-dividend assets like fiat money, overvaluation may occur, in which case there usually exist a continuum of monetary equilibria. Our paper is about asset overvaluation in models of a Lucas tree type asset with positive dividends, not about pure bubbles with no dividends, which are fundamentally different.] Perhaps because of this difficulty, progress in macro-finance models that describe realistic asset overvaluation in stocks, land, and housing has been slow. Our paper contributes towards this direction. Concerning unbalanced growth, <cit.> points out the implications for economic development when different sectors have different productivity growth rates. <cit.> consider a two-sector OLG model with uneven productivity growth rates across the capital-intensive (Solow) sector and the land-intensive (Malthus) sector and argue that land becomes unimportant as a factor of production as the economy develops. <cit.> show in a two-sector general equilibrium model that differences in factor proportions across different sectors combined with capital deepening leads to unbalanced growth. The elasticity of substitution between the two sectors play a key role for growth dynamics. <cit.>, <cit.>, <cit.>, and <cit.> use non-homothetic preferences to generate unbalanced growth. A crucial difference between our work and this literature is that we show the tight theoretical link between unbalanced growth, elasticity of substitution, and land overvaluation, while the literature abstracts from asset pricing. Concerning land overvaluation, several papers such as <cit.> argue that land or housing can be overvalued under incomplete markets and financial innovation because they serve as collateral that can be seized upon default by borrowers. Our model is different because markets are complete and frictionless. In the model of <cit.>, which builds on <cit.>, land is intrinsically useless but may have a positive value (hence be overvalued) because it is a scarce means of savings under a low interest rate environment. However, our model has substantial differences. First, in <cit.>, land is intrinsically useless, which leads to equilibrium indeterminacy including equilibria in which land is worthless. In contrast, in our model, land is a productive asset used as an input and hence necessarily has a positive price, with the size of overvaluation fluctuating with productivities in the unique equilibrium. Second, and more importantly, we highlight the importance of unbalanced growth and elasticity of substitution for generating land overvaluation, whose relevance is obscured in <cit.> because land rents are zero and he focuses on the steady state. § AN EXAMPLE TWO-SECTOR OLG MODEL To clearly present the tight connection between unbalanced growth and land overvaluation, we start the discussion with a simple two-sector overlapping generations (OLG) model that admits a unique equilibrium in closed-form. 
§.§ Model Time is discrete, runs forever, and is indexed by t=0,1,…. Preferences At each time t, a constant mass H of agents are born, who live for two periods and derive utility (1-β)log y_t+βlog z_t+1 from consumption (y_t,z_t+1) when young and old. Each period, the young are endowed with one unit of labor, while the old are not. Masses H_1 and H_2=H-H_1 of agents are skilled and unskilled. At t=0, there is a mass H of initial old agents who only care about their consumption z_0. The initial old is endowed with a unit supply of land, which is durable and non-reproducible. Technologies There are two production sectors denoted by j=1,2. Sector 1 is a knowledge-intensive industry such that skilled labor is the primary input for production, such as technology, finance, and information and communication. Sector 2 is a land-intensive industry such that both unskilled labor and land are inputs for production, such as agriculture and construction. The time t production function of sector j is F_jt(H,X), where H,X denote the labor and land inputs. For simplicity, we suppose that technologies in sectors 1 and 2 are linear and Cobb-Douglas, respectively: F_1t(H,X) =A_1tH, F_2t(H,X) =A_2tH^α X^1-α, where A_jt>0 denotes the total factor productivity in sector j at time t and α∈ (0,1) is the labor share of sector 2. Equilibrium As usual, the competitive equilibrium is characterized by utility maximization, profit maximization, and market clearing. Without loss of generality, assume that each sector has one representative firm. Firm j chooses labor and land inputs H,X to maximize the profit F_jt(H,X)-w_jtH-r_tX, where w_jt denotes the wage in sector j. Note that because the two sectors employ different types of labor (skilled and unskilled), the wages differ. Noting that labor supply is exogenous at (H_1,H_2) and land supply is 1, using the functional form of the production functions (<ref>), profit maximization implies the wages and rent w_1t =A_1t, w_2t =α A_2tH_2^α-1, r_t =(1-α)A_2tH_2^α. Define the aggregate labor income by w_t w_1tH_1+w_2tH_2=A_1tH_1+α A_2tH_2^α. Because agents have identical homothetic preferences, demand aggregation holds and the aggregate consumption of the young and the old (y_t,z_t+1) and land holdings x_t maximize utility (<ref>) subject to the budget constraints Young: y_t+P_tx_t =w_t, Old: z_t+1 =(P_t+1+r_t+1)x_t, where P_t is the land price and we choose the consumption good as the numéraire. Clearly, the two budget constraints (<ref>) can be combined into one as y_t+1/R_tz_t+1=w_t, where R_t (P_t+1+r_t+1)/P_t denotes the gross risk-free rate between time t and t+1. Applying the familiar Cobb-Douglas formula to the combined budget constraint (<ref>), the consumption of the young is y_t=(1-β)w_t. Using the budget constraint of the young (<ref>) and the land market clearing condition x_t=1 (the old exit the economy so the young must hold the entire land), we obtain the land price P_t=P_tx_t=β w_t=β (A_1tH_1+α A_2tH_2^α). Therefore we obtain the following proposition. There exists a unique equilibrium, which is characterized by (<ref>), (<ref>), (<ref>), and y_t=(1-β)w_t. §.§ Unbalanced growth and land overvaluation We now study conditions under which land is overvalued. For simplicity, suppose that productivity growth is exponential, so A_jt=G_j^t for some G_j>0. Using (<ref>) and (<ref>), both the land price and rent grow exponentially: P_t =β (G_1^tH_1+α G_2^tH_2^α), r_t =(1-α)G_2^tH_2^α. 
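The closed-form equilibrium above is easy to evaluate numerically. The following short Python sketch computes the wages, rent, aggregate labor income, and land price from (<ref>)–(<ref>) under exponential productivity growth A_jt=G_j^t; the parameter values are illustrative assumptions, not a calibration.

import numpy as np

# Illustrative (non-calibrated) parameters: saving weight beta, labor share alpha,
# skilled/unskilled labor supplies, and sectoral productivity growth factors G1 > G2.
beta, alpha = 0.5, 0.8
H1, H2 = 1.0, 1.0
G1, G2 = 1.05, 1.02
t = np.arange(100)

A1, A2 = G1**t, G2**t
w1 = A1                             # skilled wage w_1t = A_1t
w2 = alpha * A2 * H2**(alpha - 1)   # unskilled wage w_2t
r = (1 - alpha) * A2 * H2**alpha    # land rent r_t
w = w1 * H1 + w2 * H2               # aggregate labor income w_t
P = beta * w                        # land price P_t = beta * w_t

# Growth factor of the price-rent ratio: approaches G1/G2 > 1 when G1 > G2,
# so the price-rent ratio grows without bound along the equilibrium path.
print((P[-1] / r[-1]) / (P[-2] / r[-2]))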
Therefore if G_1>G_2, then the land price grows at a faster rate than the rent and the dividend yield decreases, which suggests that land is overvalued. We make this argument more formal. Define the gross risk-free rate between time t-1 and t by the return on land R_t-1=P_t+r_t/P_t-1=β G_1^tH_1+(βα+1-α)G_2^tH_2^α/β G_1^t-1H_1+βα G_2^t-1H_2^α. Define the date-0 price of consumption delivered at time t (the price of a zero-coupon bond with face value 1 and maturity t) by q_t=1/∏_s=0^t-1R_s, with the normalization q_0=1. The fundamental value of land at time t is defined by the present value of rents V_t1/q_t∑_s=1^∞ q_t+sr_t+s. We say that land is overvalued if P_t>V_t. We obtain the following proposition. Land is overvalued (P_t>V_t) if G_1>G_2. See Corollary <ref> below. The intuition for Proposition <ref> is as follows. In this economy, land serves as a store of value as well as a factor of production. Because agents have labor income only when young, there is a strong savings motive for retirement, which pushes up the land price and hence P_t∼ G_1^t. When G_1>G_2, the gross risk-free rate R_t-1 in (<ref>) converges to G_1 as t→∞. Consequently, we have the order of magnitude q_t+s/q_t∼ G_1^-s and r_t+s∼ G_2^t+s, so a straightforward calculation yields V_t∼ G_2^t. Therefore P_t>V_t for large enough t, and a backward induction argument shows P_t>V_t for all t. §.§ Two variants The previous example is arguably highly stylized as labor supply in both sectors is exogenous and one of the production functions is linear. This section presents two other variants that give rise to land overvaluation. This example is a simplified version of the model of <cit.>, where we abstract from capital. The production functions are the same as in (<ref>), but labor is homogeneous (with aggregate supply normalized to 1) and mobile between the two sectors. Letting w_t be the (common) wage and H_jt be the labor demand in sector j, profit maximization implies A_1t=w_t=α A_2tH_2t^α-1 H_2t=(α A_2t/A_1t)^1/1-α, where we assume A_1t>α A_2t to guarantee an interior solution. Using (<ref>), the land rent is r_t=(1-α)α^α/1-α(A_2t/A_1t^α)^1/1-α. Therefore if A_jt=G_j^t, then the dividend yield on land is r_t/P_t=(1-α)α^α/1-α/β(G_2/G_1)^t/1-α, which geometrically decays to 0 if G_1>G_2. By Corollary <ref> below, land is overvalued. There is only one sector with a constant elasticity of substitution (CES) aggregate production function F_t(H,X)=(α (A_HtH)^1-ρ+(1-α)(A_XtX)^1-ρ)^1/1-ρ, where α∈ (0,1) is a parameter, σ=1/ρ is the elasticity of substitution between labor and land, and (A_Ht,A_Xt) are factor-augmenting productivities. Without loss of generality, normalize the labor and land supply as (H,X)=(1,1). Then a straightforward calculation yields the rent-wage ratio r_t/w_t=1-α/α(A_Xt/A_Ht)^1-ρ. As before, the land price satisfies P_t=β w_t, so the dividend yield on land is r_t/P_t=1-α/βα(A_Xt/A_Ht)^1-1/σ=1-α/βα(G_X/G_H)^(1-1/σ)t, where the last equality assumes (A_Ht,A_Xt)=(G_H^t,G_X^t). Therefore if the elasticity of substitution σ between land and labor exceeds 1 and G_H>G_X, then the dividend yield geometrically decays to 0. By Corollary <ref> below, land is overvalued. In what follows, following <cit.>, we refer to a situation with uneven productivity growth between different sectors or different production factors as “unbalanced growth”. When the economy features multiple sectors as in reality, there is no reason to expect equal growth rates across sectors. 
The slightest introspection suggests that it would be a miracle if the rate of technological progress were the same in 19th century trains and (horse-drawn) carriages, 20th century computers and calculators, or early 21st century electric vehicle batteries and internal combustion engines. Although models with unbalanced growth are not so common in economics, unbalanced growth is a natural and general feature in the process of economic development. § SUBSTITUTION ELASTICITY AND LAND OVERVALUATION The examples in Section <ref> suggest that unbalanced growth and land overvaluation may be closely related. This section confirms this conjecture in a general setting and highlights the role of the elasticity of substitution between land and other production factors. §.§ Model We consider a stochastic two-period overlapping generations model. Uncertainty is resolved according to a filtration ℱ_t_t=0^∞ over a probability space (Ω,ℱ,P). We denote conditional expectations by _t[·]=[·|ℱ_t]. Preferences Agents born at time t have the Cobb-Douglas utility (1-β)log y_t+β_t[log z_t+1], where β∈ (0,1). There are two factors of production, labor and land (denoted by H,X), both of which are in unit supply. As in Section <ref>, only the young are endowed with labor, and the initial old is endowed with land, which is durable and non-reproducible. Technologies Without loss of generality, we only specify the aggregate production function, as it is well known that if each sector or firm is competitive and markets are frictionless, profit maximization at the individual and aggregate level are equivalent. (See Corollary <ref>.) Below, we say that a production function F(H,X) is neoclassical if F:_++^2→_++ is homogeneous of degree 1, concave, continuously differentiable, and satisfies F_H,F_X>0. The time t aggregate production function takes the form F_t(H,X)=F(A_HtH,A_XtX), where F is a neoclassical production function and A_Ht,A_Xt>0 are ℱ_t-measurable factor-augmenting productivities. Equilibrium The definition of a competitive equilibrium is standard. A competitive equilibrium consists of adapted processes of prices (P_t,r_t,w_t)_t=0^∞, allocations (x_t,y_t,z_t)_t=0^∞, and factor inputs (H_t,X_t)_t=0^∞ such that, * (Utility maximization) (x_t,y_t,z_t+1) maximizes utility (<ref>) subject to the budget constraints (<ref>), * (Profit maximization) (H_t,X_t) maximizes the profit F_t(H_t,X_t)-w_tH_t-r_tX_t, * (Market clearing) H_t=1, X_t=1=x_t, and y_t+z_t=F_t(H_t,X_t). Note that the market clearing condition x_t=1 follows because the old exit the economy and the young must buy the entire land. Due to log utility, the existence and uniqueness of equilibrium are immediate. If Assumption <ref> holds, then the economy has a unique equilibrium, which is characterized by the following equations. Wage: w_t =F_H(A_Ht,A_Xt)A_Ht, Rent: r_t =F_X(A_Ht,A_Xt)A_Xt, Land price: P_t =β w_t, Young consumption: y_t =(1-β)w_t, Old consumption: z_t =β w_t+r_t. §.§ Elasticity of substitution As Example <ref> suggests, the elasticity of substitution plays a crucial role in generating land overvaluation. Recall that the elasticity of substitution σ between production factors is defined by the percentage change in relative factor inputs with respect to the percentage change in relative factor prices σ=-∂log (H/X)/∂log (w/r), where the derivative is taken along the production possibility frontier F(H,X)=constant. A mathematically more convenient way to define the elasticity of substitution is the following. 
Let h=log(H/X) be the log relative inputs. Then noting that w=F_H and r=F_X, (<ref>) can be rewritten as ρ(H,X)1/σ(H,X)=-∂log (F_H/F_X)/∂ h, where we set (H,X)=(X^h,X) to compute the derivative and substitute h=log(H/X). The following lemma provides an explicit formula for the elasticity of substitution of a neoclassical production function. Let F be a neoclassical production function. Then its elasticity of substitution σ_F(H,X) satisfies σ_F=F_HF_X/FF_HX. To derive asset pricing implications, we restrict the elasticity of substitution as follows. The elasticity of substitution of the neoclassical production function F exceeds 1 at high input levels: lim inf_H→∞σ_F(H,1)>σ>1. We justify the economic relevance of Assumption <ref> in several ways. The first justification is empirical. <cit.> find that the elasticity of substitution between land and non-land factors for producing housing service is 1.16 for residential properties and 1.39 for commercial properties in Allegheny County, Pennsylvania. <cit.> argue that the estimation approach of <cit.> is less susceptible to measurement error than the old estimates, which are likely biased downwards. They find that the elasticity of substitution is around 1.25 for Chicago and Berlin. The second justification is the pathological behavior of interest rates with σ<1. To see this, suppose σ<1 in Example <ref>. Using (<ref>) and (<ref>), we can bound the gross risk-free rate from below as R_t-1 =β w_t+r_t/β w_t-1≥r_t/β w_t-1 =1-α/αβ(α G_H^(1-ρ)t+(1-α)G_X^(1-ρ)t/α G_H^(1-ρ)(t-1)+(1-α)G_X^(1-ρ)(t-1))^ρ/1-ρG_X^(1-ρ)t/G_H^(1-ρ)(t-1) =1-α/αβ(α (G_H/G_X)^(1-ρ)t+1-α/α (G_H/G_X)^(1-ρ)(t-1)+1-α)^ρ/1-ρG_H^1-ρG_X^ρ(G_H/G_X)^(ρ-1)t, which tends to ∞ as t→∞ because G_H>G_X and ρ>1. An interest rate diverging to infinity is counterfactual and pathological. The third justification is that when the marginal product of labor is bounded away from zero, the elasticity of substitution necessarily exceeds 1 at high input levels, as the following lemma shows. Let F be a neoclassical production function with lim_H→∞F_H(H,1)=m>0. Then lim inf_H→∞σ_F(H,1)≥ 1. To illustrate Lemma <ref>, for parameters A,B>0, α∈ (0,1), and ρ>0, consider the neoclassical production function F(H,X)=A(α H^1-ρ + (1-α) X^1-ρ)^1/1-ρ+BH. This functional form is common in applied works. For instance, H could be capital and B=1-δ could be the fraction remaining after depreciation. Alternatively, (<ref>) can be thought of as a generalization of the main model in Section <ref> by identifying A as A_2H_2^α and B as A_1H_1. To simplify notation, let Y=(α H^1-ρ + (1-α) X^1-ρ)^1/1-ρ. Then F =AY+BH, F_H =Aα Y^ρ H^-ρ+B, F_X =A(1-α) Y^ρ X^-ρ, F_HX =ρ Aα(1-α)Y^2ρ-1H^-ρX^-ρ. Applying Lemma <ref>, the elasticity of substitution becomes σ=F_HF_X/FF_HX =(Aα Y^ρ H^-ρ+B)(A(1-α) Y^ρ X^-ρ)/(AY+BH)ρ Aα(1-α)Y^2ρ-1H^-ρX^-ρ =1/ρ1+B/Aα(H/Y)^ρ/1+B/A(H/Y). Clearly, as H→∞ we have Y/H=(α+(1-α)(X/H)^1-ρ)^1/1-ρ→α^1/1-ρ if ρ<1, 0 if ρ≥ 1, where the case ρ=1 follows because Y=H^α X^1-α. Therefore σ(H,1)→ 1/ρ if ρ<1, 1/α if ρ=1, ∞ if ρ>1 as H→∞. In all cases, we have lim inf_H→∞σ(H,1)>1, satisfying Assumption <ref>. §.§ Unbalanced growth and land overvaluation We now extend the land overvaluation result in Proposition <ref> to a general setting. The following theorem is the main result of this paper. Suppose Assumptions <ref>, <ref> hold and _0∑_t=0^∞ (A_Ht/A_Xt)^1/σ-1<∞ almost surely. Then land is overvalued in equilibrium. The proof of Theorem <ref> is deferred to the appendix. The condition (<ref>) can be understood as follows. 
Suppose for simplicity that A_Ht=G_H^t and A_Xt=G_X^t, so productivity growth is exponential. Then the t-th term in the sum (<ref>) is (G_H/G_X)^(1/σ-1)t, which is summable if σ>1 and G_H>G_X. Thus condition (<ref>) roughly says that labor productivity growth is higher than land productivity growth in the long run. The intuition for Theorem <ref> is similar to the one noted in the introduction, so we do not repeat it. It is important to note that since the equilibrium is unique by Proposition <ref>, under the conditions in Theorem <ref>, there are no equilibria in which the land price equals its fundamental value. Theorem <ref> has three important implications. First, it clarifies the role of unbalanced growth and elasticity of substitution for generating land overvaluation, which was previously overlooked. Regarding the assumption of elasticity of substitution between land and non-land exceeding 1, we justify it on empirical and theoretical grounds as discussed in Section <ref>. Second, we can derive a new insight on the long-run behavior of land prices in a modern economy. The conventional view is that on the long-run trend, the land price should reflect its fundamental value, even if it may deviate from the fundamental value temporarily. In sharp contrast with this widely-held view, Theorem <ref> implies that during the process of economic development characterized by unbalanced productivity growth, land overvaluation will naturally and necessarily arise.[Of course, if G_H=G_X holds in the long run, which is often assumed to ensure a balanced growth path in the growth literature, there will be no overvaluation in land prices but this is obviously a knife-edge case.] Before discussing the third implication in Section <ref>, we show that all examples in Section <ref> are special cases of Theorem <ref>. Proposition <ref> is true. Define the aggregate production function by F_t(H,X)=A_1tH_1H+A_2t(H_2H)^α X^1-α, where H_1,H_2>0 are constants. Define F(H,X) =H_2H+H_2^α H^α X^1-α, (A_Ht,A_Xt) =(A_1t,(A_2t/A_1t^α)^1/1-α). Then clearly F_t(H,X)=F(A_HtH,A_XtX) and Assumption <ref> holds. Assumption <ref> holds by Example <ref>. If A_jt=G_j^t with G_1>G_2, then A_Ht/A_Xt=(A_1t/A_2t)^1/1-α=(G_1/G_2)^t/1-α, so (<ref>) holds. Land is overvalued in Example <ref> if G_1>G_2. As is well known, profit maximization at the individual sector or firm level is equivalent to that at the aggregate level. Consider the aggregation of the two production functions in (<ref>). Suppressing the t subscript and setting (X_1,X_2)=(0,X), the Lagrangian for the maximization problem F(H,X)max∑_j=1^2 F_j(H_j,X_j):∑_j=1^2 H_j=H,∑_j=1^2 X_j=X is ℒ(H_1,H_2,λ)=A_1H_1+A_2 H_2^α X^1-α+λ(H-H_1-H_2), where λ is the Lagrange multiplier. Applying the Karush-Kuhn-Tucker theorem, we obtain λ=A_1, H_2=(α A_2/A_1)^1/1-αX, and the aggregate production function F_t(H,X)=A_1tH+(1-α)α^α/1-α(A_2t/A_1t^α)^1/1-αX, which is linear. Therefore if we define F(H,X)=H+(1-α)α^α/1-αX and (A_Ht,A_Xt) as in the proof of Corollary <ref>, the same argument applies. Land is overvalued in Example <ref> if G_H>G_X. Trivial. §.§ Recurrent stochastic fluctuations We discuss the third implication of Theorem <ref> by specializing it. The production function takes the CES form (<ref>). Let A_t A_Ht/A_Xt be the relative productivity of labor. The state of the economy at time t is denoted by n_t, which evolves over time according to a Markov chain with transition probability matrix Π=(π_nn'), where π_nn'=(n_t=n' | n_t-1=n). 
The relative productivity A_t evolves over time as a Markov multiplicative process A_t=G_tA_t-1, where G_t conditional on (n_t-1,n_t)=(n,n') is an copy of some random variable G_nn'>0.[See <cit.> for more details.] Let S_n(A) be the value of (<ref>) when (A_0,n_0)=(A,n). Due to the multiplicative nature of shocks and homogeneity, we may write S_n(A)=s_nA^ρ-1 for some constant s_n>0, where ρ=1/σ. A dynamic programming argument shows s_n=1+∑_n'=1^N π_nn'[G_nn'^ρ-1]s_n'. Defining the N× 1 vector s=(s_1,…,s_N)', the vector of ones 1=(1,…,1)', and the N× N nonnegative matrix K=(π_nn'[G_nn'^ρ-1]), we may rewrite (<ref>) as s=1+Ks s=(I-K)^-11. A positive and finite solution to (<ref>) exists if and only if the spectral radius of K (the maximum modulus of all eigenvalues) is less than 1.[This argument is similar to <cit.>.] Therefore we obtain the following proposition. Suppose the production function is CES with elasticity of substitution σ>1 and the relative labor productivity A_t A_Ht/A_Xt follows the Markov multiplicative process (<ref>). Let K=(π_nn'[G_nn'^1/σ-1]). Then land is overvalued if the spectral radius of K is less than 1. As a numerical example, we set β=0.5, α=0.8, σ=1.25, N=2, π_nn'=1/3 if n≠ n', and (G_1n',G_2n')=(1.1,0.95) for all n', which implies that the spectral radius of K is less than 1 and land is overvalued. Figure <ref> shows one simulation for 200 periods. The land price exhibits boom-bust cycles. The price-rent ratio steadily increases, consistent with Theorem <ref>. Proposition <ref> and this numerical example provide the third implication of Theorem <ref>. When productivities increase and remain to be high, land prices will continue to rise relative to the trend, which may look like an emergence of a large land price bubble. Conversely, if productivities decrease and remain to be so for an extended period of time, land prices will fall, which may appear to be a bursting of a land bubble. Thus, land prices exhibit recurrent booms and busts driven by fluctuations in productivities. Nonetheless, as long as the relative productivity growth of land is low, land will always be overvalued, with the size of land overvaluation fluctuating over time and a steady upward trend in the price-rent ratio. Our model provides a theoretical foundation for recurrent stochastic bubbles. § CONCLUDING REMARKS This paper has studied the long-run behavior of land prices in a modern economy. We have established the Land Overvaluation Theorem showing the surprising link between unbalanced growth, elasticity of substitution, and land overvaluation in an economy with aggregate risk. This Theorem provides new insights on both short-run and long-run behaviors of land prices. Unlike the conventional view that land overvaluation (sometimes called land bubbles) may occur only as short-run phenomena, our paper shows that it will naturally and necessarily arise along the process of economic development with unbalanced growth. Moreover, driven by stochastic fluctuations in productivities, land prices experience large swings, with expansions and contractions in the size of land overvaluation that may appear to be the emergence and collapse of large land bubbles. To derive these results, unbalanced growth together with elasticity of substitution exceeding 1 plays an important role. Although unbalanced growth may not seem to be common in the standard growth models, it is a general feature in the growth process in reality because different sectors or production factors have different growth rates. 
At the same time, in reality, as economies develop, the importance of land as a factor of production usually diminishes, yet its role as a store of value continues to be high. What our Theorem shows is that under such circumstances, land overvaluation will arise as the equilibrium outcome. In this sense, we believe that the results of our paper capture an important aspect of the modern economy. Finally, we would like to add two remarks. In the present paper, for simplicity we only considered an overlapping generations model with exogenous growth. However, whether growth is exogenous or endogenous is not important for asset overvaluation. In a parallel work, <cit.> show that growth and asset overvaluation endogenously emerge as the leverage of entrepreneurs is relaxed. They also employ a Bewley-type model with infinitely-lived agents, implying that the overlapping generations structure is inessential. In our model, land is the primary store of value and overvaluation necessarily occurs in land. If there are multiple assets that serve as a store of value (such as gold and cryptocurrency), the extent of overvaluation in individual assets could be indeterminate. Nonetheless, the aggregate amount of overvaluation and the equilibrium outcome are determinate, as in the present paper. However, as noted in the introduction, we think land is a focal point as a store of value due to its characteristics. Moreover, in the macro-finance model of <cit.> with credit frictions, there are means of savings other than land, which are lending to other economic agents or investing in capital. Even in that setting, although there is no aggregate risk, land overvaluation necessarily emerges as the unique equilibrium outcome under sufficiently lax leverage. § PROOFS The first-order condition for profit maximization implies (<ref>) and (<ref>). Define the return on land by R_t+1=(P_t+1+r_t+1)/P_t. Then the budget constraints (<ref>) can be combined into one as z_t+1=R_t+1(w_t-y_t). Suppressing the time subscripts and substituting into the objective function, the young seek to maximize (1-β)log y+β[log z]=(1-β)log y+βlog (w-y)+β[log R]. Clearly this function is strictly concave in y and achieves a unique maximum characterized by the first-order condition (1-β)/y-β/(w-y)=0 ⟹ y=(1-β)w, which is (<ref>). Since in equilibrium we have x_t=1, the land price satisfies P_t=P_tx_t=w_t-y_t=β w_t, which is (<ref>). Since F is homogeneous of degree 1, F_H is homogeneous of degree 0. Therefore differentiating both sides of F(λ H,λ X) =λ F(H,X), F_H(λ H,λ X) =F_H(H,X) with respect to λ and setting λ=1, we obtain HF_H+XF_X =F, HF_HH+XF_HX =0. Let h=log(H/X). Using the definition (<ref>) and (<ref>), we obtain 1/σ_F =∂/∂ h log F_X(Xe^h,X)/F_H(Xe^h,X)=Xe^h F_HX/F_X-Xe^h F_HH/F_H =HF_HX/F_X-HF_HH/F_H=HF_HX/F_X+XF_HX/F_H =F_HX/F_HF_X(HF_H+XF_X)=FF_HX/F_HF_X. Let X=1 and h=log H. Using (<ref>) and applying l'Hôpital's rule, we obtain lim sup_H→∞ρ(H,1)=lim sup_H→∞log(F_X/F_H)/log H=1+lim sup_H→∞logF_X/HF_H/log H. Therefore to prove the claim, it suffices to show F_X≤ HF_H for large enough H. Since X=1 and F is homogeneous of degree 1, we have F=HF_H+F_X, so 1/H(HF_H-F_X)=1/H(2HF_H-F)=2F_H-F/H→ 2m-m=m>0, implying F_X<HF_H for large enough H. We prove Theorem <ref> by establishing a series of lemmas. Let A>0 and suppose that σ_F(H,1)≥σ for H≥ A. Let ρ=1/σ. If A_H/A_X≥ A, then F_X/F_H(A_H,A_X)≤F_X/F_H(A,1)A^-ρ(A_H/A_X)^ρ. By Assumption <ref>, F is homogeneous of degree 1. Therefore F_H,F_X are homogeneous of degree 0, and so is ρ(H,X) in (<ref>).
Let B ≡ A_H/A_X≥ A. Setting H=e^h and X=1 in (<ref>), we obtain ρ(e^h,1)= d/ dh log F_X/F_H(e^h,1). Integrating both sides from h=log A to h=log B and applying the intermediate value theorem for integrals, there exists h_1∈ (log A,log B) such that ρ(e^h_1,1)log (B/A) =∫_log A^log Bρ(e^h,1) dh =log F_X/F_H(B,1)-log F_X/F_H(A,1). Taking the exponential of both sides of (<ref>), letting M ≡ (F_X/F_H)(A,1), and using the homogeneity of F_H,F_X, we obtain F_X/F_H(A_H,A_X)=F_X/F_H(B,1)=M (B/A)^ρ(e^h_1,1). Since B≥ A and ρ(e^h_1,1)≤ρ=1/σ, it follows that F_X/F_H(A_H,A_X)≤ M (B/A)^ρ=MA^-ρ(A_H/A_X)^ρ, which is (<ref>). In equilibrium, the fundamental value of land is bounded above as V_t≤ w_t _t[∑_s=1^∞r_t+s/w_t+s]. The stochastic discount factor between time t and t+1 equals the marginal rate of substitution m_t→ t+1≡ (β/z_t+1)/((1-β)/y_t)=β y_t/((1-β)z_t+1) =β w_t/(β w_t+1+r_t+1)≤ w_t/w_t+1, where the last line uses (<ref>) and r_t+1≥ 0. Then we can bound the stochastic discount factor between time t and t+s from above as m_t→ t+s≡∏_j=0^s-1m_t+j→ t+j+1≤ w_t/w_t+s. Therefore we can bound the fundamental value of land from above as V_t≡_t[∑_s=1^∞ m_t→ t+sr_t+s]≤_t[∑_s=1^∞w_t/w_t+sr_t+s]=w_t _t[∑_s=1^∞r_t+s/w_t+s]. We have lim_t→∞V_t/P_t=0 almost surely. By (<ref>) and Lemma <ref>, we have 0≤V_t/P_t≤1/β_t[∑_s=1^∞r_t+s/w_t+s]. Therefore to show the claim, it suffices to show that _t[∑_s=1^∞ r_t+s/w_t+s]→ 0 almost surely as t→∞. By Assumption <ref>, we can take a constant A>0 such that σ(H,1)≥σ>1 for all H≥ A. Let A_t≡ A_Ht/A_Xt and ρ=1/σ∈ (0,1). Since the expectation of the infinite sum (<ref>) is finite, the sum converges with probability 1 and hence we must have A_t^ρ-1→ 0 and A_t→∞ because ρ<1. In particular, there exists T>0 such that A_t≥ A for t≥ T. For such t, by Lemma <ref> we have r_t/w_t=F_X(A_Ht,A_Xt)A_Xt/F_H(A_Ht,A_Xt)A_Ht≤F_X/F_H(A,1)A^-ρA_t^ρ-1. Therefore _t[∑_s=1^∞r_t+s/w_t+s]≤F_X/F_H(A,1)A^-ρ_t∑_s=1^∞ A_t+s^ρ-1. Letting t→∞ and using condition (<ref>), we obtain _t[∑_s=1^∞ r_t+s/w_t+s]→ 0 almost surely as t→∞. The absence of arbitrage and the definition of the fundamental value imply P_t=_t[m_t→ t+1(P_t+1+r_t+1)], V_t=_t[m_t→ t+1(V_t+1+r_t+1)]. Taking the difference, we obtain P_t-V_t=_t[m_t→ t+1(P_t+1-V_t+1)]. Iterating this equation and applying the law of iterated expectations, we obtain P_t-V_t=_t[m_t→ t+s(P_t+s-V_t+s)]. Lemma <ref> implies V_t+s/P_t+s→ 0 almost surely as s→∞ and hence P_t+s>V_t+s for large enough s with probability 1. Therefore P_t>V_t for all t, and land is overvalued.
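As a numerical complement to the proofs above, the following Python sketch checks the spectral-radius condition of Proposition <ref> and simulates the Markov multiplicative process (<ref>), using the illustrative values from the section on recurrent stochastic fluctuations (σ=1.25, N=2, π_nn'=1/3 for n≠ n', growth factors 1.1 and 0.95). Reading the growth factors as depending only on the current state n is an assumption made for illustration, as is the rest of the implementation; β and α from that example matter only for mapping simulated productivities into prices and are omitted here.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative values taken from the numerical example in the text.
sigma, N = 1.25, 2
rho = 1.0 / sigma
Pi = np.array([[2/3, 1/3],
               [1/3, 2/3]])      # transition probabilities, pi_nn' = 1/3 for n != n'
G = np.array([[1.10, 1.10],
              [0.95, 0.95]])     # growth factor G_nn' (assumed to depend on n only)

# K_nn' = pi_nn' * E[G_nn'^(rho-1)]; overvaluation requires spectral radius < 1.
K = Pi * G**(rho - 1.0)
spectral_radius = max(abs(np.linalg.eigvals(K)))
s = np.linalg.solve(np.eye(N) - K, np.ones(N))   # s = (I - K)^{-1} 1
print(spectral_radius, s)

# Simulate A_t = G_t A_{t-1} for 200 periods, mirroring the reported figure.
A, n, path = 1.0, 0, []
for _ in range(200):
    n_next = rng.choice(N, p=Pi[n])
    A *= G[n, n_next]
    path.append(A)
    n = n_next

With these values the spectral radius comes out just below one (about 0.996), consistent with the overvaluation claim in the text, and the simulated path of relative productivity can be fed into the CES price and rent formulas to reproduce the boom-bust pattern of the figure.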
http://arxiv.org/abs/2307.02073v1
20230705073053
Performance Modeling of Data Storage Systems using Generative Models
[ "Abdalaziz Rashid Al-Maeeni", "Aziz Temirkhanov", "Artem Ryzhikov", "Mikhail Hushchyn" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.PF" ]
Department: Head Editor: Name, xxxx@email HSE University Department HeadPaper title High-precision modeling of systems is one of the main areas of industrial data analysis. Models of systems, their digital twins, are used to predict their behavior under various conditions. We have developed several models of a storage system using machine learning-based generative models. The system consists of several components: hard disk drive (HDD) and solid-state drive (SSD) storage pools with different RAID schemes and cache. Each storage component is represented by a probabilistic model that describes the probability distribution of the component performance in terms of IOPS and latency, depending on their configuration and external data load parameters. The results of the experiments demonstrate the errors of 4–10 % for IOPS and 3–16 % for latency predictions depending on the components and models of the system. The predictions show up to 0.99 Pearson correlation with Little's law, which can be used for unsupervised reliability checks of the models. In addition, we present novel data sets that can be used for benchmarking regression algorithms, conditional generative models, and uncertainty estimation methods in machine learning. Performance Modeling of Data Storage Systems using Generative Models Abdalaziz Rashid Al-Maeeni, Aziz Temirkhanov, Artem Ryzhikov, and Mikhail Hushchyn August 1, 2023 ====================================================================================== Introduction Data storage systems (DSS) are of vital importance in the modern world. The emergence of big data has necessitated the development of storage systems with a larger capacity and lower costs to efficiently store and process the vast amounts of data generated <cit.>. Performance modeling is essential in the development of such systems. The modeling allows engineers to analyze the system's behavior under different conditions and helps them to find the optimal design of the system. It can be used for marketing purposes to estimate the system performance under the customer's requirements. One more application is diagnostics and predictive maintenance, where model predictions are compared with actual measurements to detect failures and anomalies. The key components of a typical DSS are controllers, fast cache memory, and storage pools. All data are stored in the pools consisting of several hard disk drives (HDDs) or solid-state drives (SSDs) united with raid schemes. The cache speeds up read and write operations with the most popular data blocks. And the controllers are responsible for data management and provide computational resources to process all input and output requests from users. An early attempt to model SSDs was described in <cit.>. The authors argue that HDD performance models cannot be used for SSD due to certain unique SSD characteristics, such as low latency, slow update, and expensive block-level erasure. The black-box approach to model SSD performance is suggested as it requires minimal a priori information about the device. The researchers found that although black-box simulation does not produce high-quality predictions for HDDs, it can produce accurate ones for SSDs. Another paper presents a black-box approach based on regression trees to model performance metrics for SSDs <cit.>. The resulting model can accurately predict latency, bandwidth, and throughput with mean relative errors of 20 %, 13 %, and 6 %, respectively. 
The authors of paper <cit.> propose SSDcheck, a novel SSD performance model, which is capable of extracting various internal mechanisms and predicting the latency of the next access to commodity SSDs. SSDcheck can dynamically manage the model to accurately predict latency and extract useful internal mechanisms to fully exploit SSDs. Additionally, the paper presents multiple practical cases evaluated to show significant performance improvement in various scenarios. The proposed use cases also leverage the performance model and other features to achieve an improvement of up to 130 % in overall throughput compared to the baseline. Machine learning techniques have proved to be useful in the task of analyzing the disks. As an example, the main approaches discussed in <cit.> are based on machine learning. The paper uses a methodology that extracts low-cost history-aware features to train SSD performance models which predict response times to requests. The paper also utilizes space-efficient data structures such as exponentially decaying counters to track past activity with O(1) memory and processing cost. Lastly, the paper uses machine learning models such as decision trees, ensemble methods, and feedforward neural networks to predict the storage hardware response time. All of these approaches are designed to make accurate SSD simulators more accessible and easier to use in real-world online scenarios. The authors of paper <cit.> developed an open-source disk simulator called IMRSim to evaluate the performance of two different allocation strategies for interlaced magnetic recording (IMR) technology using a simulation designed in the Linux kernel space using the Device Mapper framework and offers a scalable interface for interaction and visualization of input and output (I/O) requests. Another open-source simulator was developed in <cit.>. The authors introduce SimpleSSD, a high-fidelity holistic SSD simulator. Aside from modeling a complete storage stack, this simulator decouples SSD parallelism from other flash firmware modules, which helps to achieve a better simulation structure. SimpleSSD processes I/O requests through layers of a flash memory controller. The requests are then serviced by the parallelism allocation layer, which abstracts the physical layout of interconnection buses and flash disks. While being aware of internal parallelism and intrinsic flash latency, the simulator can capture close interactions among the firmware, controller, and architecture. The idea of SimpleSSD has been further developed in <cit.>. SimpleSSD 2.0, namely Amber, was introduced to run in a full system environment. Being an SSD simulation framework, Amber models embedded CPU cores, DRAMs, and various flash technologies by emulating data transfers. Amber applies parallelism-aware readahead and partial data update schemes to mimic the characteristics of real disks. Research that used different Flash Translation Layer (FTL) schemes was presented in <cit.>. FlashSim, an SSD simulator, has been developed to evaluate the performance of storage systems. The authors analyzed the energy consumption in different FTL schemes to find realistic workload traces and validated FlashSim against real commercial SSDs and detected behavioral similarities. Although simulating SSD devices might have seemed an exciting research topic, older kinds of storage system has also received the scientific attention it deserves. 
One of the papers presents a series of HDD simulations to investigate the stiffness of the interface between the head and the disk. In <cit.>, the authors develop a finite-element model of an HDD using a commercially available three-dimensional modeling package. The results of the numerical analysis gave insights into the characteristics of each type of shock applied to an HDD model. Although used extensively in modern times, SSDs are not designed perfectly. There is a certain performance-durability trade-off in their design. Disks of this type require garbage collection (GC), which hinders I/O performance, while the SSD block can only be erased a finite number of times. There is research that proposes an analytical model that optimizes any GC algorithm <cit.>. The authors propose the randomized greedy algorithm, a novel GC algorithm, which can effectively balance the above-mentioned trade-off. In the search for the perfect architecture of a modern SSD through design space exploration (DSE), researchers have come across the challenge of fast and accurate performance estimation. In <cit.>, the authors suggest two ways to tackle this challenge: scheduling the task graph and estimation based on neural networks (NN). The proposed NN regression model is based on training data, which consists of hardware configurations and performance. The paper concludes that the NN-based method is faster than scheduling-based estimation and is preferred for scalable DSE. The evaluation of SSD performance has also been investigated in <cit.>. SSDs are built using an integrated circuit as memory, so manufacturers are interested in I/O traces of applications running on their disks. This paper proposes a framework for the accurate estimation of I/O trace execution time on SSDs. All of these solutions have a range of limitations. The first one is that they are focused mostly on the modeling of one single device such as HDD or SSD disks. The second is that most of them are based on the simulation of physical processes and the firmware stack inside the devices which require more resources for development. But one of the main disadvantages is that the models do not take into account vendor specifics of the devices due to the trade secrets. In result, errors of the performance simulation are quite high for practical use. Finally, HDDs and SSDs are part of DSS with unique architectures and software that influence the device performance and are not taken into account in the solutions described above. In this work, we present a data-driven approach for DSS modeling which does not have these limitations. All vendor specifics, architecture features, and software impact are learned directly from real performance measurements of the system. We present the performance modeling and analysis of a data storage cache, SSD, and HDD pools under random and sequential data loads. We provide novel data sets of IOPS and latency measurements for the system's components for various configurations and load parameters. We consider two different approaches to learn distributions of the performance values, based on parametric and nonparametric generative models. We analyze the quality of the model's predictions and compare them with a naive baseline algorithm. We also demonstrate a physics-inspired method for checking the reliability of the model predictions. The data sets provided in this work can be used for benchmarking regression algorithms, conditional generative models, and uncertainty estimation methods in machine learning. 
§ PROBLEM STATEMENT In this study, we consider the simulation of HDD and SSD storage pools, and cache, which are the key parts of data storage systems. The goal of this work is to predict the performance of these components for a given configuration and data load parameters. We describe the performance by the number of input and output operations per second (IOPS) and the average of their latencies. The data load is parameterized by: load type, IO type, read fraction, block size, number of jobs, and queue depth for each of these jobs. There are two load types we consider in the work: random and sequential. Each data load consists of a mixture of read and write operations. The IO type indicates whether we consider read or write operations in our simulations. The read fraction is the ratio of the number of read operations to the total number of all operations in the data load. Each operation processes one data block of a given size. And the data loads are generated by several jobs with a given queue depth. HDD and SSD storage pools also have configuration parameters. They are the total number of disks in a pool and the RAID scheme, which is described by the number of data and parity blocks. In this work, the cache has only one configuration, and we do not describe it by any parameter. The lists of input and output features in our simulation models are shown in Table <ref>. Let us denote the vector of input features as x, and the vector of outputs as y. Also suppose we have n pairs of measurements {x_i, y_i}_i=1^n of output vectors y_i for the given inputs x_i. These pairs are obtained by generating various data loads on a real data storage system and measuring the performances of its pool or cache in terms of IOPS and latency. The goal of this work is to estimate the conditional probability distribution p(y|x) for the storage component using machine learning. Then, we predict the performance ŷ_j for the unknown inputs x_j by sampling from the learned distribution: ŷ_j ∼ p(y|x_j). The advantage of this data-driven approach is that we do not need to understand and simulate all the physical processes inside the storage components. The model only needs a sample of measurements. Machine learning algorithms estimate all physical dependencies from the data. We can use this model to explore and visualize the performance dependencies from the configuration and data load parameters, to predict the average performance and its deviations for new conditions, and to analyze many other properties. § DATA COLLECTION For this study, we collected four data sets for the storage pools and cache under different data loads. The first one is for the cache under the random data loads. To collect it, we generated 510 different data loads using a performance analysis tool Perf <cit.> in Linux. Table <ref> provides the list of the load parameters and the ranges of their values. We quasi-randomly selected these values for each individual data load using the Sobol sequence <cit.>. This method covers the parameter space more uniformly than random or grid sampling. Each load lasted 60 seconds, and for every second we measured IOPS and average latency for read and write operations separately. In the results, we collected 510×60×2 pairs of measurements (x_i, y_i). Similarly, we collected data sets for the SSD pool under random loads. The list of all parameters and their value ranges is in Table <ref>. In this case, we also used the pool configuration parameters in addition to the data load ones. 
The first is a RAID scheme, which is defined by the number of data (K) and parity (M) blocks. The second is the total number of disks in the pool. We generated 512 different loads in total for various pool configurations with a duration of 120 seconds each. We measured IOPS and average latency for read and write operations separately every second. In the results, this data set contains 512×120×2 pairs of measurements (x_i, y_i). We also collected two data sets for SSD and HDD storage pools under sequential data loads. We used the same procedure described above. The main difference is that only pure loads with read fractions of 0 and 100 % are valuable. One more difference is that larger block sizes are used for the sequential loads. Table <ref> and Table <ref> show the full list of the data load and configuration parameters and their value ranges. We used the Sobol sequence to generate 512 different loads in total for various pool configurations with a duration of 120 seconds each. We measured IOPS and average latency for read and write operations every second. In the results, each data set contains 512×120 pairs of measurements (x_i, y_i). § MODELS §.§ CatBoost model The first model we use in this study is a parametric generative model based on the CatBoost <cit.> regression model. Relations between IOPS and latencies within the same data load are defined by Little's law <cit.>: Q × J = IOPS_read× Latency_read + + IOPS_write× Latency_write, where Q is the queue depth and J is the number of jobs. While the fraction of read operations is fixed during each data load, Little's law allows us to suppose the following relation for read or write operations: log IOPS ∼ -log Latency. All measurements of IOPS and latencies are stochastic. We approximate the distribution of their logarithm values by conditional 2D normal distributions: ẑ_i = logŷ_i, ẑ_i ∼𝒩(μ̂(x_i), Σ̂(x_i)), where ŷ_i is a vector of predictions for IOPS and latency; the mean μ̂(x_i) and the covariance matrix Σ̂(x_i) depend on input vector x_i of data load and configuration parameters, and are predicted by the CatBoost regression model. Figure <ref> shows the model. As described in the previous section, a sample contains 2m read and write data loads with k measurements {x_i, y_i}_i=0^k in each for just read or write operations. The total number of measurements is n=2mk. We calculate the mean vectors μ_j and the covariance matrices Σ_j for each of these data loads. In addition, we use Cholesky decomposition <cit.> for the matrices Σ_j^-1=L_jL_j^T. It is used to ensure that the predicted covariance matrices will be positive semi-definite. We fit the CatBoost regression model with the MutliRMSE loss function defined as: L^2 = 1/2m∑_j=1^2m (μ̂(x_j)-μ_j)^2 + + 1/2m∑_j=1^2m (L̂(x_j)-L_j)^2, where Σ̂^-1(x_i)=L̂(x_j)L̂(x_j)^T. The model consists of 5000 decision trees. The optimal hyperparameters values of the CatBoost regressor are estimated using a grid search for each storage system component. §.§ Normalizing flow The second model we apply in this work is based on Normalizing Flows (NF) <cit.>. Consider a sample with n measurements {x_i, y_i}_i=1^i=n for various data loads and configurations of a data storage component. Let us also define a latent random variable z with the standard normal distribution q(z) = 𝒩(0, I). The goal of the NF model is to learn an invertible transformation between the vector of measured IOPS and latency y_i into the latent variable z_i with the given x_i: z_i = f(y_i, x_i). 
The change of variable theorem determines the relation between the estimated performance p̂(y_i|x_i) and the latent variable q(z_i) distributions: p̂(y_i|x_i) = q(f(y_i, x_i)) | ∂ f(y_i, x_i)/∂ y_i|. In this work, we use the real-valued non-volume preserving (Real NVP) NF model <cit.>, where the function f(y_i, x_i) is designed using a chain of neural networks. The model is fitted by maximizing the log-likelihood function: L = 1/n∑_i=1^nlogp̂(y_i|x_i) →max_f. Predictions of the performance values are performed as ŷ_i=f^-1(z_i, x_i), where z_i is a randomly sampled vector from the q(z) distribution. An illustration of the NF model is shown in Figure <ref>. In this case, ŷ_i is sampled from the learned distribution p̂(y_i|x_i) of the IOPS and latency with the given data load and configuration parameter values. In our experiments, we use sixteen Real NVP transformations, where two simple 2-layer fully connected neural networks are exploited in each transformation. The model uses the Adam optimizer and the tanh activation function. It trains for 80 epochs with a batch size of 200 and a learning rate η = 10^-2 §.§ kNN model Consider a new vector x^* of inputs for which we need to predict IOPS and latency values. As previously, assume that our train sample contains 2m read and write data loads with k measurements in each. The total number of observations is n=2mk {x_i, y_i}_i=0^i=n. All these data loads are described by 2m unique input vectors U={u_i}_j=1^j=2m. Then, for the new input x^* the k Nearest Neighbours (kNN) algorithm searches for the nearest vector u^*: u^* = min_u_j ∈ U d(x^*, u_j), where d(x, z) is the distance between two input vectors. In this work, we used the Euclidean distance. The predictions are performed as follows. The model takes for the predictions all k observations y_i for which x_i=u^*: ŷ^* = {y_i | x_i = u^*}. In other words, the model finds the closest known data load to the new one and takes its measured IOPS and latencies as predictions. We use this approach as a baseline in the study. This helps to estimate whether the CatBoost and the NF models based on machine learning algorithms provide better prediction results than the naive approach based on the kNN algorithm. § QUALITY METRICS Consider the measurements {x_i, y_i}_i=0^k for just read or write operations in one single data load. According to the data sets description, this sample contains 120 (60 for cache) measurements, where all x_i have the same input feature values, and y_i = (IOPS_i, Latency_i)^T is a vector of measured performance values. Also, suppose that we have predictions {x_i, ŷ_i}_i=0^k from one of the models for the same x_i. The goal is to estimate the discrepancy between the distributions of y_i and ŷ_i. The first quality metric we use is the Percentage Error of Mean (PEM), which describes the ability of the models to predict mean values of IOPS and latency for each data load: PEM = |μ̂ - μ/μ|× 100%, μ = 1/k∑_i=1^k y_i. Similarly, we use the Percentage Error of Standard deviation (PES), which describes the ability of the models to predict standard deviations of IOPS and latency values for each data load: PES = |σ̂ - σ/σ|× 100%, σ = √(1/k-1∑_i=1^k (y_i-μ)^2). The means (μ) and standard deviations (σ) of the measured IOPS and latencies for cache, SSD, and HDD pools under random and sequential data loads are shown in Figure <ref>. We also use two additional metrics to measure the distances between the distributions of y_i and ŷ_i. 
The first is the Fréchet distance (FD) <cit.>, which is widely used to estimate the quality of generative models <cit.>. We suppose that the vectors y_i and ŷ_i have 2D Gaussian distributions 𝒩(μ, Σ) and 𝒩(μ̂, Σ̂) respectively. Then, the FD is defined as: FD = μ -μ̂^2_2 + + tr(Σ + Σ̂ - 2(ΣΣ̂)^1/2 ). One more metric is the Maximum Mean Discrepancy (MMD) <cit.>, which is defined as: MMD = 1/k^2∑_i=1^k∑_j=1^k K(y_i, y_j) + +1/k^2∑_i=1^k∑_j=1^k K(ŷ_i, ŷ_j) - - 2/k^2∑_i=1^k∑_j=1^k K(y_i, ŷ_j), where K(u, v) = C(σ) exp(-|u - v|^2/2σ^2) is the Radial Basis Function (RBF) with normalization constant C(σ) and σ equals the median distance between the vectors in the combined sample y_i and ŷ_i. In our study, IOPS and latency have different scales which affect the calculation of the FD and MMD. To solve this problem, we fit the Standard Scaler <cit.> on the real observations {y_i}_i=1^k and apply it to transform the values of y_i and ŷ_i. Then, the scaled vectors are used for the FD and MMD estimation. We split data loads in each data set into train and test samples. Test one contains 100 random data loads with all their measurements {x_i, y_i}. For each model, we calculate the quality metrics on each individual data load. Then, we use the bootstrap technique to find their average and standard deviation over all loads in the test sample. § RESULTS In this section, we consider the experiments we have conducted to estimate the quality of the models[<https://github.com/HSE-LAMBDA/digital-twin>]. We fitted all models described above on the same train samples, made predictions on the same test samples, and calculated quality metrics from the previous section. The metrics values for the cache, SSD, and HDD pools are presented in Tables <ref>, <ref>, <ref>, and <ref>. The models in our study learn the conditional distributions of IOPS and latency. Their predictions are samples from these distributions. Each metric highlights different quality aspects of the prediction, which we detail below. Table <ref> provides the metrics values for the kNN, CatBoost, and NF models for the cache under random data loads. Figure <ref> shows an example of the predictions for a data load from the test sample. The PEM and PES metrics in the table show the estimation quality of the means and standard deviations, respectively, for IOPS and latency in each data load. The observed means and standard deviations are presented in Figure <ref>. The NF model demonstrates the best PEM values, where the prediction errors are about 4.1 % and 2.9 % for IOPS and latency, respectively. It is about 6 times smaller than the 25 % and 17.8 % errors for the kNN model. Similarly, CatBoost demonstrates the smallest PES values of 23.9 % and 24.1 % for IOPS and latency, respectively. The NF model has the most significant errors of 412 % and 299 % for IOPS and latency. Generally, the results show that the standard deviation estimation is a challenging task for all models in our study. But the NF predicts the widest distributions, as demonstrated in Figure <ref>. FD compares the mean values and the covariance matrices of the predictions and real observations. The metric value for the CatBoost and NF is approximately 15 and 18 times smaller than for the kNN, respectively but still quite large. This can be explained by the following two reasons. The first is the standard scaling transformation that we applied before the metric calculation. It was fitted on real observations and applied to the predictions as we described in the previous section. 
FD estimates the distance between two distributions on a scale, where the observed IOPS and latencies have zero means and standard deviations of 1. The second reason is that the cache is the fastest component in the data storage system. The standard deviations of its IOPS and latency measurements are relatively small compared to the mean values for other components of the system, as shown in Figure <ref>. So, the calculated distances between the observed and predicted distributions are larger for the cache than for other components. The MMD metric compares two distributions by calculating distances between pairs of points. The distances are divided by the median values, which makes the metric more robust to different scales of the distributions. The results show that the NF model has the best MMD value. Table <ref> presents the metrics values for the kNN, CatBoost, and NF models for the SSD pool of the system under random data loads. Figure <ref> shows an example of the predictions for a data load from the test sample. The results demonstrate that the CatBoost model is the best in terms of the FD, PEM, and PES metrics, and NF is the best in terms of the MMD metric. The CatBoost model predicts the mean of IOPS and latency values with errors of 8.9 % and 7.4 % respectively, compared with 38 % and 19.2 % for the kNN. The standard deviations of IOPS and latency are estimated with errors of 22 % and 25 % for CatBoost and 52 % and 40 % for the kNN model. Similarly to the cache, the NF model predicts wider distributions for SSD pools under random data loads as it is reflected in PES values. However, the model estimates the IOPS and latency distributions better in terms of the MMD metric. We explain this behavior as follows. NF learns wider distributions than other models. Despite these distributions having worse PES values, they cover the observations better than other models. Similarly, Table <ref> shows the metrics values for the SSD pool under sequential data loads. Figure <ref> shows an example of the predictions for a data load from the test sample. The results demonstrate that the CatBoost model is the best in terms of the FD, and PES metrics, and the NF is the best in terms of the MMD and PEM metrics. The best predictions of the mean of IOPS and latency values have errors of 10.2%, 10.7%, and 9.9%, 8.1% respectively for CatBoost and NF, compared to 31% and 42% for the kNN. The standard deviations of IOPS and latency are estimated with errors of 37% and 42% for CatBoost and 90% and 101% for the kNN model. Similarly to the cache, the NF model predicts wider distributions for SSD pools under sequential data loads as it is reflected by PES values. However, the model estimates the distributions better in terms of the MMD metric. Finally, Table <ref> provides the metrics values for the HDD pool of the system under sequential data loads. Figure <ref> shows an example of the predictions for a data load from the test sample. The results demonstrate that the CatBoost model is the best in terms of the FD, PEM, and PES metrics, and the NF is the best in terms of the MMD metric. The best predictions of the mean of IOPS and latency values have errors of 10.6% and 16.0% respectively for CatBoost, compared with 27% and 49% for kNN. The standard deviations of IOPS and latency are estimated with errors of 18% and 23% for CatBoost and 33% and 60% for the kNN model. The NF model predicts the widest distributions, as reflected in the PES values. 
However, it better estimates the distributions in terms of the MMD metric. Generally, the results demonstrate that the NF and CatBoost significantly outperform the kNN models on all data sets. The CatBoost model has better metrics values on all data sets. The NF shows the worst results for the standard deviation estimation, but the best values of the MMD metric. To verify the reliability of the models, we conducted an additional experiment. Relations between IOPS and latencies within the same data load are defined by Little's law <cit.>: Q × J = IOPS_read× Latency_read + + IOPS_write× Latency_write, where Q is the queue depth and J is the number of jobs. For each data load in test samples, we calculated the right part of this equation and compared it with the left part estimated from the load parameters. The results are provided in Figure <ref>. The plots demonstrate that Little's law is satisfied for the real observations in all data sets used in this study as well as for predictions of CatBoost and NF models. They learn the dependency in Equation <ref> directly from the data. Table <ref> provides Pearson's correlation coefficients between the measurements in Figure <ref>. It shows that the coefficients for real observations, and predictions of CatBoost and NF models are similar, and support the reliability of the models. The kNN demonstrates larger differences from Little's law than other models. Figure <ref> also shows that CatBoost and NF predictions for several data loads deviate significantly from the law. In such cases, the prediction errors are too large and we cannot trust them in decision-making. Therefore, Equation <ref> can be used to verify the predictions of the models and to filter the predictions that are too bad. § DISCUSSION The results in the previous section show that generative models are suitable for performance modeling tasks. They are able to simulate the performance of a data storage system and its components for the given data load and configuration of the system. The models learn the conditional distribution of IOPS and latency, which can be used to predict their average values, as well as to estimate the variance of the predictions, confidence intervals, and other useful statistics. Samples with high error values tend to be near the boundary of the training data space, where the model operates in an extrapolation regime. An example of this behavior in the HDD predictions is presented in Figure <ref>, where one test data sample is indicated by the crosshairs (dashed lines), which is zoomed in in Figure <ref>. The predictions for this particular sample are also shown in this plot. This sample demonstrates what we call model conservatism, which is an instance of a model's predictions being biased and shifted towards the training data in case the test data sample is the extrapolation regime. This figure represents the predictions being shifted toward the test samples, which shows that the model is consistent with the data on which it was trained. The same pattern of behavior is demonstrated in Figure <ref>. Here, the crosshairs are pointed on the test sample which is focused on in Figure <ref> together with the predictions. It is observed that the predictions are located close to the test samples with the latter being in the extrapolation regime. This is another example of the model conservatism. § CONCLUSION This work shows the results of the performance modeling of a data storage system using generative models. 
The outcomes support the following conclusions: * Generative models are reasonable alternatives to other methods for performance modeling studies. This approach is suitable for single devices and their combinations. * Both parametric and nonparametric models demonstrate similar prediction quality for mean IOPS and latency values, but the parametric model estimates the standard deviations better. * The models can be used to predict the performance of a data storage system and its components for the given data load and configuration parameters. * The task admits an unsupervised check of prediction reliability based on Little's law. The models we consider in this paper demonstrate this reliability. * We provide real measurements of IOPS and latency for the cache, SSD, and HDD pools of a data storage system. This data set can be used in future works in the field of data-driven performance modeling and studies related to conditional generative models, uncertainty estimation, and model reliability. Scripts of all our experiments in this work and all data sets are provided in the GitHub repository[<https://github.com/HSE-LAMBDA/digital-twin>]. § ACKNOWLEDGMENTS The publication was supported by the grant for research centers in the field of AI provided by the Analytical Center for the Government of the Russian Federation (ACRF) in accordance with the agreement on the provision of subsidies (identifier of the agreement 000000D730321P5Q0002) and the agreement with HSE University No. 70-2021-00139. The computation for this research was performed using the computational resources of HPC facilities at HSE University <cit.>. Abdalaziz R. Al-Maeeni is a Ph.D. student and a Junior Research Fellow at the Faculty of Computer Science, HSE University, Russia. His main research interests are interpretable generative models and inverse design. Contact him at al-maeeni@hse.ru. Aziz Temirkhanov is a Master's student at the Faculty of Computer Science, HSE University, Russia. His main research interests are generative models and their applications in the natural sciences. Artem Ryzhikov is a Ph.D. student and a Junior Research Fellow at the Faculty of Computer Science, HSE University, Russia. His research interests are in the area of machine learning and its applications to High-Energy Physics, with a focus on Deep Learning and Bayesian methods. Mikhail Hushchyn is a Senior Research Fellow at the Faculty of Computer Science, HSE University, Russia. His main research interests include the application of machine learning and artificial intelligence methods in the natural sciences and industry. He is the corresponding author. Contact him at mhushchyn@hse.ru.
http://arxiv.org/abs/2307.02815v1
20230706072005
Early stage of Erythrocyte Sedimentation Rate test: Fracture of a high-volume-fraction gel
[ "Thomas John", "Lars Kaestner", "Christian Wagner", "Alexis Darras" ]
cond-mat.soft
[ "cond-mat.soft" ]
Early stage of Erythrocyte Sedimentation Rate test: Fracture of a high-volume-fraction gel August 1, 2023 ================================ Erythrocyte Sedimentation Rate (ESR) is a clinical parameter used as a non-specific marker for inflammation, and recent studies have shown that it is linked to the collapse of the gel formed by red blood cells (RBCs) at physiological hematocrits (i.e. RBC volume fraction). Previous research has suggested that the delay time before the sedimentation process is related to the formation of fractures in the gel. Moreover, RBC gels present specific properties due to the anisotropic shape and flexibility of the RBCs. Namely, the onset of the collapse is reached earlier and the settling velocity of the gel increases with increasing attraction between the RBCs, while gels of spherical particles show the opposite trend. Here, we report experimental observations of the gel structure during this onset and suggest an equation modeling this initial process as fracturing of the gel. We demonstrate that this equation provides a model for the motion of the interface between blood plasma and the RBC gel, along the whole time span. We also observe that the increase in the attraction between the RBCs modifies the density of fractures in the gel, which explains why the gel displays a decrease in delay time when the aggregation energy between the RBCs increases. Our work uncovers the detailed physical mechanism underlying the ESR and provides insights into the fracture dynamics of an RBC gel. These results can improve the accuracy of clinical measurements. § INTRODUCTION The Erythrocyte Sedimentation Rate (ESR) is a blood test that measures how quickly red blood cells settle in a test tube, and has been used for centuries to diagnose and monitor inflammatory diseases <cit.>. It is a non-specific test that is sensitive to increases in fibrinogen and other plasma components <cit.>. Recent research has shown that it may also be useful in detecting abnormally-shaped red blood cells <cit.>. Despite its widespread use, the physical mechanisms governing the ESR are not yet fully understood. It has recently been demonstrated that the cause of this sedimentation is the gravitational collapse of the percolating network, also known as a gel, formed by the RBCs <cit.>. Similarly to colloidal gels, this collapse presents an initial delay time, during which no or negligible sedimentation is observed <cit.>. The origin of the sedimentation delay in colloidal gels is still debated; however, it is likely associated with gel aging and the development of cracks for fluid flow within the gel <cit.>. Surprisingly, and contrary to suspensions of colloidal hard spheres, an increase in attractive interactions between RBCs results in gel destabilization, leading to faster structural rearrangement and the appearance of cell-depleted cracks, so that the gel collapses faster <cit.>. This feature likely contributed to the establishment of the ESR as a medical tool, as a shorter delay time and an increased collapse velocity are additive for the typical medical read-out, which considers the average velocity of the interface during the first hour <cit.>. In this study, we conducted experiments at different length scales to investigate the dominant mechanism of the fracture process in RBC gels, and compare it to a theoretical model from prior literature <cit.>. We demonstrated that higher RBC aggregation energy results in more fractures in the gel. Moreover, we derived a new equation for the delay part of our previous model for the macroscopic interface velocity <cit.>. 
These fundamental findings can be used to extract more rigorous and reproducible parameters from erythrocyte sedimentation rate measurements in a clinical context <cit.>. § MICROSCOPIC EXPERIMENTS §.§ Microscopic scale observations of the fracture We performed experiments using light sheet microscopy (Z1, Zeiss, Jena, Germany) as described in a previous methodological publication <cit.>. This technique allows enough resolution to extract the velocity field of the RBCs in the obtained image sequences using Particle Image Velocimetry (PIV, through PIVLab <cit.>), but only probes a small part of the gel close to its lateral edge. More accurately, only a depth of approximately 100 μm on an area around 1 mm^2 could be observed, while the whole cylindrical sample has a height around 3 cm and a diameter of 1.6 mm. The area where the PIV could reliably be performed is even smaller, since absorption and diffraction of the laser light decrease the overall intensity of the picture around 700 μm from the border of the sample. As illustrated in Fig.<ref>, and displayed in Supplementary Movies S1 and S2, when repeating experiments with samples from the same suspension, we obtained qualitatively different behaviors of the velocity field, even though the global interface velocity is reproducible. Specifically, we noted instances where the gel fluidized within the field of view, while in other cases, it exhibited a cohesive behavior, resembling a solid translation. However, we also extracted a more global parameter, namely the velocity of the interface of the RBC gel. In order to accomplish this, we detect the position of the interface by locating the height with the strongest vertical intensity falling edge. This is achieved by averaging the vertical intensity over a horizontal width of 250 μm at each point to ensure both robust and accurate detection of the interface position. The average velocity reached by the interface is reproducible within experimental accuracy, which implies that the macroscopic dynamics of the entire sample is reproducible. Notably, velocities significantly higher than the interface velocity are observed within the gel when the velocity field is not homogeneous (Fig. <ref>(a-e)). However, when a nearly solid translation of the RBC gel is observed in the field of view (Fig. <ref>(b-f)), the peak velocity observed in the velocity probability density function matches the interface velocity. This suggests that the RBC gel undergoes partial fluidization from an initial streamer, but this fluidization does not spread throughout the entire sample. Consequently, the rest of the sample follows the overall compaction of the structure, which ultimately determines the surface velocity. In order to confirm and generalize these conclusions, we also performed observations at larger scales, as described in the next sections. §.§ Mesoscopic scale observation of the fracture To investigate the larger-scale structure of RBC gels, we utilized microscopy with infrared light transmission through thin samples. We followed a procedure similar to the one outlined in a previous paper, which used blue light <cit.>; however, we replaced the blue LED source with a halogen lamp (Nikon, LHS-H100P-1). The emitted light was filtered by an infrared long-pass filter with a cut-off wavelength of 950 nm (Neewer, IR950). Using infrared light provided us with higher transmission through RBCs, revealing greater detail in the structure than the blue light, which was less sensitive to the thickness of the sample. 
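As an illustration of the interface detection used for the light-sheet measurements above, a minimal Python sketch is given below. This is our own illustrative code, not the authors' implementation; the frame orientation and the pixel window corresponding to the 250 μm averaging width are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter1d

def detect_interface(frame, window_px=50):
    """Return the interface row (in pixels) of a grayscale frame.

    frame: 2D array with rows along the vertical (settling) direction.
    window_px: horizontal averaging window, assumed to correspond to ~250 μm.
    """
    # Horizontal moving average to make the vertical intensity profile robust to noise.
    smoothed = uniform_filter1d(frame.astype(float), size=window_px, axis=1)
    # The strongest falling edge is the most negative vertical intensity gradient.
    grad = np.gradient(smoothed, axis=0)
    return float(np.median(np.argmin(grad, axis=0)))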
We conducted experiments with an adjusted hematocrit of ϕ=0.45 and various dilutions of autologous plasma with serum. Serum can be considered as plasma without fibrinogen, as the coagulation cascade occurs prior to serum extraction. This method allowed us to dilute the fibrinogen content while retaining other plasma proteins, which effectively tunes the attractive forces between RBCs <cit.>. We observed significant differences in the RBC gel structure for relative concentrations of plasma in the liquid phase varying from 0.7 (e.g., 0.7 mL of plasma mixed with 0.3 mL of serum) to 1 (RBCs suspended in pure plasma). We were able to image an area of almost 1 cm×0.87 cm in the samples, with a total width × height × thickness of approximately 1.7 cm × 7 cm × 150 μm. As shown in Fig.<ref>(a-d) and Supplementary Movies S3 to S6, this setup revealed that during the initial stages of the sedimentation process, the RBC gel fractures, resulting in vertically oriented streamers at various horizontal intervals. To quantify the evolution of the characteristic distance D between the streamers, we computed the position of the first non-zero maximum of the horizontal auto-correlation of the image. This quantity shows strong fluctuations; however, after a transition time of approximately 4500 s, it exhibits a clear decreasing trend for all fibrinogen concentrations, see Fig. <ref>(e). With increasing fibrinogen concentrations, i.e. stronger RBC interactions, the distance D between the streamers decreased significantly, as shown in Fig.<ref>(f). In summary, our experimental observations demonstrated that the RBC gel is locally fluidized into streamers at the initial stage of its sedimentation, in a process similar to the observations reported in simulations of the onset of colloidal gel settling <cit.>. However, the fluidization of the structure never occurs over the whole sample. Eventually, the network of streamers stabilizes, i.e. the streamers stop forming or growing, and the RBC network undergoes a smoother reorganization, which can be described as the compression of a porous material <cit.>. §.§ Macroscopic scale measurements At larger scales, one observes the motion of a sharp interface between cell-free plasma and sedimenting RBCs. The average velocity of this interface over the first hour is actually the parameter measured to perform an ESR test. This measurement is typically done by assessing the position of the interface at the beginning of the experiments and after one hour of leaving the sample at rest. To complete our observations with macroscopic data, we conducted experiments similar to those in a previous study <cit.>, where we manipulated the concentration of fibrinogen by mixing serum and plasma for suspensions of RBCs, with the hematocrit held constant at ϕ=0.45. The experimental setup is depicted in Figure <ref>(a), with image analysis data presented in Figure <ref>(b,c). For a further detailed explanation of the picture post-processing, please refer to our earlier methodological publication <cit.>. The velocity points in Fig.<ref>(b) are extracted as follows. We first compute the finite difference between two successive height measurements (Fig.<ref>(c)), divided by the time interval of 60 s between two consecutive pictures. The resulting values are fitted by a smoothing spline. The smoothing spline is then evaluated at the time of the images to obtain the open circles in Fig.<ref>(b). 
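A minimal Python sketch of this velocity extraction is given below. It is our own illustrative code, not the authors' implementation; the arrays t and h of acquisition times and interface heights, as well as the smoothing factor of the spline, are assumptions.

import numpy as np
from scipy.interpolate import UnivariateSpline

def interface_velocity(t, h, smoothing=None):
    """Estimate dh/dt from interface heights h sampled at times t (Δt = 60 s here)."""
    # Finite differences between two successive height measurements.
    v_raw = np.diff(h) / np.diff(t)
    t_mid = 0.5 * (t[:-1] + t[1:])
    # Fit a smoothing spline to the noisy velocity estimates ...
    spline = UnivariateSpline(t_mid, v_raw, s=smoothing)
    # ... and evaluate it at the acquisition times of the images.
    return spline(t)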
§ MODELING EQUATIONS AND COMPARISON TO MACROSCOPIC MEASUREMENTS §.§ Equations of interface motion In a previous manuscript by Varga et al. <cit.>, the instability that causes the formation and spread of a fluidized portion of a settling gel, known as a 'streamer', was investigated both analytically and numerically. However, their model contains several geometrical constants and parameters that are difficult to estimate experimentally. In this paper, we present an approach for simplifying their model by using only the leading terms to derive a system of differential equations that can be fitted to experimental data with just two fit parameters. To ensure clarity, we will begin by defining the key parameters of the gel. The red blood cells (RBCs) in the gel have a characteristic radius of a ≈ 4 μm and experience surface attraction that is described by a potential well with a depth U of (10 to 20) k_BT and a width Δ between 10 and 100 nm <cit.>. The surrounding fluid has a viscosity of η ≈ 10^-3 Pa s <cit.>, and both the RBCs and fluid molecules have thermal energy k_BT. However, the density difference ρ between the cells and the fluid is 10 to 100 kg/m^3 <cit.>. Additionally, we will refer to the geometrical constants d_i introduced by Varga et al. <cit.> in their model. The growth of the streamer radius R over time t is determined by Eq. (2.10) in Varga et al. <cit.>: dR/dt = (K/ϕ^1/3) R^-1/3 e^(R/R^*), where K = (2 d_1 d_2/9) a^1/3 U e^(-U/(d_3 k_BT)) / (η Δ^2) and R^* = 2 k_BT/(d_4 ρ g a^2 Δ). From its definition, R^* can be described as an effective gravitational length. Assuming that all dimensionless geometrical constants d_i are of the order of unity, we can estimate K ∈ [10^-11 to 10^-7] m^4/3/s and R^* ∈ [10^-5 to 10^-4] m. We assume that the plasma flows mainly through the streamer, i.e. that the permeability κ of the undisturbed gel is negligible. This assumption is consistent with the fact that the initial velocity of the gel interface is below the experimental resolution. We also disregard the inverse Navier's slip length, λ, at the border of the streamers. Under these assumptions, Eq. (2.13) of Varga et al. <cit.> determines the average upward plasma velocity, ⟨u_f⟩, as ⟨u_f⟩ = K_2 R^4, with K_2 = (π/8) ρ g/(η D^2), where the distance D is assumed by Varga et al. to be equal to the cross-sectional length of the sample L. Note that this expression of K_2 is valid only if there is a single streamer, as observed in numerical simulations performed on a spatially limited system. However, we experimentally observed several streamers, as illustrated in Fig. <ref>(a-d). This implies that we should rather consider D to be the characteristic distance between two neighboring streamers at the interface of the gel, i.e. D^2 = L^2/N, with N the number of streamers distributed over the cross-sectional area of the sample. Assuming volume conservation, the velocity of the interface can then be expressed as dh/dt = -⟨u_f⟩ (1-ϕ)/ϕ = -K_2 R^4 (1-ϕ)/ϕ, where the average volume fraction ϕ of the gel depends on both the initial volume fraction ϕ_0 and height h_0, due to volume conservation of the RBCs. Specifically, ϕ can be expressed as ϕ = ϕ_0 h_0/h. Experimentally, we observed that the growth of the streamer radius R is limited over time. Intermediate-scale experiments (Fig. <ref>, Movies S3-6) indeed showed that the diameters of the streamers saturate, and their positions stabilize. 
The discrepancy could be attributed to one of the assumptions made in Varga et al.'s model <cit.>, which states that the volume fraction of particles inside the streamer remains similar to the bulk volume fraction at all times. This approximation is explicitly mentioned when they assess the number of particles in a streamer, and implicitly used when they assess the value of the flux of particles per unit area into and out of the open streamer, j_in and j_out (Eqs. (2.7) and (2.8) in <cit.>). However, both their numerical simulations and our experiments demonstrate that the volume fraction within the streamers decreases. Since both fluxes are proportional to 3ϕ/(4π a^3), they should vanish if the volume fraction ϕ inside the streamers decreases significantly, which explains why the streamers stabilize and the gel interface reaches a maximal velocity. As shown by Darras et al. <cit.>, once the maximal velocity and underlying structure of the gel are achieved, the collapse of the RBC gel can be modeled as a porous medium compressing under its own weight. The velocity of the interface is then described by dh/dt = -(ρ g a^2/(γη)) (ϕ_m-ϕ)^3/(ϕ(1-ϕ)), where γ is a dimensionless characteristic time of the system, and ϕ_m is the maximal volume fraction reached by the RBCs in the gel at the end of the sedimentation process. In summary, the interface velocity can be expressed as -dh/dt = min{ K_2 R(t)^4 (1-ϕ)/ϕ , (ρ g a^2/(γη)) (ϕ_m-ϕ)^3/(ϕ(1-ϕ)) }, with R(t) from Eq. (<ref>). This equation can be fitted to macroscopic experimental data, using in total four fit parameters K, K_2, γ and ϕ_m. §.§ Model fitting To numerically solve Eq. (<ref>), an initial value R(t=0) is required. However, the choice of this value has little effect on the time when dR/dt diverges as long as R_0 ≪ R^*. This is because analytical results from Varga et al. also predict a finite time divergence of dR/dt for R(0)=0 (see their Eq. (2.15) and Fig. 7 in <cit.>). Therefore, we used R(0) = 1 μm, which is the same order of magnitude as the holes observed in 2D percolating networks of RBCs <cit.>. We used R^* as a fit parameter, initially constrained in the range of [10^-5 to 10^-3] m based on estimations from d_4 = 1 and ρ ∈ [10 to 100] kg/m^3, which led to R^* ∈ [5×10^-6 to 5×10^-4] m. However, the choice of R^* in this interval did not have a significant influence on the sum of the square residuals of the initial fits we tried. The obtained values of R^* all fell within the range of (3.0±0.1)×10^-4 m. Therefore, we used R^* = 3×10^-4 m as a fixed parameter for all fits. As Eq. (<ref>) is highly non-linear, the success of the fit convergence depends on the initial guess of the other fit parameters. To simplify this problem, we first estimated the values of ϕ_m and γ using the previous approximation that h(t)=h_0 if t<t_0 <cit.>. These estimated values of ϕ_m and γ were then used as initial guesses for the fit of Eq. (<ref>). Additionally, we set the initial value of K_2 such that both time derivatives in Eq. (<ref>) were equal at R = 50 μm. This value roughly corresponds to the radius of the depleted areas observed for the full plasma sample (Fig.<ref>(a), Movie S3). The parameter K is calculated through its definition in Eq. (<ref>) with d_1 = d_2 = 1, Δ = 10^-8 m, and U = 15 k_BT, which yielded K ≈ 10^-9 m^4/3/s as the initial guess. The fits obtained using this protocol are in good agreement with measured data, as shown in Fig. <ref>(b,c). The parameter K_2 exhibits a significant trend as a function of the fibrinogen concentration, as illustrated in Fig. 
<ref>(d), while the values of K are almost constant within the range (7±2)×10^-10 m^4/3/s (see Supp. Fig. S1). The behaviors of γ and ϕ_m are consistent with the trends reported in <cit.>, see also Supp. Fig. S1. As expected, due to the decrease in the interdistance D between the streamers as the concentration of fibrinogen increases, we observe that K_2 ∝ D^-2 increases with increasing fibrinogen concentration. This change in the geometry of the gel structure is probably related to the change in pore size that static 2D networks of RBCs exhibit when their interaction energy is modified <cit.>. Indeed, a larger number of bigger pores in the initial network implies more probable seeding points for the fracture of the gel. The porosity of the collapsing gel therefore increases with increasing fibrinogen concentration. § IMPLICATIONS FOR CLINICAL ESR TESTS In current medical applications, only a two-time-point measurement is considered to estimate the average sedimentation velocity during the first hour of the sedimentation process. Among the current features of the standard ESR measurement, there is no lower bound for the normal range, and no correction as a function of the sample hematocrit is universally recognized <cit.>. Previous clinical studies have already highlighted that the lack of a lower bound is related to the fact that the maximum velocity is often reached after the first hour, when the measurement is done <cit.>. Since both time derivatives in Eq. (<ref>) have a different dependency on ϕ, it is now clear from our model that a simple scaling of the one-time-point measurement cannot be rigorously obtained. However, if one regularly records the position of the interface over a longer period of time (approx. 2 h, as already suggested in some protocols <cit.>), as is now easily enabled by automation, one can extract the maximum velocity |dh/dt|, which scales as |dh/dt| ∝ (ϕ_m-ϕ)^3/(ϕ(1-ϕ)) <cit.>. This more detailed analysis protocol could therefore provide a lower bound for the normal range of ESR, which could be used as a clinical tool to detect rare diseases, such as neuroacanthocytosis syndromes, which present a significantly slower ESR <cit.>. § CONCLUSIONS The experiments performed at various length scales have revealed that the initial collapse of the RBC gel is initiated by local instabilities that lead to the appearance of multiple streamers. The spatial distribution of these streamers depends on the interaction energy between the cells. This is consistent with earlier observations that higher aggregation between RBCs results in larger pore sizes, thereby increasing the possible seeding points for the emergence of streamers within the bulk. The decrease in the average distance between the streamers qualitatively explains the macroscopic characteristics of the collapse: with higher cell aggregation, more streamers appear and the gel collapses sooner. On a fundamental level, these results lead to a continuous model for the gravitational collapse of a gel with a delay time. We have successfully connected the microscopic rearrangements of the gel structure to the macroscopic velocity of the gel interface. It is worth noting that the physical peculiarities of RBC aggregates are crucial for the aforementioned mechanisms. Indeed, the decrease in delay time with higher aggregation energy is due to the geometry of the RBC aggregates, which therefore contributes to making them a unique suspension. 
Understanding these peculiarities is of immense significance, particularly in clinical contexts. The present results provide a systematic approach to extract detailed parameters of the erythrocyte sedimentation rate, which can lead to a more rigorous and precise description of the sedimentation velocity than the current clinical standard, which only considers the average sedimentation velocity during the first hour, obtained from a two-time-point measurement. § MATERIAL AND METHODS Blood sample collection and experiments were approved by the “Ärztekammer des Saarlandes”, ethics votum 51/18, and performed after informed consent was obtained according to the Declaration of Helsinki. Blood was collected in standard EDTA-anticoagulated tubes, as well as standard serum tubes. § ACKNOWLEDGMENTS This work was supported by the research unit FOR 2688 - Wa1336/12 of the German Research Foundation, and by the Marie Skłodowska-Curie grant agreement No. 860436—EVIDENCE. A.D. acknowledges funding by the Young Investigator Grant of the Saarland University. The authors gratefully acknowledge Prof. Paulo E. Arratia (University of Pennsylvania) for fruitful discussions on the manuscript.
http://arxiv.org/abs/2307.01798v1
20230704160818
Edge-aware Multi-task Network for Integrating Quantification Segmentation and Uncertainty Prediction of Liver Tumor on Multi-modality Non-contrast MRI
[ "Xiaojiao Xiao", "Qinmin Hu", "Guanghui Wang" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Department of Computer Science, Toronto Metropolitan University, Toronto, Canada Edge-aware Multi-task Network for Integrating Quantification Segmentation and Uncertainty Prediction of Liver Tumor on Multi-modality Non-contrast MRI Xiaojiao Xiao1Qinmin Hu1 Guanghui Wang1 Corresponding author: Guanghui Wang (wangcs@torontomu.ca) August 1, 2023 ====================================================================================================================================================== Simultaneous multi-index quantification, segmentation, and uncertainty estimation of liver tumors on multi-modality non-contrast magnetic resonance imaging (NCMRI) are crucial for accurate diagnosis. However, existing methods lack an effective mechanism for multi-modality NCMRI fusion and accurate boundary information capture, making these tasks challenging. To address these issues, this paper proposes a unified framework, namely edge-aware multi-task network (EaMtNet), to associate multi-index quantification, segmentation, and uncertainty of liver tumors on the multi-modality NCMRI. The EaMtNet employs two parallel CNN encoders and the Sobel filters to extract local features and edge maps, respectively. The newly designed edge-aware feature aggregation module (EaFA) is used for feature fusion and selection, making the network edge-aware by capturing long-range dependency between feature and edge maps. Multi-tasking leverages prediction discrepancy to estimate uncertainty and improve segmentation and quantification performance. Extensive experiments are performed on multi-modality NCMRI with 250 clinical subjects. The proposed model outperforms the state-of-the-art by a large margin, achieving a dice similarity coefficient of 90.01±1.23 and a mean absolute error of 2.72±0.58 mm for MD. The results demonstrate the potential of EaMtNet as a reliable clinical-aided tool for medical image analysis. § INTRODUCTION Simultaneous multi-index quantification (i.e., max diameter (MD), center point coordinates (X_o, Y_o), and Area), segmentation, and uncertainty prediction of liver tumor have essential significance for the prognosis and treatment of patients <cit.>. In clinical settings, segmentation and quantitation are manually performed by the clinicians through visually analyzing the contrast-enhanced MRI images (CEMRI) <cit.>. However, as shown in Fig.<ref>(b), Contrast-enhanced MRI (CEMRI) has the drawbacks of being toxic, expensive, and time-consuming due to the need for contrast agents (CA) to be injected <cit.>. Moreover, manually annotating medical images is a laborious and tedious process that requires human expertise, making it manpower-intensive, subjective, and prone to variation <cit.>. Therefore, it is desirable to provide a reliable and stable tool for simultaneous segmentation, quantification, and uncertainty analysis, without requiring the use of contrast agents, as shown in Fig.<ref>(a). Recently, an increasing number of works have been attempted on liver tumor segmentation or quantification <cit.>. As shown in Fig.<ref> (c), the work <cit.> attempted to use the T2FS for liver tumor segmentation, while it ignored the complementary information between multi-modality NCMRI of T2FS and DWI. In particular, there is evidence that diffusion-weighted imaging (DWI) helps to improve the detection sensitivity of focal lesions as these lesions typically have higher cell density and microstructure heterogeneity <cit.>. 
The study in <cit.> attempted to quantify the multi-index of liver tumors; however, the approach is limited to multi-phase CEMRI, which requires the injection of CA. In addition, all these works are limited to a single task and ignore the constraints and mutual promotion between multiple tasks. Available evidence suggests that uncertainty information regarding segmentation results is important as it guides clinical decisions and helps understand the reliability of the provided segmentation. However, current research on liver tumors tends to overlook this vital task. To the best of our knowledge, although many works focus on simultaneous quantification, segmentation, and uncertainty estimation in medical images (e.g., heart <cit.>, kidney <cit.>, polyp <cit.>), no attempt has been made to automatically perform these liver tumor tasks by integrating multi-modality NCMRI, due to the following challenges: (1) The lack of an effective multi-modality MRI fusion mechanism, because the imaging characteristics of T2FS and DWI differ significantly (i.e., T2FS is good at anatomical structure information while DWI is good at location information of lesions <cit.>). (2) The lack of a strategy for capturing accurate boundary information of liver tumors. Due to the lack of contrast agent injection, the boundary of the lesion may appear blurred or even invisible in a single NCMRI, making it challenging to accurately capture tumor boundaries <cit.>. (3) The lack of an associated multi-task framework, because segmentation and uncertainty involve pixel-level classification, whereas quantification involves image-level regression <cit.>. This makes it challenging to integrate and optimize the complementary information between the tasks. In this study, we propose an edge-aware multi-task network (EaMtNet) that integrates the multi-index quantification (i.e., center point, max-diameter (MD), and Area), segmentation, and uncertainty. Our basic assumption is that the model should capture the long-range dependency of features across modalities and enhance the boundary information for quantification, segmentation, and uncertainty of liver tumors. The two parallel CNN encoders first extract local feature maps of multi-modality NCMRI. Meanwhile, to enhance the weight of tumor boundary information, the Sobel filters are employed to extract edge maps that are fed into the edge-aware feature aggregation (EaFA) as prior knowledge. Then, the EaFA module is designed to select and fuse the multi-modality information, making our EaMtNet edge-aware by capturing the long-range dependency of feature maps and edge maps. Lastly, the proposed method estimates segmentation, uncertainty prediction, and multi-index quantification simultaneously by combining multi-task learning and a cross-task joint loss. The contributions of this work mainly include: (1) For the first time, multi-index quantification, segmentation, and uncertainty of the liver tumor on multi-modality NCMRI are achieved simultaneously, providing a time-saving, reliable, and stable clinical tool. (2) The edge information extracted by the Sobel filter enhances the weight of the tumor boundary by being combined with the local features as prior knowledge. (3) The novel EaFA module makes our EaMtNet edge-aware by capturing the long-range dependency of feature maps and edge maps for feature fusion. The source code will be available on the author's website. 
§ METHOD The EaMtNet employs an innovative approach for simultaneous tumor multi-index quantification, segmentation, and uncertainty prediction on multi-modality NCMRI. As shown in Fig.<ref>, the EaMtNet takes multi-modality NCMRI (T2FS and DWI) as input to capture features and outputs the multi-index quantification, segmentation, and uncertainty. Specifically, the proposed approach mainly consists of three steps: 1) the CNN encoders for capturing feature maps and the Sobel filters for extracting edge maps (Section 2.1); 2) the edge-aware feature aggregation (EaFA) for multi-modality feature selection and fusion via capturing long-range dependencies (Section 2.2); and 3) the multi-task prediction module (Section 2.3). §.§ CNN encoder for feature extraction In Step 1 of Fig. <ref>, the multi-modality NCMRI (i.e., χ_T2^i ∈ R^H×W, χ_DWI^i ∈ R^H×W) are fed into two parallel encoders and the Sobel filter to extract the feature maps (i.e., g_T2^i ∈ R^H×W×N, g_DWI^i ∈ R^H×W×N) and the corresponding edge maps (i.e., edge_T2^i ∈ R^H×W, edge_DWI^i ∈ R^H×W), respectively. Specifically, EaMtNet employs UNet as the backbone for segmentation because the CNN encoder has excellent capabilities in extracting local, low-level semantic information <cit.>. The two parallel CNN encoders have the same architecture, where each encoder contains three shallow convolutional network blocks to capture features of adjacent slices. Each conv block consists of a convolutional layer, batch normalization, ReLU, and non-overlapping subsampling. At the same time, EaMtNet utilizes the boundary information extracted by the Sobel filter <cit.> as prior knowledge to enhance the weight of tumor edge information and increase the awareness of the boundary. §.§ Edge-aware feature aggregation (EaFA) for multi-modality feature selection and fusion In Step 2 of the proposed model, the feature maps (i.e., g_T2^i, g_DWI^i) and the edge maps (i.e., edge_T2^i, edge_DWI^i) are fed into EaFA for multi-modality feature fusion with edge awareness. In particular, the EaFA makes the EaMtNet edge-aware by using the Transformer to capture the long-range dependency of feature maps and edge maps. Specifically, the feature maps and edge maps are first flattened into 1D sequences X_1D ∈ R^N×P^2 and E_1D ∈ R^2×Q^2, respectively, where N = 2×C and C is the channel number of the last convolutional layer of each of the two parallel encoders. (P, P) and (Q, Q) represent the resolution of each feature map and each edge map, respectively. On the basis of the 1D sequences, to perform the feature fusion with edge awareness, position encoding is applied not only to the feature maps but also to the edge maps. The resulting embeddings Z ∈ R^N×P^2+2×Q^2 serve as the input sequence for the multi-head attention layers of the Transformer. The following operations in our EaFA are similar to the traditional Transformer <cit.>. After three cascaded Transformer layers, the EaFA yields the fused feature vector F for multi-task prediction. The specific computation of the self-attention matrix and multi-head attention are defined below <cit.>: Attention(𝒬, 𝒦, 𝒱) = softmax(𝒬𝒦^𝒯/√(d_k)) 𝒱, MultiHead(𝒬, 𝒦, 𝒱) = Concat(head_1,...,head_h) 𝒲^𝒪, head_i = Attention(𝒬𝒲^𝒬_i, 𝒦𝒲^𝒦_i, 𝒱𝒲^𝒱_i), where query 𝒬, key 𝒦, and value 𝒱 are all vectors of the flattened 1D sequences of X_1D and E_1D. 𝒲^𝒪 is the output projection matrix, and 1/√(d_k) is the scaling factor. §.§ Multi-task prediction In Step 3 of Fig. 
<ref>, the EaMtNet outputs the multi-modality quantification ŷ_Q (i.e., MD, X_o, Y_o and Area), the segmentation result ŷ_s, and the uncertainty map û_i. Specifically, for the quantification path, ŷ_Q is directly obtained by applying a linear layer to the feature F from EaFA. For the segmentation and uncertainty path, the output feature F from EaFA is first reshaped into a 2D feature map F^out. Then, to scale up to higher-resolution images, a 1×1 convolution layer is employed to change the channel number of F^out before feeding it into the decoder. After upsampling by the CNN decoder, EaMtNet predicts the segmentation result ŷ_s of size H×W and the uncertainty map û_i of size H×W. The CNN decoder contains three shallow deconv blocks, each consisting of a deconv layer, batch normalization, and ReLU. Inspired by <cit.>, we select the entropy map as our uncertainty measure. Given the prediction probability after softmax, the entropy map is computed as follows: H[x] = -∑_i=1^K z_i(x) log_2(z_i(x)), where z_i is the probability of pixel x belonging to category i. When a pixel has high entropy, it means that the network is uncertain about its classification. Therefore, pixels with high entropy are more likely to be misclassified. In other words, the entropy decreases when the network is confident about a pixel's label. Under the constraints of uncertainty, the EaMtNet can effectively rectify the errors in tumor segmentation because the uncertainty estimation can avoid overconfidence and erroneous quantification <cit.>. Moreover, the EaMtNet represents the different tasks in a unified framework, leading to beneficial interactions. Thus, the quantification performance is improved through back-propagation by the joint loss function L_multi-task. The function comprises a segmentation loss L_seg and a quantification loss L_qua, where the loss function L_seg is utilized for optimizing tumor segmentation, and L_qua is utilized for optimizing the multi-index quantification. They can be defined as: L_Dice = 2∑_i^S y_i ŷ_s / (∑_i^S y_i^2 + ∑_i^S ŷ_s^2) and L_qua(ŷ_task^i, y_task^i) = ∑_i |y_task^i - ŷ_task^i|, where ŷ_s represents the prediction, and y_i represents the ground truth label. The sum is performed over S pixels, ŷ_task^i represents the predicted multi-index value, and y_task^i represents the ground truth multi-index value, task ∈ {MD, X, Y, Area}. § EXPERIMENTAL RESULTS AND DISCUSSION For the first time, EaMtNet has achieved high performance with the Dice similarity coefficient (DSC) up to 90.01±1.23%, and the mean absolute errors (MAE) of MD, X_o, Y_o, and Area down to 2.72±0.58 mm, 1.87±0.76 mm, 2.14±0.93 mm, and 15.76±8.02 cm^2, respectively. §.§.§ Dataset and configuration. An axial dataset of 250 distinct subjects was collected; each subject underwent an initial standard clinical liver MRI protocol examination with the corresponding pre-contrast images (T2FS [4 mm] and DWI [4 mm]). The ground truth was reviewed by two abdominal radiologists with 10 and 22 years of experience in liver imaging, respectively. If any interpretations demonstrated discrepancies between the reviewers, they would re-evaluate the examinations together and reach a consensus. To align the paired T2 and DWI images produced at different times, we set the T2 as the target image and the DWI as the source image and perform the pre-processing of non-rigid registration between T2 and DWI using the Demons non-rigid registration method. This method has been widely used in the field of medical image registration since it was proposed by Thirion <cit.>. 
We perform the Demons non-rigid registration with the open-source toolbox DIRART in Matlab 2017b. Inspired by the work <cit.>, we set the scaling factor d_k to 64 in equation (1). All experiments were assessed with a 5-fold cross-validation test. To quantitatively evaluate the segmentation results, we calculated the Dice similarity coefficient (DSC), which measures the overlap between the segmentation prediction and the ground truth <cit.>. To quantitatively evaluate the quantification results, we calculated the mean absolute error (MAE). Our EaMtNet was implemented on the Ubuntu 18.04 platform using Python v3.6 and PyTorch v0.4.0, running on two NVIDIA GTX 3090Ti GPUs. §.§.§ Accurate segmentation. The segmentation performance of EaMtNet has been validated and compared with three state-of-the-art (SOTA) segmentation methods (TransUNet <cit.>, UNet <cit.>, and UNet++ <cit.>). Furthermore, to ensure consistency in input modality, the channel number of the first convolution layer in the three comparison methods is set to 2. Visual examples of liver tumors are shown in Fig. <ref>; it is evident that our proposed EaMtNet outperforms the three SOTA methods. Quantitative analysis results are shown in Tab. <ref> and Tab. <ref>; our network achieves high performance with a DSC of 90.01±1.23% (5.39% higher than the second-best). The results demonstrate that edge awareness, multi-modality fusion, and uncertainty prediction are essential for segmentation. §.§.§ Ablation study. To verify the contributions of the edge-aware feature aggregation (EaFA) and uncertainty, we performed an ablation study and compared the performance of different network variants. First, we removed the EaFA and used simple concatenation instead, i.e., we removed the multi-modality fusion (No-EaFA). Then, we removed the uncertainty task (No-Uncertainty). The quantitative analysis results of these ablation studies are shown in Tab. <ref>. Our method exhibits high performance in both segmentation and quantification, indicating that each component of the EaMtNet plays a vital role in liver tumor segmentation and quantification. §.§.§ Performance comparison with state-of-the-art. The EaMtNet has been validated and compared with three SOTA segmentation methods and two SOTA quantification methods (i.e., ResNet-50 <cit.> and DenseNet <cit.>). Furthermore, the channel number of the first convolution layer in the two quantification comparison methods is set to 2 to ensure the consistency of input modalities. The visual segmentation results are shown in Fig. 3. Moreover, the quantitative results (as shown in Tab. <ref>) corresponding to the visualization results (i.e., Fig. <ref>) obtained from the existing experiments further demonstrate that our method outperforms the three SOTA methods. Specifically, compared with the second-best approach, the DSC is boosted from 84.62±1.45% to 90.01±1.23%. The quantitative analysis results are shown in Tab. <ref>. It is evident that our method outperforms the two SOTA methods by a large margin in all metrics, owing to the proposed multi-modality fusion and multi-task association. 
Additionally, the multi-task design leverages the prediction discrepancy to estimate uncertainty, thereby improving segmentation and quantification performance. Extensive experiments have demonstrated that the proposed model outperforms the SOTA methods in terms of DSC and MAE, with great potential to be a diagnostic tool for doctors. § ACKNOWLEDGEMENTS. This work is partly supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the TMU FOS Postdoctoral Fellowship.
http://arxiv.org/abs/2307.04648v1
20230706154205
Can ChatGPT's Responses Boost Traditional Natural Language Processing?
[ "Mostafa M. Amin", "Erik Cambria", "Björn W. Schuller" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Department: Affective Computing and Sentiment Analysis Editor: Erik Cambria, Nanyang Technological University University of Augsburg; SyncPilot GmbH Nanyang Technological University University of Augsburg; Imperial College London The employment of foundation models is steadily expanding, especially with the launch of ChatGPT and the release of other foundation models. These models have shown the potential of emerging capabilities to solve problems, without being particularly trained to solve. A previous work demonstrated these emerging capabilities in affective computing tasks; the performance quality was similar to traditional Natural Language Processing (NLP) techniques, but falling short of specialised trained models, like fine-tuning of the RoBERTa language model. In this work, we extend this by exploring if ChatGPT has novel knowledge that would enhance existing specialised models when they are fused together. We achieve this by investigating the utility of verbose responses from ChatGPT about solving a downstream task, in addition to studying the utility of fusing that with existing NLP methods. The study is conducted on three affective computing problems, namely sentiment analysis, suicide tendency detection, and big-five personality assessment. The results conclude that ChatGPT has indeed novel knowledge that can improve existing NLP techniques by way of fusion, be it early or late fusion. Can ChatGPT's Responses Boost Traditional Natural Language Processing? Björn W. Schuller ====================================================================== With the recent rapid growth of foundation models <cit.> as large language models (LLMs) <cit.>, a potential has appeared for emerging capabilities <cit.> of such models to perform new downstream tasks or solve new problems, that they were not particularly trained on in the first place. This includes models like GPT-3.5 <cit.>, GPT-4 <cit.>, LLaMA <cit.>, and RoBERTa <cit.>. The capabilities of such foundation models are being explored in various domains, like affective computing <cit.>, Neural Machine Translation (NMT) <cit.>, agents playing games <cit.>, sentiment analysis <cit.>, and general artificial intelligence <cit.>. The phenomenon of emerging capabilities of LLMs <cit.> was more pronounced with the utilisation of fine-tuning techniques like Reinforcement Learning with Human Feedback (RLHF), as it was employed in InstructGPT <cit.>, which was later included in GPT-3.5 and GPT-4 models, the main underlying models of ChatGPT. In a previous study <cit.>, we studied the emerging capabilities of ChatGPT to solve affective computing problems, as compared to specialised models trained on a particular problem. The study has indeed shown the emergence of such capabilities in affective computing problems (<cit.>) like sentiment analysis, suicide tendency detection, and personality traits assessment. The performance was comparable to classical Natural Language Processing (NLP) models like Word2Vec <cit.>, or Bag-of-Words (BoW) <cit.>, but not better than fine-tuned LLMs like RoBERTa <cit.>. Another issue that was encountered was parsing the results from the responses of ChatGPT, since it frequently formatted the responses differently despite being prompted to respond with a specific format. 
The aforementioned conclusions had a follow up question, whether foundation models contain novel knowledge that is not acquired by specialised training of NLP models, hence leading to better results in the scenarios when fusing foundation models with specialised models. We mainly investigate this question in this study. The contributions of this paper are as follows: * We introduce how to prompt ChatGPT to give verbose responses that solve affective computing problems, we demonstrate this in sentiment analysis, suicide and depression detection, and big-five personality traits assessment. * We present the utility of employing the verbose responses of ChatGPT when they are processed with traditional NLP techniques. * We introduce how to fuse ChatGPT with existing NLP methods for affective computing, and investigate their different combinations with different fusion methods. The remainder of the paper is organised as follows: in the next section, we discuss related work; then, we introduce our method; afterwards, we present and discuss the results; finally, we propose concluding remarks. Related Work We focus on related work within the area of foundation models in affective-computing-related tasks (in the text domain) or hybrid formulations between foundation models and traditional NLP methods. Both <cit.> explore a fusion between ChatGPT and other transformer-based models for Named Entity Recognition (NER). All of <cit.> investigate the capabilities of ChatGPT on various NLP tasks including affective computing tasks like sentiment analysis or emotion recognition, and others like NER and text summarisation. <cit.> investigates the performance of ChatGPT in several in sentiment analysis and aspect extraction. Method Our method consists of the following components: * Prompting ChatGPT to estimate an affective answer about a given input example, thus having two texts representing a given example, namely the original text and the corresponding response of ChatGPT. * Process any of the two texts via traditional NLP techniques to represent them as static features vectors; we adopt RoBERTa features extracted by the RoBERTa-base LLM <cit.> or normalised BoW count vectors <cit.>. * Train classical machine learning models on these features either by applying early fusion, by concatenating the features then training, or late fusion by training two models and averaging their prediction probabilities. In this section, we present first the datasets for the different affective computing problems. Afterwards, we introduce the prompting of ChatGPT, then the methods for extracting features. Subsequently, we present how we train and tune the machine learning models. Finally, we present a simple baseline based on ChatGPT responses. The pipeline of our method is presented in Figure <ref>. §.§ Datasets We present here the adopted datasets for the three affective computing problems. A summary of their statistics is in Table <ref>. §.§.§ Sentiment Dataset We make use of the Twitter Sentiment140 dataset <cit.> for sentiment analysis.[We acquired the dataset from <https://huggingface.co/datasets/sentiment140>, on 09.02.2023.] The dataset consists of tweets that were collected from Twitter. Tweets are generally very noisy texts. The dataset consists of tweets and the corresponding binary sentiment labels (positive, or negative). The original dataset consists of 1,600,000 Tweets, however, we filtered these down into a total of 28,000 examples. 
[<https://github.com/mostafa-mahmoud/chat-gpt-fusion-evaluation>] We do not make use of the original Test portion in the dataset, since it consists of only 497 Tweets, and it also contains a `neutral' label unlike the rest of the dataset. We split the original training portion into three parts as shown in Table <ref>. §.§.§ Suicide and Depression Dataset The Suicide and Depression dataset <cit.> was gathered from the platform Reddit. The posts were collected under different categories (subreddits), namely “depression”, “SuicideWatch”, and “teenagers”.[We acquired the dataset on 28.01.2023 from <https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch>] The `non-suicide' label was given to the posts from the “teenagers” category, while the remaining texts were given the label `suicide'. After excluding examples longer than 512 characters and downsampling the dataset, we acquired a dataset of size 16,266, which we divide into three portions, Train, Dev, and Test, as shown in Table <ref>, since the original dataset was not split. §.§.§ Personality Dataset We make use of the First Impressions (FI) dataset (<cit.>) for the personality task[We acquired the dataset on 03.02.2023 from <https://chalearnlap.cvc.uab.cat/dataset/24/description/>]. Personality is represented by the big-five (OCEAN) traits, namely Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. The dataset was gathered by collecting videos from YouTube and slicing them into 15-second clips with one speaker. The personality labels were collected through crowdsourcing using Amazon Mechanical Turk (AMT), by making pair comparisons between different videos. Each personality trait is represented by a continuous regression value within [0, 1]. In our setup, we utilise only the text modality of the entire FI dataset, with its provided split; the texts originate from the transcriptions of the videos. We train regression models (employing Mean Absolute Error as the loss function) <cit.>, because the continuous labels give a granular estimation of the personality traits. For evaluation, we interpret the predicted regression labels as the probability of the positive class, which is equivalent to binarising the labels with the threshold 0.5. §.§ ChatGPT Prompts To obtain the ChatGPT text modalities, we need to formulate a prompt that asks ChatGPT for a reasonable answer. We formulate a prompt for each specific problem to ask it about the label. First, we design the prompt to ask for a binary label of the corresponding problem, emphasising narrowing down the answer to only two labels and excluding more `neutral' labels. Similar to a previous work <cit.>, we design the prompts to have the disclaimer It does not have to be fully correct, and ask what is your guess for the answer, instead of What is the answer or Can you guess the answer. This formulation prevents ChatGPT from responding that it is not sure about the answer and hence not giving any answer. Unlike <cit.>, we ask ChatGPT to be verbose and explain the reasoning behind the answer, since we are processing that with NLP methods (unlike <cit.>, where the final label was parsed). A last sentence is added to avoid a redundant disclaimer in the response of ChatGPT. We make use of the OpenAI API to use ChatGPT[<https://platform.openai.com/docs/guides/gpt/chat-completions-api>], using the model `gpt-3.5-turbo-0301'. 
We do not give a system message, we just use the prompt corresponding to the specific problem as the only user message in the input conversation, with the input text of the example. The assistant response is what we use as the response of ChatGPT. We use the default parameters for generation, namely the answer with highest score (n=1), and the temperature parameter T=1.0. The prompts for the given problems are given below, by substituting the input {text}. For the personality traits, we query the API five times for each of the five traits by substituting the {trait}. * The prompt for the sentiment classification: What is your guess for the sentiment of the text “{text}”? Answer positive or negative, but not neutral. Try to narrow down the answer to be one of those two. It does not have to be fully correct. Explain your answer briefly. Do not show any warning after. * The prompt for the suicide detection: What is your guess, is a person saying the text “{text}” has suicide tendencies? Answer yes or no. It does not have to be fully correct. Explain your answer briefly. Do not show any warning after. * The prompt for the personality traits: What is your guess for the personality trait “{trait}”, from the big-five personality traits, of someone who said “{text}”? Answer low or high, but not neutral. Try to narrow down the answer to low or high. It does not have to be fully correct. Explain your answer briefly. Do not show any warning after. Regarding the number of tokens, a token on average gives 4.3 characters. There is an overhead of 8 tokens that gets added for each call to the API. The prompts using an empty input string for `{text}' consist of 63, 50, 75 tokens, for the sentiment, suicide, and personality prompts, respectively. Processing a prompt of total T tokens (system, prompted input and output) took an average of 0.038 T + 1.32 sec. §.§ Text Features In order to process the text, we need to extract features from it. We employ two ways to extract features, one via the LLM RoBERTa <cit.>, and n-gram BoW. §.§.§ RoBERTa Language Model The RoBERTa <cit.> feature set is obtained by the pretrained LLM RoBERTa, which is based on the BERT model with a transformer architecture. The model has two variants; we utilise the smaller variant, namely RoBERTa-base[Acquired on 09.02.2023 from <https://huggingface.co/docs/transformers/model_doc/roberta>]. The model was trained on large datasets with reddit posts and English Wikipedia, and English news <cit.>. In order to extract the embedding for a string, it is first encoded with a subword encoder then fed to the RoBERTa model to give a sequential set of features with attention weights. These are reduced through a pooling layer in the model to produce the final static vector of 768 features representing the given string. §.§.§ Bag of Words The BoW feature set is achieved by constructing n-grams, and then using the classical term-frequency inverse-document-frequency (TF-IDF) to count each term while normalising them by the frequency across all documents <cit.>. For the input texts, we keep only the most common 10,000 words (1-grams), to give a static vector of 10,000 features representing the text. For the responses of ChatGPT, we utilise the most common 2,000 n-grams (n ∈{1,2,3}). The vectors are scaled by the maximum absolute values to be within the range [-1,1]. The reason we utilise n-grams for ChatGPT responses is that, it is common that ChatGPT would give prediction expressions like `high extraversion', or `sentiment is negative'. 
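As a concrete illustration of this feature extraction, a minimal scikit-learn sketch is given below. It is our own illustrative code — the function and variable names are assumptions; only the vocabulary sizes, n-gram ranges, and max-absolute scaling stated above come from the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MaxAbsScaler

def build_bow_features(train_texts, train_responses):
    """TF-IDF BoW features for the original texts and the ChatGPT responses."""
    # 10,000 most common words (1-grams) for the original input texts.
    text_vec = TfidfVectorizer(max_features=10_000, ngram_range=(1, 1))
    # 2,000 most common 1- to 3-grams for the ChatGPT responses.
    chat_vec = TfidfVectorizer(max_features=2_000, ngram_range=(1, 3))
    X_text = text_vec.fit_transform(train_texts)
    X_chat = chat_vec.fit_transform(train_responses)
    # Scale by the maximum absolute value so that features lie in [-1, 1].
    X_text = MaxAbsScaler().fit_transform(X_text)
    X_chat = MaxAbsScaler().fit_transform(X_chat)
    return X_text, X_chat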
§.§ Models and tuning Given a feature set (or a fusion thereof), we train a Multi-Layer Perceptron (MLP) <cit.> to predict the final label. We opt for MLPs because preliminary experiments showed that they performed slightly better than Support Vector Machines (SVMs) <cit.> for the given tasks. We construct an MLP with N hidden layers and U units in the first hidden layer; each following hidden layer has half as many neurons as the layer preceding it (capped below at 32 units); an illustrative sketch is given below, before the discussion. ReLU is the activation function used for all layers, except for the final layer, where we apply a sigmoid to predict the final label within the range [0,1]. We leverage Adam <cit.> as the optimisation algorithm, with a learning rate α. The loss function is either Mean Absolute Error (MAE) for regression training (for personality), or negative log likelihood for classification training. We employ the hyperparameter optimisation toolkit SMAC <cit.> to select the best hyperparameters for each problem/dataset and each input modality (or early fusion combinations thereof). We explore 20 hyperparameter samples for each problem. The hyperparameter space has N ∈ [0, 3], U ∈ [64, 512] (log-sampled), and α∈ [10^-6, 10] (log-sampled). §.§ Fusion We deploy early fusion by concatenating the features extracted by RoBERTa or BoW, and then training one MLP on the concatenated vector, in the same way as for a single modality. Late fusion, on the other hand, is achieved by averaging the probabilities predicted by the given methods. §.§ Baseline We employ a simple baseline based on the responses of ChatGPT. In the prompts, we instruct ChatGPT to give a binary label before explaining the answer; hence, we construct the baseline to predict a label only if the word corresponding to that class is present in the response. For sentiment analysis, the baseline would predict `positive' only if the response of ChatGPT contains the word `positive', and it would predict `negative' only if the response contains the word `negative'. For suicide detection, the two classification keywords are `yes' and `no'. For personality, the two keywords become `high' and `low'. We exclude from the evaluation the responses that include both words or neither, which is roughly only 5% of the Test sets in our experiments. The intuition behind this baseline is that it is similar to parsing the labels from the non-verbose response. Results We experiment with the combinations of three main parameters: the text to be used, the features extracted to represent the text, and how to fuse them. The texts are either the original text or the corresponding response from ChatGPT. The features are either embeddings obtained by RoBERTa, or normalised count vectors constructed by a simple n-gram BoW approach. The fusion of the models is done either (early) on the feature level, or (late) on the prediction level by averaging the probabilities of the classes. We also include the baseline results. The main results of the experiments are shown in Table <ref>, where we evaluate classification accuracy and Unweighted Average Recall (UAR), which is the unweighted average of the accuracy of classifying each class separately <cit.>. Finally, we refer to the combination of input text (original input or ChatGPT response thereof) and NLP processing technique as a modality.
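As a concrete illustration of the models just described (not the authors' implementation; PyTorch is assumed and all names are ours), the MLP construction and the late-fusion rule can be sketched as follows.

# Illustrative sketch (PyTorch assumed, not the implementation of this work): MLP with
# halving hidden widths, and late fusion by averaging predicted probabilities.
import torch
import torch.nn as nn

def build_mlp(in_dim, n_hidden, first_units):
    # n_hidden in [0, 3]; the first hidden layer has first_units units, each following
    # hidden layer has half as many (at least 32); ReLU inside, sigmoid output in [0, 1].
    layers, width, units = [], in_dim, first_units
    for _ in range(n_hidden):
        layers += [nn.Linear(width, units), nn.ReLU()]
        width, units = units, max(units // 2, 32)
    layers += [nn.Linear(width, 1), nn.Sigmoid()]
    return nn.Sequential(*layers)

model = build_mlp(in_dim=768, n_hidden=2, first_units=256)  # e.g. on RoBERTa features
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate alpha
loss_fn = nn.L1Loss()   # MAE for the personality regression; nn.BCELoss() for the rest

def late_fusion(per_modality_probabilities):
    # average the class probabilities predicted by the individual modalities
    return torch.stack(per_modality_probabilities).mean(dim=0)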
§.§ Discussion First of all, the two metrics agree widely on the relative performance of a specific model on a specific problem: the relative order of the models on a given problem is roughly the same for both metrics. The results of utilising the original text (for each of the single modalities Text+RoBERTa and Text+BoW) are close to previous work <cit.>, with a slight difference due to the different sampling from the original datasets. The results of the single modality ChatGPT+RoBERTa are decent, comparable to the single modality Text+BoW, but worse than Text+RoBERTa in most cases except for sentiment analysis. The results of ChatGPT+BoW are slightly worse than ChatGPT+RoBERTa. In a similar fashion, these ChatGPT results resemble the previous work <cit.>, where ChatGPT was comparable to the Text+BoW modality. Furthermore, the aggregate performance across problems is also similar to <cit.>, where ChatGPT was strongest in sentiment analysis, whilst weakest in personality assessment. The fusion results indicate that the most competent fusion combination adopts only Text+RoBERTa and ChatGPT+RoBERTa, whether in early or late fusion; the early fusion of these two modalities shows the best performance in most scenarios, except for sentiment analysis. Setting aside this specific combination of two modalities, late fusion performs better than the corresponding instances of early fusion for most of the other modality combinations. For instance, the late fusion of all modalities is better than their early fusion; similarly for the combination of Text+RoBERTa and Text+BoW. Consequently, the impact of fusion is overall not straightforward to explain, because the single modality Text+RoBERTa is the best for personality assessment, while the early fusion of Text+RoBERTa and ChatGPT+RoBERTa is the best for suicide detection, and the late fusion of all modalities is the best for sentiment analysis. The superiority of the single modality in personality assessment is probably due to the poor performance of ChatGPT on the given texts, since the ChatGPT single modalities are the worst ones there. On the other hand, if ChatGPT performs decently, then applying fusion definitely has a strong positive impact, be it early or late fusion. However, the superiority of late fusion over early fusion depends primarily on the problem and the data distribution. A practical advantage of early fusion is that it needs hyperparameter tuning only once, whereas late fusion needs to tune a model for each modality. On the other hand, late fusion has the architectural advantage that it can deploy a different training size for each modality. For instance, it is possible to train Text+RoBERTa with a much larger dataset, while training ChatGPT+RoBERTa on a smaller one; we will evaluate this in future work. In the previous work <cit.>, the ChatGPT results were labels parsed from the non-verbose responses (typically, a binary label like `low' or `high', with some variance in the formatting), whereas in this work we process the verbose response by applying NLP methods. The effectiveness of employing the verbose responses is demonstrated by the baseline approach, where the results of the single ChatGPT modalities are close to the baseline.
The verbose responses (compared to the non-verbose ChatGPT baseline) lead to better results for both sentiment analysis and personality assessment, but with some drop in suicide detection. The verbose responses have the additional advantage of avoiding the problem of parsing the label from the response of ChatGPT, since the responses (including the non-verbose ones) do not always follow the same format despite being prompted to <cit.>. The last obvious advantage of verbose responses is the ability to include them in fusion models in various ways, which can lead to a much better performance, as discussed earlier. In summary, utilising the verbose responses of ChatGPT adds unique information that can yield improvements to the results of existing NLP models. Employing fusion techniques on top of that, whether early or late fusion, will yield bigger improvements. Moreover, it is sufficient in most cases to process the texts only with RoBERTa to extract features, for both the original text and the verbose response of ChatGPT, without worrying about parsing the corresponding labels. Conclusion In this work, we explored the fusion capabilities of ChatGPT with traditional Natural Language Processing (NLP) models in affective computing problems. We first prompted ChatGPT to give verbose responses to answer binary classification questions for three affective computing downstream tasks, namely sentiment analysis, suicide tendency detection, and big-five personality traits assessment. Additionally, we processed the input texts and the corresponding ChatGPT responses with two NLP techniques, namely feature extraction with the RoBERTa language model and n-gram BoW; Multi-Layer Perceptrons (MLPs) were then trained on these features. Furthermore, we investigated two fusion methods, early fusion (on the feature level) and late fusion (on the prediction level). The experiments have demonstrated that the verbose responses of ChatGPT carry novel knowledge in affective computing, and probably beyond (which should be evaluated next), that can aid existing NLP techniques by way of fusion, whether early or late. First, we demonstrated the benefit of using verbose responses while processing them with NLP techniques, as compared to parsing classification labels from the non-verbose responses. Subsequently, this provided the possibility of seamlessly fusing ChatGPT responses with existing NLP methods, hence achieving a better performance via either early or late fusion. Furthermore, the experiments have demonstrated that utilising only RoBERTa to process and fuse the input texts and the ChatGPT responses (with an inclination towards early rather than late fusion) can be sufficient to reach the best performance. Mostafa M. Amin is currently working toward the Ph.D. degree with the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, while working as Senior Research Data Scientist at SyncPilot GmbH in Augsburg, Germany. His research interests include Affective Computing, Audio and Text Analytics. He received an M.Sc. degree in Computer Science from the University of Freiburg, Germany. Contact him at mostafa.mohamed@uni-a.de. Erik Cambria is a professor of Computer Science and Engineering at Nanyang Technological University, Singapore. His research focuses on neurosymbolic AI for explainable natural language processing in domains like sentiment analysis, dialogue systems, and financial forecasting. He is an IEEE Fellow and a recipient of several awards,
e.g., the IEEE Outstanding Career Award; he was listed among the AI's 10 to Watch, and was featured in Forbes as one of the 5 People Building Our AI Future. Contact him at cambria@ntu.edu.sg. Björn W. Schuller is currently a professor of Artificial Intelligence with the Department of Computing, Imperial College London, UK, where he heads the Group on Language, Audio, & Music (GLAM). He is also a full professor and the head of the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany, and the Founding CEO/CSO of audEERING. He is an IEEE Fellow, alongside other Fellowships. Contact him at schuller@IEEE.org.
http://arxiv.org/abs/2307.00192v1
20230701020151
Generation of intense cylindrical vector beams by Faraday effect in plasma
[ "Wei Liu", "Qing Jia", "Jian Zheng" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
Department of Plasma Physics and Fusion Engineering, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China qjia@ustc.edu.cn Department of Plasma Physics and Fusion Engineering, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China Department of Plasma Physics and Fusion Engineering, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China Collaborative Innovation Center of IFSA, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China Cylindrical vector (CV) beams, whose polarizations are cylindrically symmetric, have recently been widely applied in high energy density physics such as electron acceleration and intense spatiotemporal optical vortices generation. Thermal-damage-resistant plasma optics are expected to generate intense CV beams. In this work, based on the Faraday effect, we propose a method that can directly convert an intense linearly/circularly polarized Gaussian beam into a CV/vortex beam by setting up an azimuthally distributed axial magnetic field in the plasma. Three-dimensional particle-in-cell simulations demonstrate good conversion efficiency, which offers a new degree of freedom for manipulating high-power laser pulses and paves the way for further studies on ultra-strong vector beams. In addition, our work reveals a new possible source of photon orbital angular momentum related to magnetized plasma in astrophysics and space physics. Generation of intense cylindrical vector beams by Faraday effect in plasma Jian Zheng August 1, 2023 ========================================================================== Polarization is one of the fundamental properties of electromagnetic wave. In laser plasma physics, there are many processes that are closely related to laser polarization, such as laser resonance absorption<cit.>, axial self-generated magnetic field<cit.>, high harmonic generation<cit.> and electron acceleration<cit.>. In these works, electromagnetic waves are typically considered linearly or circularly polarized, whose polarizations are uniformly distributed in the transverse plane. Exotic laser beams with transversely non-uniformly distributed polarization, namely, vector beams<cit.>, have attracted much attention in the past two decades. Cylindrical vector (CV) beams<cit.> are a special class of vector beams whose polarizations are cylindrically distributed. Representative radially polarized (RP) and azimuthally polarized (AP) electromagnetic waves have been studied extensively mainly due to their unique focusing characteristics<cit.>. It has been proven that RP beam can be focused more tightly than linearly polarized beam. Near the focal plane, the tightly focused RP(AP) beam will result in strong longitudinal electric (magnetic) fields near the optical axis. Weak CV beams have been employed in fields such as super-resolution imaging<cit.> and optical trapping<cit.>. Recently, intense CV beams have been of interest in laser plasma physics and have brought some new effects. Due to the strong longitudinal electric fields, RP beam is applied in the direct laser acceleration of electrons<cit.>. In particular, when using a plasma mirror as an electron injector, electron bunches with MeV energies and hundreds of pC charges are obtained<cit.>. In addition, it demonstrates that intense spatiotemporal optical vortices<cit.> can be generated by intense RP beams obliquely reflected from a solid-plasma surface<cit.>. 
However, as far as we know, the generation of intense CV beams remains challenging. Conventional methods for generating intense CV beams<cit.> might be restricted due to the optical thermal threshold. Plasma, benefiting from sustaining very high light intensity, has recently been widely used to manipulate intense lights, such as plasma mirrors to improve the temporal contrast of femtosecond pulses<cit.>, plasma-based options to generate vortex beam<cit.>, holographic plasma lenses to focus intense lasers<cit.>, plasma photonic crystals<cit.> and so on. In this study, utilizing the Faraday effect<cit.>, we propose a straightforward and effective plasma-based approach for converting an intense linearly/circularly polarized Gaussian beam into a CV/vortex beam. The investigation of plasma-based generation of vector beam and vortex beam in the laboratory also holds significant importance for observations in astrophysics and space physics. Currently, our understanding of the universe is primarily derived from observations of electromagnetic waves. It is crucial to extract as much information as possible from the collected electromagnetic waves, given the scarcity of data and the high cost. Recently, the photon orbital angular momentum (POAM)<cit.>, which is linked to the vortex beam, is increasingly attracting attention as a new degree of freedom in the field of astronomy<cit.>. In 2003, Harwit outlined several potential sources of astrophysical POAM<cit.> from radiation emitted by luminous pulsars and quasars to the cosmic microwave background radiation. Especially, a recent study<cit.> has confirmed the presence of POAM generated by the gravitational effect near rotating black holes, as previously proposed by <cit.>. And this research demonstrates that POAM can be directly employed to measure the rotation parameters of a black hole. It is worth noting that plasmas found in the universe and celestial bodies are often magnetized, such as the plasmas in magnetic reconnection process<cit.>. We are curious whether POAM can be generated after electromagnetic waves passing through these magnetized plasmas and whether they can be used for relevant observations. The difference in dispersion between left- and right-hand circularly polarized electromagnetic waves in axially magnetized plasma is a well-established phenomenon that gives rise to distinct phase and group velocities<cit.>. Taking into account the difference in group velocity, simulations have shown that a linearly polarized laser pulse passing through an axially magnetized plasma can split into two laser pulses with opposite circular polarizations, provided the axial magnetic field is strong enough<cit.>. Considering the differences in phase velocities, the polarization of a linearly polarized laser undergoes rotation after passing through an axially magnetized plasma, which is referred to as the Faraday effect<cit.>. We will now provide a brief overview of the Faraday effect and offer a brief introduction to the generation of CV beam based on it. 
The propagation of left- and right-hand circularly polarized electromagnetic waves along a weakly magnetic field (|ω_ce/ω|≪ 1) in a plasma is associated with different dispersions, which can be expressed<cit.> N_L/R=kc/ω≈√(1-n_e/n_c)±1/21/√(1-n_e/n_c)n_e/n_cω_ce/ω, where N_L/R is the refractive index for left/right-hand circularly polarized electromagnetic wave, k is the laser wavenumber in magnetized plasma, c is the speed of light in vacuum, ω is the laser frequency, n_e is the electron number density, n_c=ε_0m_eω^2/e^2 is the critical number density, m_e is the electron mass, ω_ce=eB_x/m_e is the electron cyclotron frequency, e is the elementary charge, and B_x is the axial magnetic field. The incident linearly polarized electromagnetic wave can be represented as E_in=[(e_y-ie_z)+(e_y+ie_z)]E_0exp[i(k_0x-ω t ) ], where σ_x=± 1 in e_y+σ_xie_z represent right/left-hand circularly polarized and k_0 is the laser wavenumber in vacuum. After propagating distance L, the output wave can be written as E_out=2E_0(cosφe_y + sinφe_z)exp ( i Φ -iω t ), where φ=k_0L(N_L-N_R)/2, Φ=k_0L(N_L+N_R)/2=k_0L√(1-n_e/n_c). The output wave is still linearly polarized but with polarization rotation angle φ≈1/21/√(1-n_e/n_c)n_e/n_cω_ce/ωk_0L. The Faraday rotation angle is directly linked to plasma density, plasma length, and the strength of the axial magnetic field. In most cases, researchers focus on studying a plasma with uniform magnetization in the transverse plane, which ensures that the output beam remains linearly polarized. However, this uniformity restricts the degree of polarization manipulation that can be achieved. In cases where the magnetization in the transverse plane is non-uniform, the incident linearly polarized beam undergoes a transformation into a vector beam. By skillfully designing the distribution of the magnetic field, it becomes possible to generate a vector beam with desired characteristics, such as a CV beam. The simplified distribution of a RP beam at the waist can be written as<cit.> E=E_0( r/w_0)exp( -r^2/w_0)exp[i(k_0x-ω t )]e_r, where w_0 is the beam waist, r=√(y^2+z^2) and θ =arctan( z/y). A significant difference between a RP beam and a linearly polarized beam is that the polarization of a RP beam varies across different spatial locations, but all along the radial direction. In order to convert linearly polarized beam into RP beam, it is necessary to introduce spatially varying Faraday rotation angles φ(θ). We have discovered that by setting the Faraday rotation angle in Eq. (<ref>) as φ( r,θ)=θ, the incident laser beam with linear polarization in the y direction can be converted into RP beam, as illustrated in Fig. <ref>. By altering the polarization direction of the incident beam, other CV beams like AP beam can be obtained in principle. Furthermore, in accordance with Eq. (<ref>), the Faraday rotation not only induces the rotation of laser polarization but also introduces an additional phase Φ. In order to prevent the introduction of a spiral phase exp(iθ) for CV beams<cit.>, the phase Φ should be independent of azimuth, which implies that φ( r,θ)=θ can be achieved by setting an azimuthally distributed axial magnetic field B_x=B_extθ with B_ext=2m_eωn_c√(1-n_e/n_c)/ en_ek_0L. This magnetic field distribution, depicted in Fig. <ref>, can be generated approximately by a semi-infinite plane current. Further details and a comprehensive discussion on this topic will be provided in Sec. <ref>. 
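As a quick numerical illustration of this requirement (ours, not part of the original analysis), the short sketch below evaluates B_ext from the formula above for a given plasma density, plasma length and laser wavelength; with the particle-in-cell parameters used below (n_e=6.27×10^26 m^-3≈0.56 n_c, L=50 μm, λ=1 μm) it returns roughly 80 T, and with the centimetre-scale parameters discussed later (n_e=4.46×10^25 m^-3, L=2 cm) roughly 4 T.

# Illustrative sketch (not part of the original work): required axial field strength
# B_ext = 2 m_e omega n_c sqrt(1 - n_e/n_c) / (e n_e k_0 L) for phi(r, theta) = theta.
import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12  # SI constants

def B_ext_required(n_e, L, wavelength):
    omega = 2 * np.pi * c / wavelength        # laser angular frequency
    k0 = 2 * np.pi / wavelength               # vacuum wavenumber
    n_c = eps0 * m_e * omega**2 / e**2        # critical density
    return 2 * m_e * omega * n_c * np.sqrt(1 - n_e / n_c) / (e * n_e * k0 * L)

print(B_ext_required(6.27e26, 50e-6, 1e-6))   # PIC parameters used below: ~80 T
print(B_ext_required(4.46e25, 2e-2, 1e-6))    # centimetre-scale plasma: ~4 T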
To verify the above scheme, three-dimensional (3D) particle-in-cell (PIC) simulations are conducted with the code Smilei<cit.>. A linearly polarized Gaussian beam with wavelength λ =1μ m (in vacuum) and radius of waist w_0=10μ m normally incidents on a plasma located at 15μ m<x<65μ m. The laser intensity rises linearly over 10 laser periods to its maximum intensity I_0=1.37×10^14W/cm^2 (corresponding to a_0=eE_0/m_eω c =0.01) and then remains constant. The simulation box is 80μ m(x)× 50μ m(y)× 50μ m(z) with 1280× 400× 400 cells. The cell sizes are dx =λ/16, dy=dz=λ/8. Each cell has applied 16 particles for electrons, and the ions are set to be immobile. The electrons are uniformly distributed with number density n_e=6.27×10^26 m^-3, corresponding to 0.56n_c. The axial magnetic field distribution is set as Eq. (<ref>) with B_ext=80.17 T, which satisfies | ω_ce/ω|≪ 1. It should be noted that in the simulations, such magnetic field only participates in pushing the particles but does not participate in the Maxwell solver<cit.>, which can be regarded as the steady magnetic field generated by an external current or kinds of magnets and is physically reasonable. Figure <ref> presents the laser amplitude and electric field vector distributions when lasers exiting the magnetized plasma for different linearly polarized incident Gaussian beams. Figure <ref>(a) displays the result of a y-polarized Gaussian beam passing through the plasma. The laser intensity is modulated to have a null intensity in the center, which is a key feature of the CV beam. Besides, judging from the distribution of the laser electric field vectors represented by the small white arrows, the linearly polarized beam is successfully converted into a RP beam. Similarly, the result of the transformation of a z-polarized Gaussian beam into an AP beam is shown in Fig. <ref>(b). In general, an arbitrary CV beam can be generated by a reasonable linear superposition of y-polarized and z-polarized Gaussian beams, as indicated in Fig. <ref>(c). Note that in Fig. <ref>, the intensity of the laser is significantly modulated near the line of z=0 (y<0). In our theoretical scheme, the axial magnetized plasma is regarded as an optical medium with the dispersion relation shown in Eq. (<ref>), which requires the scale length of the electron motion (L_ele∝a_0λ when a_0<1) to be much smaller than the scale length of the magnetic field (L_mag∼| rB_x/(∂B_x/∂θ)|). This condition is satisfied in most areas except the region near the line of z=0 (y<0), where a large gradient in the magnetic field exists as revealed in Fig. <ref>. This large magnetic gradient leads to non-typical motions of electrons, which significantly modulates the laser intensity as shown in the PIC simulations. However, these mode imperfections will gradually decrease as the laser exits the plasma and propagates in vacuum. Figures <ref>(a)-(c) present the distributions of electric field vectors and amplitude when the generated RP beam exits the plasma and propagates in vacuum after 50μ m, 100μ m and 200μ m, respectively. These results are calculated from the electric field given by the PIC simulation corresponding to Fig. <ref>(b) through the angular spectrum method<cit.>. In comparison with Fig. <ref>(b), it hints that the intensity modulation initially near the line of z=0 (y<0) gradually decreases. This may be due to the stronger diffraction of the above higher-order-mode imperfections compared with the main component of the RP beam. 
These results further illustrate the practicability of our scheme in generating CV beams. The above magnetized plasma can also be utilized for the generation of vortex beam<cit.>. When the incident beam is circularly polarized, according to Eq. (<ref>), the laser exiting from the B_x=B_extθ magnetized plasma can be written as E_out=( e_y+σ_xie_z)E_0e^ i( k_0L√(1-n_e/n_c) -ω t -ησ_xθ), where η = ek_0Ln_eB_ext/(2m_eωn_c√(1-n_e/n_c)). The output beam is still circularly polarized but with a spiral phase exp ( -iησ_xθ ), which indicates that the exiting laser has been converted into a vortex beam with azimuthal index l=-ησ_x<cit.>. Figure <ref>(a) presents the electric field E_z distribution obtained by PIC simulation when a right-hand circularly polarized (σ_x=1) laser exits the above magnetized plasma (η=1). The two-flap distribution indicates that the existing laser is mainly dominated by the vortex mode with |l|=1. By performing the Laguerre-Gaussian (LG) mode decomposition<cit.>, the output laser can be predominantly described as √(0.04)e^2.28iLG_0,0+√(0.66)e^1.41iLG_0,-1+√(0.09)e^0.77iLG_1,-1. Here, √(a_p,l)e^iϕLG_p,l refers to the LG mode with an azimuthal index l and a radial index p. It is noteworthy that the waist radius of the LG modes is selected to match that of the incident Gaussian mode. This mode decomposition highlights that an incident Gaussian beam can be effectively converted into a vortex beam with topological charge of l=-1. Increasing the parameter η allows for the generation of vortex beams with higher values of |l|. Figure <ref>(b) displays the distribution of the electric field E_z when a right-hand circularly polarized (σ_x=1) beam exits the magnetized plasma with η=2. By making the mode decomposition, it is found that the existing laser is dominated by the LG mode with l=-2. In the above PIC simulations, a relatively strong magnetic field B_ext≈80 T and high plasma density n_e=6.27×10^26 m^-3 are applied to reduce the simulation cost. Indeed, the requirement of the strength of the magnetic field and plasma density can be significantly reduced. Figure <ref> shows the dependence of the normalized plasma length L/λ on the normalized plasma density n_e/n_c and the strength of the normalized magnetic field B_ext/B_0. This indicates that a longer plasma length is beneficial for reducing the magnetic field strength as well as the plasma density. For lasers with λ =1μ m, if the plasma length is chosen as L=2 c m and the plasma density is n_e=4.46×10^25 m^-3, the required magnetic field is only approximately 4 T and is easily achieved experimentally. In our approach, the generation of an axial magnetic field with B_x=B_extθ is a crucial aspect. We consider a magnetic field that solely possesses an axial component B_x, without any azimuthal (B_θ) or radial (B_r) components. It can be readily verified that this magnetic field satisfies the divergence equation ∇·𝐁=0. To fulfill the curl equation ∇×𝐁=μ_0𝐉, it is necessary to introduce a current 𝐉=B_ext∑_n=1^∞na_ncos (nθ )/(μ_0r) 𝐞_𝐫, where a_n=∫_-π^πθsin ( nθ )dθ /π=-2(-1)^n/n. This current can be regarded as the source term responsible for the generation of the B_x=B_extθ axial magnetic field. Figure <ref>(a) displays the distribution of this current (in radial direction) when n is taken until 40. It indicates that this current is mainly distributed near the line of z=0 (y<0). However, it is worth noting that the current tends to approach infinity as r approaches 0, making it practically impossible to obtain experimentally. 
To overcome this limitation, we propose employing a semi-infinite flat plate current that addresses the characteristics of the current. This current can be represented as J_y/J_0=πλB_ext/B_0L_zcosh^2(z/L_z)[1+exp(y/λ)], where J_0=n_cec, B_0=m_eω /e, L_z denotes the characteristic thickness of this current in z direction. The distribution of this current when L_z=λ is presented in Fig. <ref>(b). Figure <ref>(c) compares the magnetic field generated by this planar current at r=10λ with the magnetic field B_x=B_extθ. It is evident that the two magnetic field lines overlap significantly. We employed this magnetic field into the PIC simulation to examine the transformation of a circularly polarized Gaussian beam into a vortex beam. By performing the LG decomposition on the output beam, we obtained √(0.07)e^1.81iLG_0,0+√(0.63)e^1.34iLG_0,-1+√(0.08)e^0.7iLG_1,-1, which are nearly identical to those obtained previously. It is worth noting the similarity between the current described in Eq. (<ref>) and the Harris current in the magnetic reconnection process<cit.>, especially in cases involving with the spatially confined X-line in the current direction<cit.>. This resemblance implies that the passage of an electromagnetic wave through a magnetic reconnection region can potentially result in the generation of POAM and carry important information along its path. Considering the magnetic reconnection in solar flares, typical parameters are magnetic field strength B_ext∼0.01 T, plasma density n_e∼3×10^15 m^-3 and plasma length L∼10^8m<cit.>. According to Eq. (<ref>), for electromagnetic waves with wavelengths greater than 20 μ m, it is possible to be detected as carrying POAM. This result can be confirmed in principle. In addition, the magnetic reconnection in solar flares can now be studied in the laboratory<cit.> based on the scaling laws<cit.>. Commonly used parameters in the investigation of magnetic field reconnection based on laser plasma processes are B_ext∼100 T and n_e∼5×10^25 m^-3. For laser with λ =800 n m, the plasma length required to produce significant POAM is about 1 mm, which is experimentally achievable. The POAM carried by the outgoing electromagnetic waves holds potential for diagnosing and observing relevant physical quantities. In the case of magnetic reconnection with a spatially confined X-line extent<cit.>, the magnetic fields at the two ends of the current sheet undergo opposite azimuthal changes, leading to the generation of opposing POAMs. This property can be employed for the diagnosis of the length of X-line. Furthermore, the thickness of the current sheet is a significant aspect in the investigation of magnetic reconnection. We conducted a simple study on the conversion of circularly polarized beam passing through magnetized plasma with varying current sheet thicknesses L_z. Figure <ref> displays the dependence of LG_0,-1 mode energy occupancy a_0,-1 on L_z, as determined through PIC simulations and a simplified theory. In the PIC simulations, the parameters remain consistent with the previous setup, except for adjusting the current sheet thickness L_z. The simplified theory results are obtained by directly incorporating the phase induced by the magnetized plasma into the incident Gaussian beam, followed by the LG mode decomposition. It is evident that the percentage of LG_0,-1 decreases as L_z increases. These findings suggest the potential use of POAM in diagnosing<cit.> the thickness of the current sheet. 
However, it must be noted that the results are also influenced by the plasma length and the beam waist of the incident beam, which require further investigation. Besides the magnetic reconnection process, there are other physical processes or objects in the universe that have magnetic fields near them. For example, the magnetic field near a magnetar can be as strong as 10^8 T, and whether POAM can be obtained when electromagnetic waves pass through this region is worth further investigation. In conclusion, a scheme converting the incident intense linearly polarized Gaussian beam into a cylindrical vector beam is proposed, which is achieved by setting up an axial magnetic field with azimuthal distribution B_x=B_extθ. Three-dimensional particle-in-cell simulation results verify the feasibility and efficiency of this scheme. Besides, when considering circularly polarized Gaussian beam passing through the proposed magnetized plasma, vortex beam with | l |=1 can be generated. By increasing the strength of the magnetic field B_ext (or plasma length L, electron number density n_e), arbitrarily higher | l | mode vortex beams can also be generated. This scheme is free of the optical thermal threshold and is applicable for intense lasers with powers as high as ∼PW in some laser facilities. Our investigation also reveals a promising and novel source of photon orbital angular momentum in astrophysics and space physics. Specifically, we have demonstrated that orbital angular momentum can be acquired by an electromagnetic wave passing through a magnetic reconnection region. This photon orbital angular momentum holds the potential for diagnosing the length of the X-line and the thickness of the current sheet in this process. Further research is required to explore and understand these possibilities in greater depth. This research was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11975014, by the Strategic Priority Research Program of Chinese Academy of Sciences, Grant Nos. XDA25050400 and XDA25010200. The authors would like to thank and acknowledge Q. Lu and K. Huang for helpful discussions. *
http://arxiv.org/abs/2307.01749v1
20230704143940
A numerical method for wave-structure interactions in the Boussinesq regime
[ "Geoffrey Beck", "David Lannes", "Lisl Weynans" ]
math.NA
[ "math.NA", "cs.NA", "math.AP", "35G61, 35Q35, 74F10, 65M08" ]
Geoffrey Beck, David Lannes, Lisl Weynans A numerical method for wave-structure interactions in the Boussinesq regime August 1, 2023 ========================================================================== Geoffrey Beck, David Lannes, Lisl Weynans The goal of this work is to study waves interacting with partially immersed objects allowed to move freely in the vertical direction, and in a regime in which the propagation of the waves is described by the one dimensional Boussinesq-Abbott system. The problem can be reduced to a transmission problem for this Boussinesq system, in which the transmission conditions between the components of the domain at the left and at the right of the object are determined through the resolution of coupled forced ODEs in time satisfied by the vertical displacement of the object and the average discharge in the portion of the fluid located under the object. We propose a new extended formulation in which these ODEs are complemented by two other forced ODEs satisfied by the trace of the surface elevation at the contact points. The interest of this new extended formulation is that the forcing terms are easy to compute numerically and that the surface elevation at the contact points is furnished for free. Based on this formulation, we propose a second order scheme that involves a generalization of the MacCormack scheme with nonlocal flux and a source term, which is coupled to a second order Heun scheme for the ODEs. In order to validate this scheme, several explicit solutions for this wave-structure interaction problem are derived and can serve as benchmark for future codes. As a byproduct, our method provides a second order scheme for the generation of waves at the entrance of the numerical domain for the Boussinesq-Abbott system.
Mathematics Subject Classification: 35G61, 35Q35, 74F10, 65M08 Keywords: Wave-structure interactions, Initial boundary value problems, Boussinesq system, Numerical analysis § INTRODUCTION §.§ Presentation of the problem While the first studies of the interactions of waves with floating structures go back at least to John's paper <cit.>, or to the phenomenological integro-differential equation derived by Cummins to describe the linear motion of floating structures <cit.>, this research field became increasingly active in recent years. A first reason for this renewed interest is related to the development of renewable marine energies as one of the tools for energy transition. Indeed, several devices of offshore wind-turbines and wave-energy convertors involve partially immersed structures <cit.>. A second reason for the recent mathematical activity on wave-structure interactions is that this has been made technically feasible thanks to the recent progresses on the mathematical understanding of the propagation of water waves. The initial value problem in domains without boundaries (ℝ^d or 𝕋^d) is now well understood for the full water waves (also called free-surface Euler) equations, as well as for asymptotic models in shallow water (such as the nonlinear shallow water equations, the Boussinesq systems, the Serre-Green-Naghdi equations). Recently, the initial value problem has also been studied in domains with a boundary. When the fluid domain is delimited by vertical sidewalls, the water waves equations have been studied in <cit.>; in the case of non-vertical sidewalls, this problem has been considered in <cit.> for the water waves equations, and in <cit.> for the shallow water and Green-Naghdi equations. The initial boundary value problem, in which one imposes initial and boundary datas, has also been investigated for the Boussinesq equations <cit.>. These advances make it more realistic to address the issues raised by the presence of a partially immersed object. From the numerical point of view, efficient numerical codes based on shallow water models have been developed recently and can be used to address realistic submersion issues (see for instance <cit.>); here also, it is now a reasonable prospect to address the specific difficulties raised by wave-structure interactions. The present paper is a contribution to the theoretical and numerical understanding of these interactions, inasmuch as it provides a precise description of the motion of a partially immersed object allowed to move freely in the vertical direction under the action of waves described by a nonlinear dispersive model (the standard Boussinesq-Abbott system), see Figure <ref>. It has to be considered as a partial (affirmative) answer to the wider question: can the efficient modelling of waves based on shallow water models be extended to allow the presence of floating structures? If this happens to be true, the gain in computational time would allow to investigate the behavior of many floating structures (the so-called farms of wave-energy convertors or offshore wind turbines), as well as their impact on the wave fields, which can have significant consequences in coastal regions. Answering such question is out of reach for CFD methods that can be used to describe the behavior of one wave-energy convertor, and also, to a lower extent, for potential methods (see for instance <cit.>). 
On the other hand, the linear methods based on Cummins' equation used in commercial softwares such as Wamit neglect the nonlinear effects that can be important <cit.>, especially in shallow water, and are unable to provide a precise description of the impact of a wave-farm on the wave-field. The presence of a floating structure in a shallow water model can be taken into account following the approach proposed in <cit.> where the horizontal plane is decomposed into two regions: the interior region (below the floating object), and the exterior region (below the free surface waves). In the exterior regions, the standard (depth integrated) shallow water model is used, while in the interior region, an additional pressure term is present. This pressure term corresponds to the pressure exerted by the fluid on the object (and which eventually makes it move through Newton's equations), and can be understood as the Lagrange multiplier associated with the constraint that, under the object, the surface elevation of the waves is constrained as it must by definition coincide with the bottom of the object. It is possible to relax this constraint by approximating the pressure term by a pseudo-compressible relaxation; one can then use the same kind of asymptotic preserving schemes as for the low-Mach limit in compressible gases. This approach has been used in the present context in <cit.>, and is also relevant for other instances of partially congested flows <cit.>. In this paper, we rather consider the original (non relaxed) problem, which requires to understand precisely the coupling between the interior and exterior regions. It turns out that this wave-structure interaction problem can be reduced to an initial boundary value problem for the wave model in the exterior region, with non standard boundary (or transmission) conditions. In the case where the horizontal dimension is d=1, the object has vertical walls located at x=±ℓ as in Figure <ref> and is only allowed to move vertically, and if the wave model is given by the nonlinear shallow water equations, it was shown in <cit.> that this transmission problem takes the form (in dimensionless variables, see Section <ref> for details), ζ +  q=0, q + (1/hq^2)+hζ=0, (-∞,-ℓ)∪ (ℓ,+∞), where ζ is the elevation of the surface and q the horizontal discharge. Using the notation q=q(ℓ)-q(-ℓ) and q=1/2(q(-ℓ)+q(ℓ)), the transmission conditions are given by q=-2ℓδ̇ q=q_ i where the function δ and q_ i (representing respectively the vertical displacement of the object and the mean discharge under the object) solve an ODE of the form d/dtθ=F(θ,ζ_|_x=-ℓ, ζ_|_x=ℓ), with θ=(δ,δ̇,q_ i) and F a smooth function of no importance at this stage of the discussion. The coupling acts in two ways: it is necessary to known θ to solve the transmission problem for (ζ,q), and it is necessary to know the solution (ζ,q) to determine the forcing term F(θ,ζ_|_x=-ℓ, ζ_|_x=ℓ) in the ODE for θ. A key point in the mathematical analysis of this problem is the regularity of the traces ζ_|_x=±ℓ. Such a control is furnished by the construction of a Kreiss symmetrizer, as shown in <cit.> where general initial boundary value problems, possibly with a free boundary, are considered for a wide class of hyperbolic systems; they include the above transmission problem as well as the more complex free boundary problem one has to deal with when the lateral boundaries of the object are not vertical. 
Numerically, the evaluation of the traces ζ_|_x=±ℓ also requires a careful treatment which relies on the Riemann invariants associated with the nonlinear shallow water equations <cit.>; we also refer to <cit.> for a higher order scheme, to <cit.> where a wave-energy device is simulated using this approach (the oscillating water column), and to <cit.> where controlability issues were also addressed. The more complex case of an object freely floating and with non-vertical walls (and therefore nontrivial dynamics for the contact points) has been solved theoretically in <cit.>, and numerically in <cit.> using ALE methods to treat the evolution of the contact points. A variant of the above wave-structure interaction problem for the viscous nonlinear shallow water equations was also considered in <cit.> and an extension to the case of horizontal dimension d=2 with radial symmetry has been considered theoretically in <cit.> and the so-called decay test (or return to equilibrium) investigated in the same configuration under an additional assumption of linearity in <cit.>. Let us also mention <cit.> where the dynamics of trapped air pockets are studied. We propose here an extension in another direction. The principal drawback of the nonlinear shallow water equations is that they neglect the dispersive effects that play an important role is some important situations (they allow for instance the existence of solitary waves). The most simple models that generalize the nonlinear shallow water equations by adding dispersive terms are the Boussinesq equations (see <cit.> for a recent review on shallow water models). Replacing the nonlinear shallow water equations by the so called Boussinesq-Abbott system in the above example, one obtains the following transmission problem (see Section <ref> for more details), ζ +  q=0, (1-κ^2^2) q + (1/hq^2)+hζ=0, (-∞,-ℓ)∪ (ℓ,+∞), and with transmission conditions q=-2ℓδ̇ q=q_ i where the function δ and q_ i (representing respectively the vertical displacement of the object and the mean discharge under the object) solve an ODE of the form d/dtθ=F_κ(θ,ζ_|_x=-ℓ, ζ_|_x=ℓ, κ^2 (^2 ζ)_|_x=-ℓ,κ^2 (^2 ζ)_|_x=ℓ ), with θ=(q_ i,δ̇, δ)^ T and F_κ a smooth function of no importance at this stage of the discussion. The differences with the nondispersive case considered above are the operator (1-κ^2^2) applied in front of q in the evolution equations and a contribution of the trace of ^2ζ to the forcing term in the ODE for θ. Formally, the nondispersive case is obtained by setting κ=0, but the mathematical and numerical differences are considerable. Contrary to the hyperbolic case mentioned above, there is no general theory for initial boundary value problems associated with nonlinear dispersive systems and this is why several approximations have been used to bypass this issue. In <cit.>, wave-structure interactions using a Boussinesq model was used, but the issue at the boundary was avoided by using the (dispersionless) nonlinear shallow water equations in a small region around the object; in <cit.> the behavior at the boundary was approximated at second order using Bessel expansions and matched asymptotics; in <cit.>, Boussinesq type equations where computed in the whole domain, neglecting the singularities of the surface elevation and of the discharge at the contact line, while the presence of the object is taken into account by adding an additional pressure term in the interior region. 
There are also approximate methods based on sponge layers and artificial source terms which are often used to generate waves at the entrance of the numerical domain <cit.>. Such methods are far too rough to be used in the present case, where a precise description of the waves at the contact points is needed; indeed, as shown in <cit.>, the behavior at the contact points can be quite complex and exhibit dispersive boundary layers. This is why a new method to handle non-homogeneous initial boundary value problems for the Boussinesq equations was proposed in <cit.> and numerically implemented with an order 1 scheme. The approach used in the present paper allows us to treat the issues related to the initial boundary value problem for Boussinesq-type equations without any approximation; a byproduct of independent interest of the present paper is that it furnishes a second order method for the generation of waves at the numerical boundary of the fluid domain for the Boussinesq equations, hereby complementing the first order generation scheme of <cit.>. As for the hyperbolic case discussed previously, the control of the traces of ζ (and a fortiori of ^2ζ) is a key ingredient of the analysis, both from the PDE and numerical perspectives; however, due to the presence of dispersion, there are no such things as a Kreiss symmetrizer or Riemann invariants to help us. To solve these issues, we propose in this paper an extended formulation of the equations. After remarking that the traces ζ_|_x=±ℓ solve a second order forced ODE, we introduce new unknowns _± defined as the solutions of this ODE. This allows us to replace ζ_|_x=±ℓ by _± in the forcing term F_κ in the ODE for θ, hereby avoiding the computation of the traces. The resulting extended formulation just consists in replacing the above ODE for θ by the higher dimensional ODE d/dtΘ=𝒢(Θ,(R_1 𝔣_ sw)_|_x=±ℓ) with Θ=(q_ i,δ̇,_̇+̇,_̇-̇,δ,_+,_-)^ T and 𝒢 is a smooth mapping. This ODE is forced by the terms (R_1 𝔣_ sw)_|_x=±ℓ which depend on the solution (ζ,q) of the Boussinesq equations in the fluid domain; the precise meaning of this term will be given in Section <ref>, the important thing being that the control of the trace of the quantities R_1 𝔣_ sw at x=±ℓ does not raise any theoretical nor numerical difficulty. The second step of our approach consists in transforming this extended transmission problem into an initial value problem coupled with forced ODEs: this means that we do no longer have to bother with the boundary conditions (which are automatically propagated by the flow). This new formulation can be written as a system of conservation laws with nonlocal flux and an exponentially localized source term, U+ (𝔉_κ(U))=𝒮_±(Θ,(R_1 𝔣_ sw)_|_x=±ℓ)𝔟(x∓ℓ) ± (ℓ,∞), where 𝔉_κ denotes the nonlocal flux, while 𝒮_± is a smooth function of its arguments and 𝔟 an exponentially localized function. The above ODE for Θ allows one to compute the source term in this system of nonlocal conservation laws; conversely, the resolution of this system allows one to compute the forcing terms (R_1 𝔣_ sw)_|_x=±ℓ in the ODE for Θ: the coupling acts therefore both ways. One of the advantages of this new formulation is that we can implement on it a second order scheme that couples a MacCormack predictor corrector scheme (generalized to handle nonlocal fluxes and a source term) for the computation of the waves, and a second order Heun scheme for the computation of the forced ODEs. 
We also exhibit several exact explicit solutions that we use to study the convergence of our code and that are of independent interest. §.§ Organization of the paper In Section <ref>, we derive the formulation of the problem our numerical scheme is based on: we first recall in <ref> the reduction of <cit.> to a transmission problem for the Boussinesq equation on the two connected components of the exterior region, and then show in <ref> that the traces of the surface elevation at the contact points satisfy a forced second order ODE that we use to write the new augmented formulation of the transmission problem in <ref>; this transmission problem is finally rewritten as an initial boundary value problem in <ref>. The numerical schemes are presented in Section <ref>. The initial value problem obtained in the previous section is a set of two conservation equations with nonlocal flux and an exponentially decaying source term whose coefficient is found by solving a set of forced second order ODEs. We propose two numerical schemes based on an abstract formulation of these equations. The first one, described in <ref>, is of first order and is an adaptation of the Lax-Friedrichs scheme to the present context. The second one, studied in <ref>, is of second order. It is based on the MacCormack predictor-corrector scheme for the two conservation PDEs (with adaptations to handle the nonlocal flux and the source term), and on a Heun scheme for the ODE part. Numerical simulation are then presented in Section <ref>. We investigate several configurations exploring different aspects of the coupling between the Boussinesq equations and the forced ODEs used in the transmission conditions. Wave generation is considered in <ref>, and is of independent interest as it provides a way to generate waves at the entrance of the numerical domain for the Boussinesq equations. The return to equilibrium test in which an object oscillates vertically after being released from an out of equilibrium position is studied in <ref>; in the linear case, an explicit solution is exhibited and computed via Laplace transforms, and this solution is used to assess the precision of our scheme. Interactions of waves with a fixed object are then investigated in <ref>; here also, an exact solution is derived in the linear case and used for validation. The most general configuration of waves interacting with an object allowed to move freely in the vertical direction is then considered in <ref>. Throughout this article, we work with an abstract and concise formulation of the equations. The precise equations, with the expressions of the various coefficients involved, is postponed to Appendix <ref>. §.§ Notation - The horizontal axis ℝ is decomposed throughout this paper into an interior region ℐ=(-ℓ,ℓ) and an exterior region ℰ=ℰ_+∪ℰ_- with ℰ_-=(-∞,-ℓ) and ℰ_+= (ℓ,∞), and two contact points x=±ℓ. - For any function f∈ C(), we denote f_±=f_|_x=±ℓ, f =f_+-f_- f=1/2(f_++f_-). - If f∈ C^1([0,T]), we sometimes use the notation ḟ=d/dt f. - We denote by the momentum flux associated with the nonlinear shallow water equations, =( q^2/h+h^2-1/2). § AN AUGMENTED FORMULATION OF THE WAVE-STRUCTURE INTERACTION EQUATIONS The goal of this section is to derive the augmented formulation of the wave-structure equations that we shall use in Section <ref> to propose numerical schemes. 
We first sketch in <ref> the main steps of the analysis of <cit.> that led to a formulation of the problem as a transmission problem between the two connected components of the fluid domain, and with transmission conditions determined through the resolution of an ODE forced by a source term involving the traces at the contact points of the surface elevation and of their second order time derivative. We then remark in <ref> that these traces solve themselves a second order ODE, but which is forced by a source term which is easier to compute. This observation is the key ingredient that allows us to derive in <ref> an augmented formulation. It has the same structure as the formulation derived in <ref>, namely, it is a transmission problem coupled with a forced ODE. The crucial difference is that this ODE does no longer require the computation of the traces of the surface elevation at the contact points and that it can easily be computed numerically. Finally, we show in <ref> that this augmented transmission problem can be rewritten as an initial value problem, which is the structure the numerical schemes of Section <ref> are based on. §.§ Reduction to a transmission problem coupled with scalar ODEs We remind here the main steps of the derivation of the equations describing the interactions of a partially immersed object with one-dimensional waves in a regime where these waves can correctly be described by the Boussinesq-Abbott equations (see <cit.>); the object is assumed to have vertical sidewalls and can be either fixed, in forced vertical motion, or allowed to float freely in the vertical direction under the action of the waves. In dimensionless variables, the equations involve two coefficients and μ, respectively called nonlinearity and shallowness parameters, and that are defined as =/ μ=(/)^2; in the weakly nonlinear shallow water regime in which the Boussinesq-Abbott equations are known to provide a good approximation of the motion of the waves, one has μ≪ 1 =O(μ); these conditions are assumed throughout this article. For the sake of conciseness, we also introduce the parameter κ as κ=( μ/3)^1/2; this parameter plays an important role as it measures the size of the dispersive boundary layers that appear in the analysis of mixed initial boundary-value problems for the Boussinesq equations, which are a dispersive perturbation of an hyperbolic system <cit.>. As displayed in Figure <ref>, in dimensionless coordinates, the surface of the fluid is parametrized at time t by the function x∈↦ζ(t,x), and the horizontal discharge (the vertical integral of the horizontal component of the velocity field) at time t and position x is denoted q(t,x). We also sometimes denote by h the water depth, h=1+ζ. Finally, we denote by P(t,x) the pressure at the surface of the fluid, namely, P=P(t,x,ζ(t,x)) if P denotes the pressure field in the fluid. Regarding the solid object, we denote by ±ℓ the position of its vertical sidewalls and by ζ_ w the parametrization on (-ℓ,ℓ) of its bottom (the subscript "w" stands for "wetted part"); we also denote by δ(t) the vertical deviation of the object from its equilibrium position, and by h_ eq the water depth at rest. These quantities are related through ζ_ w(t,x)=δ(t)+1/(h_ eq(x)-1 ). N.B. For the sake of simplicity, we assume throughout this article that the center of mass is located at {x=0 } and that h_ eq(x) is an even function. 
The Boussinesq-Abbott equations for the motion of the waves are given for t>0, x∈ by ζ+ q=0, (1-κ^2^2) q +( 1/hq^2)+hζ=-1/ h P (with h=1+ζ). We now have to distinguish between the exterior region =(-∞,-ℓ)∪ (ℓ,∞) where the surface of the water is in contact with the air and the interior region ℐ=(-ℓ,ℓ) where it is in contact with the object, * In the exterior region , the surface elevation ζ is free, but the surface pressure P is constrained, assumed to be equal to the (constant) atmospheric pressure P_ atm, P(t,x)=P_ atm t>0, x∈; the right-hand side in the second equation of (<ref>) therefore vanishes. * In the interior region , it is the reverse: the surface elevation is constrained because it has to coincide with the bottom of the object, ζ(t,x)=ζ_ w(t,x) t>0, x∈ℐ, but there is no constraint on the surface pressure P which, under the general approach of <cit.>, can be understood as the Lagrange multiplier associated with the constraint on the surface elevation. Plugging the constraint equation in the first equation of (<ref>) one directly gets that q(t,x)=-xδ̇+q_ i(t), where q_ i is a time dependent function corresponding to the average discharge over the interior region. Using this relation and applying to the second equation in (<ref>) provides an elliptic equation for P, -( 1/ h_ wP)=-δ̈+[h_ wζ_ w+(1/h_ w(-xδ̇+q_ i)^2)], for x∈ (-ℓ,ℓ) and with h_ w=h_ eq+δ. If we know the boundary values of P at x=-ℓ+0 and x=ℓ-0, this elliptic equation can be solved and it provides an expression for P in terms of h_ eq, δ, q_ i and of this boundary data. Using this expression in the second equation of (<ref>) then provides an expression for d/dtq_ i in terms of the same quantities. We also need coupling conditions at the contact points x=∓ℓ between the exterior and interior region. There are two of them, * Continuity of the horizontal discharge. Taking into account the expression of the discharge derived above in the interior region, this condition yields q(t,-ℓ-0)=ℓδ̇+q_ i(t) q(t,ℓ+0)=-ℓδ̇+q_ i(t). * Conservation of the total energy. Imposing conservation of the total (i.e., fluid+solid) energy classically provides the boundary data needed to solve the elliptic equation derived for the surface pressure in the interior region <cit.>. We refer to <cit.> for the derivation of these boundary data in the present context, but do not provide it here explicitly for the sake of conciseness. To summarize, we have the standard Boussinesq-Abbott equation in the exterior region, with boundary condition on the discharge q at ∓ℓ that are given in terms of two functions of time, namely, q_ i and δ. As said above, the fact that the elliptic equation for the pressure in the interior region has been solved provides an evolution equation for q_ i; the last thing to do is therefore to determine δ. If the object is fixed or in forced motion then δ is given; otherwise, it is of course given by Newton's equation. The three cases can be considered simultaneously by allowing an external force to be applied to the solid (if the solid is fixed or in forced motion, this external force F_ ext represents the vertical force exerted on the solid to maintain it fixed or with the desired motion). The outcome of this analysis, as shown[The presence of the external force is not taken into account in that reference. It is however straightforward to add it in Newton's equation; note that in the present dimensionless setting, the force has been nondimensionalized by 2ℓρ g.] 
in Theorem 3.1 of <cit.> is that the wave-structure interaction problem under consideration can be reduced to a transmission problem. Using the notations f =1/2( f(ℓ)-f(-ℓ)) f =f(ℓ)-f(-ℓ) for all f∈ C((-∞,-ℓ]∪[ℓ,∞)), this transmission problem can be written ζ+ q=0, (1-κ^2^2) q +=0 t>0, x∈ where is the shallow water momentum flux given by (<ref>), and with transmission conditions across the floating object given by q=q_ i q=-2ℓδ̇, where q_ i and δ are functions of time solving α(δ)d/dtq_ i+α'(δ)δ̇q_ i =-1/2ℓζ+𝔊, τ_κ(δ)^2δ̈+δ-β(δ)δ̇^2-1/2α'(δ)q_ i^2 =ζ+𝔊+F_ ext, where 𝔊 is the function defined on by 𝔊=1/2q^2/h^2-κ^21/h q, and where the explicit expression of the functions α, τ_κ and β, of no importance at this point of the discussion, are provided in <ref> of Appendix <ref>. We just want to emphasize that the coefficient τ_κ(δ)^2 in front of δ̈ in (<ref>) takes into account the contribution of the added mass effect (when a solid moves in a fluid, not only must it accelerate its own mass but also the mass of the fluid around it). The initial value problem corresponding to (<ref>)-(<ref>) is studied and solved in <cit.>. Its structure is that of a transmission problem coupled with a set of ODEs on q_ i and δ. This coupling acts in both ways: on the one hand, it is necessary to know q_ i and δ in order to solve the transmission problem (<ref>)-(<ref>) and on the other hand, one needs to know the solution (ζ,q) of this transmission problem to compute the source term in the right-hand side of (<ref>)-(<ref>). From the numerical view point, this last step is not easy to treat since one has to compute the numerical trace of ζ and q at the contact points x=±ℓ. The key ingredient we propose here to overcome this difficulty is to work with an augmented formulation of the problem, with additional functions of time involved in the system of ODEs for δ and q_ i, but where the computation of such traces is no longer needed. §.§ The trace equations The source terms in the right-hand sides of (<ref>)-(<ref>) involve the trace of ζ+𝔊 at x=±ℓ, with 𝔊 given by (<ref>). Since (<ref>) implies that q_|_x=±ℓ=∓ℓδ̇+q_ i and remarking that one deduces from the first equation of (<ref>) that q=-^2 ζ, we have 𝔊_|_x=±ℓ=1/2(∓ℓδ̇+q_ i/1+ζ_±)^2+κ^21/1+ζ_±ζ̈_±, with ζ_±:=ζ_|_x=±ℓ. The difficulty therefore lies in the computation of the trace of ζ at x=±ℓ and of their second time derivative. The augmented formulation consists in treating ζ_± as a new unknown function of time instead of getting it by taking the traces of ζ at the contact points. This is made possible by the following proposition which provides a second order ODE satisfied by ζ_+ and ζ_-. This requires first the introduction of the Dirichlet and Neumann inverses of the operator (1-κ^2^2) on , respectively denoted by R_0 and R_1. They are defined for all F∈ L^2() by R_0 F= u (1-κ^2 ^2)u= F , u_|_x=±ℓ=0, and R_1 F= v (1-κ^2 ^2)v= F , v_|_x=±ℓ=0. We can now state the following proposition. Note that the ODEs satisfied by ζ_± only make sense in the presence of dispersion (κ >0). Let f and g be two continuous functions of time. If (ζ,q) is a smooth solution to ζ+ q=0, (1-κ^2^2) q +=0, t>0, x∈ with as in (<ref>) and with transmission conditions q(t)=f(t) q(t)=2g(t), t>0 then ζ_±=ζ_|_x=±ℓ solve the ODEs ^2 + 1/κ^2+ /κ^2( 1/2^2 + (f+g)^2/1+) =1/κ^2 (R_1 )_++1/κ (ḟ+ġ), ^2 + 1/κ^2+ /κ^2( 1/2^2 + (f-g)^2/1+) =1/κ^2 (R_1 )_–1/κ (ḟ-ġ), where we used the notation (R_1 )_±= (R_1 )_|_x=±ℓ. 
Applying R_0 to the second equation in (<ref>) and using the boundary condition (<ref>), one gets q+R_0=(ḟ±ġ) exp(- x∓ℓ/κ) ^±. Remarking further that R_0= R_1, the problem is therefore reduced to ζ+ q=0, q+ R_1=(ḟ±ġ) exp(- x∓ℓ/κ). Differentiating with respect to x the second equation of (<ref>) and using the fact that q=-^2 ζ, one gets -^2 ζ+^2 R_1 =∓1/κ (ḟ±ġ) exp( -1/κ(x∓ℓ)). Since moreover ^2 =-1/κ^2(1-κ^2 ^2)+1/κ^2, we deduce that -^2 ζ- 1/κ^2 +1/κ^2 R_1 =∓1/κ (ḟ±ġ) exp( -1/κ(x∓ℓ)). Taking the trace at x=±ℓ, and substituting _|_x=±ℓ= +( 1/2^2 + (f± g)^2/1+), we obtain the equations stated in the proposition. §.§ The augmented formulation Proposition <ref> can be applied to the wave-structure interaction system (<ref>)-(<ref>) with f=q_ i and g=-ℓδ̇. Together with (<ref>)-(<ref>), this shows that q_ i, δ, and solve the second order differential system ℳ[δ,] d/dt[ q_ i; δ̇; ζ̇_+; ζ̇_- ] + [ 1/2ℓζ; δ-ζ; ζ_+; ζ_- ] = 𝔔[δ,](q_ i,δ̇,) + [ 0; F_ ext; (R_1 )_+; (R_1 )_- ], where ℳ[δ,] is the invertible matrix ℳ[δ,]:= ( [ α(δ) 0 κ^2/2 ℓ1/1+ - κ^2/2 ℓ1/1+; 0 τ_κ(δ)^2 - 1/2κ^2/1+ - 1/2κ^2/1+; -κ ℓκ κ^2 0; κ ℓκ 0 κ^2 ]), while 𝔔[δ,](q_ i,δ̇,) is a four-dimensional vector whose entries are quadratic forms in (q_ i,δ̇,) with coefficients depending on δ, and (the exact expression of these terms is of no importance at this point, and we refer the reader to Appendix <ref>). The second order differential system (<ref>) can classically be transformed into a first order ODE on q_ i, δ̇, , , δ, and with forcing terms (R_1)_± and F_ ext (see Appendix <ref>). The augmented formulation is obtained by replacing and by two additional unknowns and in this first order ODE. It reads therefore ζ+ q=0, (1-κ^2^2) q +=0 t>0, x∈, where is as in (<ref>), and with transmission conditions across the floating object given by q=q_ i q=-2ℓδ̇, where q_ i and δ are functions of time determined by the first order ODE d/dtΘ=𝒢(Θ, (R_1)_+,(R_1)_-,F_ ext), with Θ:=(q_ i,δ̇,,,δ,,)^ T and where 𝒢 is a smooth function of its arguments and whose exact expression is given in Appendix <ref>. It is a consequence of Proposition <ref> below that if the initial data for and are chosen appropriately, then = and = for all times, as expected (see Proposition <ref> below). The difference between the augmented formulation (<ref>)-(<ref>) and the original formulation (<ref>)-(<ref>) lies in the ODE used to determine the functions δ and q_ i involved in the transmission conditions. In the original formulation, one has a first order 3-dimensional ODE (on δ, δ̇ and q_ i), which is forced by F_ ext, (R_1 )_|_x=±ℓ, ζ_|_x=±ℓ and d^2/dt^2ζ_|_x=±ℓ. In the augmented formulation, the ODE is of higher dimension, namely, it is a first order 7-dimensional ODE (on δ, δ̇, q_ i, and ), but it is forced only by F_ ext and (R_1 )_|_x=±ℓ. These two quantities do not raise any difficulty since F_ ext is a given external force and (R_1 )_|_x=±ℓ can easily be computed numerically (see <ref> below), contrary to the traces of ζ and ^2 ζ at the contact points that appear in the original formulation and that are very delicate to compute. §.§ Transformation into an initial value problem We reformulate in this section the wave-structure transmission problem (<ref>)-(<ref>) in the form of an initial value problem that is easier to handle from a numerical point of view. This formulation is the new augmented formulation (with additional variables ) we shall base our numerical schemes on. 
In <cit.> (see also the lecture notes <cit.>), the well-posedness of the standard formulation (<ref>)-(<ref>) is proved, and it could similarly be obtained for the augmented formulation; for the sake of conciseness, we do not give here such a result and just prove that both formulations have the same regular solutions, and that the additional variables coincide with the traces ζ_|_x=±ℓ under certain compatibility conditions on the initial data. We use the following notation for the source term in the reformulated momentum equation, 𝒮_±(Θ, (R_1)_±,F_ ext):=𝒢_1(Θ, (R_1)_±,F_ ext)∓ℓ𝒢_2(Θ, (R_1)_±,F_ ext), where 𝒢_1 and 𝒢_2 denote the first two components of the mapping 𝒢 in the right-hand side of the ODE (<ref>). We also recall that we denote by ^-=(-∞,-ℓ) and ^+=(ℓ,∞) the two connected components of the fluid domain . Let (ζ,q) and Θ=(q_ i,δ̇,,,δ,,)^ T be a regular solution to the transmission problem (<ref>)-(<ref>) with initial data U=(ζ^ in,q^ in) and Θ^ in. Then (ζ,q) and Θ also solve the initial value problem ζ+ q=0, q+ R_1 =𝒮_±(Θ, (R_1)_±,F_ ext)exp(-1/κx∓ℓ) ^±, and d/dtΘ=𝒢(Θ, (R_1)_+,(R_1)_-,F_ ext). The converse is true, provided that the initial data satisfy the compatibility conditions q^ in=Θ^ in_1, q^ in=-2ℓΘ^ in_2. If moreover the initial data also satisfy ζ^ in_|_x=ℓ=Θ^ in_6, ζ^ in_|_x=-ℓ=Θ^ in_7, -( q^ in)_|_x=ℓ=Θ^ in_3, -( q^ in)_|_x=-ℓ=Θ^ in_4, then for all times, one has ζ_|_x=±ℓ=. The proposition deals with the most general situation to cover in a unified way all the situations considered in this article. It can be simplified in various cases, as shown in Appendix <ref>. For instance, * When the object is freely floating, one takes F_ ext≡ 0. * When the data and the object are symmetric with respect to the vertical axis x=0, then q_ i=0. * If the object is in forced motion, i.e. if δ≡δ_ forced for some given function δ_ forced, then the ODE (<ref>) can be reduced to a 5-dimensional ODE on (q_ i,ζ̇_+,ζ̇_-,ζ_+,ζ_-)^ T. Note that in this situation, an external force is needed to maintain the object fixed or with the desired motion (the exact expression of this force is derived in Remark <ref>). Let us first prove the direct implication. Proceeding as in the proof of Proposition <ref>, we can rewrite the second equation of (<ref>) on each component ^± of under the form q + R_1=d/dt(q_|_x=±ℓ)exp(-1/κx∓ℓ) ^±. From the transmission conditions (<ref>), we have q_|_x=±ℓ=∓ℓδ̇+q_ i so that the result follows from the observation that, owing to (<ref>), one has d/dtq_ i=𝒢_1(Θ, (R_1)_±,F_ ext) d/dtδ̇=𝒢_2(Θ, (R_1)_±,F_ ext). Conversely, if (ζ,q) solves (<ref>), it suffices to apply (1-κ^2^2) to the second equation to show that (ζ,q) solves (<ref>). The equation (<ref>) on Θ is the same as (<ref>), so that the only thing we need to prove is that the transmission conditions (<ref>) hold. Taking the trace of the second equation of (<ref>) at the contact points and taking the average and the jump, we find that d/dtq=𝒢_1(Θ,(R_1)_±,F_ ext) d/dtq=-2ℓ𝒢_2(Θ,(R_1)_±,F_ ext), or equivalently (from the definition of 𝒢), d/dtq=d/dtq_ i d/dtq=d/dt(-2ℓδ̇). This shows that the time derivatives of the transmission conditions (<ref>) are satisfied; the compatibility conditions (<ref>) show moreover that the transmission conditions are satisfied at t=0. They are therefore satisfied for all times. For the last assertion, we can use Proposition <ref> to show that ζ_|_x=±ℓ and satisfy the same second order ODE in time.
The additional condition (<ref>) ensures that these initial data and the initial value of the first time derivative coincide (we also used the first equation of (<ref>) to substitute d/dt(ζ_|_x=±ℓ)=-( q)_|_x=±ℓ). They are therefore identical for all times. § NUMERICAL SCHEMES We present in this section one first order and one second order numerical scheme for the resolution of the augmented formulations derived in this article. We explain these schemes for the general formulation (<ref>)-(<ref>). We recall that these equations are conservation laws with a nonlocal flux and an exponentially localized source term, U+(𝔉_κ(U) )=𝒮_±(Θ,(R_1)_-,(R_1)_+,F_ ext) 𝔟(x∓ℓ) ℰ_± with U=(ζ,q)^T and where 𝔉_κ(U) is the nonlocal flux given by 𝔉_κ(U)=(q,R_1 )^T, while the source terms 𝒮_± are as in (<ref>) and 𝔟 is the shape of the source term 𝔟(x) =( 0, exp(- x/κ) )^ T. The quantity Θ is defined as Θ=( q_ i, δ̇,,,δ,,)^ T and solves a system of 7 first order ODEs forced by (R_1)_+, (R_1)_- and F_ ext, d/dtΘ=𝒢(Θ,(R_1)_+,(R_1)_- ,F_ ext). As explained in Remark <ref>, in some of the examples considered in this paper, the ODE (<ref>) can be reduced to a possibly lower dimensional ODE; we refer to Appendix <ref> where such simplifications are derived. §.§ Notations We gather here the main notations used to write our numerical schemes. We first set our notations for the discretized quantities, and then explain how we define the discrete version of the nonlocal operator R_1 defined in (<ref>). §.§.§ Discretization We denote by Δ x the mesh size and decompose the two components ^- and ^+ of the exterior domain into a disjoint union of cells (see Figure <ref>), ^-=(⋃_i=-∞^-1𝒞_i )∪𝒞_0^- ^+=𝒞_0^+∪(⋃_i=1^∞𝒞_i ) with 𝒞_i=(x_i-1/2,x_i+1/2) i≠ 0 𝒞_0^-=(x_-1/2,-ℓ), 𝒞_0^+=(ℓ,x_1/2) and where x_i+1/2=-ℓ+(i+1/2)Δ x i<0 x_i-1/2=ℓ+(i-1/2)Δ x i>0. Of course, the numerical domain is of finite size but we work with large enough domains so that the influence of the left and right boundaries of the numerical domain is not seen in the computations near the solid object. For the sake of clarity, we do not mention these boundaries in the presentation of the numerical scheme. We also write Δ t>0 for the time step and denote by U^n=U(nΔ t), Θ^n= Θ( n Δ t ), F_ ext^n=F_ ext(nΔ t) the values of U=(ζ,q)^ T, of the ℝ^7-valued vector Θ involved in the ODE (<ref>), and of the external force F_ ext at each time step. We further denote by U^n_i (i∈(ℤ\{0})∪{0^-,0^+}) the approximation of U^n in the middle of the cell 𝒞_i furnished by the numerical scheme. §.§.§ About the nonlocal operator R_1 The equations (<ref>)-(<ref>) involve the quantities R_1 and (R_1)_±, where we recall that R_1 is the inverse of (1-κ^2^2) on ^-∪^+ with Neumann boundary condition at ±ℓ, as defined in (<ref>), and that (R_1)_± stands for the trace of R_1 at ±ℓ. We keep the same notation R_1 for the discrete inverse of the operator (1-κ^2 ^2) with homogeneous Neumann condition at the boundary. We use here a standard centered second order finite difference approximation for the discretization of ^2. More precisely, if F=(f_i)_|i|≥ 1, we denote by R_1 F the vector R_1 F=V where V=(v_i)_|i|≥ 1 is given by the resolution of the equations v_i -κ^2v_i+1-2 v_i +v_i-1/Δ x^2=f_i, |i|≥ 2 while, for i=±1, a second order discretization of the Neumann boundary condition leads to v_-1 -κ^2 2/3v_-2- v_-1/Δ x^2=f_-1 v_1 -κ^2 2/3v_2- v_1/Δ x^2=f_1.
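For concreteness, the discrete operator R_1 on one component of the exterior domain amounts to solving a tridiagonal linear system; a minimal sketch is given below for the right component (Python with NumPy/SciPy is assumed here, since the paper does not prescribe an implementation, and the closure used at the outer, truncated end of the computational domain is our assumption, the text only requiring that domain to be large enough).

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def apply_R1(f, dx, kappa):
    # Discrete version of R_1 on the right component: solve (1 - kappa^2 d_xx) v = f
    # with the second-order interior stencil and the Neumann row at i = 1 shown above.
    N = f.size
    r = kappa**2 / dx**2
    main = np.full(N, 1.0 + 2.0 * r)      # interior rows: (1 + 2r) v_i
    lower = np.full(N - 1, -r)            # -r v_{i-1}
    upper = np.full(N - 1, -r)            # -r v_{i+1}
    # Neumann closure at the contact point: v_1 + (2/3) r (v_1 - v_2) = f_1
    main[0] = 1.0 + (2.0 / 3.0) * r
    upper[0] = -(2.0 / 3.0) * r
    # Outer end of the truncated numerical domain: same Neumann-type closure (assumption).
    main[-1] = 1.0 + (2.0 / 3.0) * r
    lower[-1] = -(2.0 / 3.0) * r
    A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
    return spla.spsolve(A, f)

The discrete traces (R_1 F)_± used by the schemes are then recovered from the values closest to the contact points, as described next.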
Similarly, we still denote by (R_1F)_± the discrete version of the traces R_1F at the boundaries; they are naturally defined by the second order approximation (R_1 F)_-=4/3v_-1-1/3v_-2 (R_1 F)_+=4/3v_1-1/3v_2. §.§ A first order scheme We propose here an adaptation of the Lax-Friedrichs scheme for the conservation laws with nonlocal flux (<ref>). This scheme is an extension of the scheme used in <cit.> for the numerical simulation of the Boussinesq equations with generating boundary condition (i.e. with data on ζ at the entrance of the numerical domain). It reads U_i^n+1-U_i^n/Δ t+1/Δ x( 𝔉^n_κ,i+1/2-𝔉^n_κ,i-1/2)=𝒮_±^n𝔟_i, ± i≥ 1, n≥ 0, with 𝒮_±^n=𝒮_±(Θ^n,(R_1^n)_+,(R_1^n)_-,F_ ext^n) 𝔟_i=𝔟(iΔ x) ± i>0; the discrete flux correspond to the Lax-Friedrichs scheme, 𝔉_κ,i+1/2^n=1/2( 𝔉_κ,i+1^n+𝔉_κ,i^n)-Δ x/2Δ t( U_i+1^n-U_i^n ) i≤ -2, 𝔉_κ,i-1/2^n=1/2( 𝔉_κ,i^n+𝔉_κ,i-1^n)-Δ x/2Δ t( U_i^n-U_i-1^n ) i≥ 2, with the notations 𝔉_κ,i^n=( q_i^n, (R_1^n)_i )^ T ^n=(U^n); finally, for i=± 1, we must adapt (<ref>) in the following way, 𝔉_κ,-1/2^n=1/2( 𝔉_κ,0^-^n+𝔉_κ,-1^n)-Δ x/2Δ t( U_0^-^n-U_-1^n ) 𝔉_κ,1/2^n=1/2( 𝔉_κ,1^n+𝔉_κ,0^+^n)-Δ x/2Δ t( U_1^n-U_0^+^n ) with 𝔉_κ,0^±^n=( q^n_0^±, (R_1^n)_±)^ T; the component (R_1^n)_± is computed according to (<ref>), but we still need to define q_0^±^n. By definition q^n_0^± is the approximation at time nΔ t of the trace of the discharge q at ±ℓ. From the transmission conditions (<ref>) of the continuous problem, we have q_|_x=±ℓ=q_ i∓ℓδ̇. Recalling also that q_ i and δ̇ are respectively the first and second components of Θ, this relation can be rewritten q_|_x=±ℓ=Θ_1∓ℓΘ_2. At the discrete level, this leads to the following definition for q^n_0^±, q^n_0^±=Θ^n_1∓ℓΘ^n_2. The equation (<ref>) is discretized with a first-order explicit Euler scheme: Θ^n+1 -Θ^n /Δ t = 𝒢(Θ^n,(R_1^n)_+,(R_1^n)_- ,F_ ext^n). The equations (<ref>)-(<ref>) furnish an induction relation that allows to compute U^n+1 and Θ^n+1 in terms of U^n and Θ^n. It need of course to be initiated with initial data that are taken of the form U_i^0=(ζ^ in_i,q^ in)_i^ T ( i∈(ℤ\{0})∪{0^-,0^+}), with ζ^ in and q^ in describing the initial wave field in the exterior domain, and Θ^0=(q_ i^ in, δ^(1), ζ_+^(1), ζ_-^(1), δ^(0), ζ_+^(0), ζ_-^(0))^ T satisfies the discrete version of the compatibility conditions of Proposition <ref>, namely, q^ in=q_ i^ in, q^ in=-2ℓδ^(1), ζ_±^(0)=ζ_±^ in, ζ^(1)_±=-( q^ in)_±. §.§ A second order scheme We propose here an adaptation of the MacCormack scheme for the conservation laws with nonlocal flux (<ref>), coupled with a second-order Heun integration scheme for the system of 7 first-order ODEs (<ref>). Both are predictor-corrector schemes. We use the same notations as in the previous subsection and can decompose the scheme into four main steps: - Prediction step for the MacCormack scheme. This reads U_i^n,*-U_i^n/Δ t+1/Δ x( 𝔉^n_κ,i-𝔉^n_κ,i-1)=𝒮_+^n𝔟_i, i > 1, n≥ 0, with U_i^n,*=(ζ_i^n,*,q_i^n,*)^ T. We use a symmetric scheme with respect to x=0 so that, for negative values of i, we use a forward rather than backward derivative for the flux, U_i^n,*-U_i^n/Δ t+1/Δ x( 𝔉^n_κ,i+1-𝔉^n_κ,i)=𝒮_-^n𝔟_i, i < -1, n≥ 0, For i = 1 and i=-1 it reads U_1^n,*-U_1^n/Δ t+1/Δ x( 𝔉^n_κ,1-𝔉^n_κ,0^+) =𝒮_+^n𝔟_1, U_-1^n,*-U_-1^n/Δ t+1/Δ x( 𝔉^n_κ,0^--𝔉^n_κ,-1) =𝒮_-^n𝔟_-1, for n≥ 0 and with 𝔉_κ,0^±^n as in (<ref>). - Prediction step for the Heun scheme. This step is similar to a first-order explicit Euler scheme, Θ^n,* - Θ^n /Δ t = 𝒢(Θ^n,(R_1^n)_+,(R_1^n)_- ,F_ ext^n). - Corrector step for the MacCormack scheme. 
With the quantities computed in the previous steps, we define ^n,*=(U^n,*) q^n,*_0^±=Θ^n,*_1∓ℓΘ^n,*_2 as well as an intermediate nonlocal flux and an intermediate source term, 𝔉_κ,i^n,* =( q_i^n,*, (R_1^n,*)_i )^ T |i|≥ 1, n≥ 0, 𝒮_±^n,* =𝒮_±(Θ^n,*,(R_1^n,*)_+,(R_1^n,*)_-,F_ ext^n) n≥ 0. The correction step for the MacCormack scheme then reads U_i^n+1-U_i^n/Δ t+𝔉^n_κ,i -𝔉^n_κ,i-1+ 𝔉^n,*_κ,i+1 -𝔉^n,*_κ,i/ 2Δ x = 𝒮_+^n + 𝒮_+^n,*/2𝔟_i i≥ 1, for n≥ 0. Here again, we take a symmetric scheme so that for i≤ -1, we take a forward difference of 𝔉^n and a backward difference of 𝔉^n,*, U_i^n+1-U_i^n/Δ t+𝔉^n_κ,i+1 -𝔉^n_κ,i+ 𝔉^n,*_κ,i -𝔉^n,*_κ,i-1/ 2Δ x = 𝒮_-^n + 𝒮_-^n,*/2𝔟_i i≤ -1; in particular, there is no need to define boundary values 𝔉_κ,0^±^n,* of the intermediate flux. - Correction step for the Heun scheme. This reads, for n≥ 0, Θ^n+1 - Θ^n /Δ t = 𝒢(Θ^n,(R_1^n)_± ,F_ ext^n) + 𝒢(Θ^n,*,(R_1^n,*)_± ,F_ ext^n+1) /2 . The initial data have the same form as for the first order scheme described in the previous subsection. § NUMERICAL SIMULATIONS We have seen in <ref> that the wave-structure interaction problem under consideration in this paper can be reduced to a transmission problem potentially coupled to two forced ODEs for the vertical displacement δ of the object and the mean discharge q_ i under the object. We first consider in <ref> a situation where this coupling is absent. This corresponds to the case where a wave is generated in a wave tank by moving the object vertically with a prescribed motion. This example is of particular interest since it provides an efficient way to generate waves for the Boussinesq equations at the entrance of a numerical domain if we have at our disposal time series of the horizontal discharge at the boundary, thereby extending the result of <cit.> where data on the surface elevation were used. We then consider in <ref> the return to equilibrium problem (also called decay test or drop test by engineers) which consists in releasing an object from an out of equilibrium position and observing its oscillations. These examples involve the coupling of the transmission problem with the ODE on δ. In the linear case, we are able to derive exact explicit solutions that we compute to check the numerical convergence of our scheme; the nonlinear case is then investigated and the importance of the dispersive effects pointed out by comparing with simulations based on the nonlinear shallow water equations instead of the Boussinesq system. We then investigate in <ref> a configuration where the transmission problem is coupled to the interior discharge q_ i, namely, the interaction of waves with a fixed partially immersed object. Here again, we derive an explicit exact solution in the linear case that we use to check that this coupling is also captured with second-order accuracy. The nonlinear case is then considered. Finally, a configuration involving the most general coupling (with both δ and q_ i) is considered in <ref>; it consists in the interaction of a solitary wave with an object freely floating in the vertical direction. §.§ Wave generation The first physical configuration we consider consists in creating waves in a fluid initially at rest by moving up and down a partially immersed object. By symmetry, it is enough to consider the waves in the right component ^+=(ℓ,∞) of the fluid domain.
As shown in <ref> of Appendix <ref>, the mathematical formulation of this problem is a particular case of the following initial boundary value problem with boundary condition on the discharge q, namely, ζ+ q=0, (1-κ^2^2) q +=0, t>0, x>ℓ with as in (<ref>) and with boundary condition q_|_x=ℓ(t)=g(t), t>0 and initial condition (ζ,q)_|_t=0(x)=(ζ^ in,q^ in)(x), x>ℓ, and where g, ζ^ in and q^ in are some given functions satisfying the compatibility condition q^ in(x=0)=g(t=0), which is obviously necessary to obtain solutions that are continuous at the origin in time and space. This problem is somehow symmetric to the one considered in <cit.> where a boundary condition on ζ rather than q was considered and where a first order scheme was proposed. For the wave generation problem, one has (ζ^ in,q^ in)=(0,0) and g(t)=-ℓδ̇_ forced, where δ_ forced is the prescribed vertical displacement of the center of mass of the object. Contrary to the other physical configurations we consider in this article, the wave generation problem (or more generally, the initial boundary value problem (<ref>)-(<ref>)) does not require the resolution of an ODE to determine the boundary data on the discharge. The formulation as an initial value problem given in Proposition <ref> then reduces to ζ+ q=0, q+ R_1=𝒮_+(t) exp(- x-ℓ/κ) 𝒮_+(t)=ġ(t), for x>ℓ and with initial condition (<ref>) satisfying (<ref>). The numerical scheme presented in <ref> and <ref> can be simplified by skipping the second and fourth step related to the Heun scheme, and by taking simply for the first-order scheme: 𝒮^n=g^n+1-g^n/Δ t, and for the second-order scheme: 𝒮^n=g^n+1-g^n-1/2 Δ t 𝒮^n,*=g^n+2-g^n/2 Δ t. The wave generation problem gives us the opportunity to validate our numerical code with a nonlinear case. The Boussinesq-Abbott equations admits solitary waves solutions of the form (ζ,q)(t,x)=(ζ_c(x-x_0-ct), cζ_c(x-x_0-ct)), where c>0, x_0∈ and ζ_c is a smooth, even and fastly decaying function. These solutions can be used to test the precision of the code. For the Boussinesq-Abbott equations, there is no explicit formula for ζ_c and it is determined by the resolution of a nonlinear second order ODE, namely, c^2κ^2ζ_c”-c^2 ζ/1+ζ+^2ζ^2+2ζ/2=0, with c^2=/63ζ_ max^2+ζ_ max^3/ζ_ max-ln(1+ζ_ max)/ (see for instance <cit.> for more details on the computations); these formula furnish a family of solitary waves parametrized by their maximal amplitude ζ_ max. Solving the above ODE with a standard high precision ODE solver provides us a solution to (<ref>) that we use to assess the precision of the numerical solution obtained with our numerical scheme for (<ref>) with discharge boundary data g(t)=cζ_c(0-x_0-ct) and initial data (ζ^ in,q^ in)=(ζ_c(x-x_0-0), cζ_c(x-x_0-0)). For very fine meshes, spurious oscillations may appear. These oscillations are reminiscent of the oscillations that appear when using dispersive schemes (such as the Lax-Wendroff or MacCormack schemes) to simulate shock waves. Flux-limiters methods are typically used to control this phenomenon <cit.>. Here, these oscillations are created at the boundary, whose position is fixed, and we use a very simple efficient method consisting in adding an artificial viscosity on a finite number n_0 of cells near the boundary. 
More precisely, in the right component of the fluid domain (the left component is treated symmetrically) we add the following term to the right-hand side of the first component of (<ref>), νΔ x/Δ t( ζ^n_i+1-2ζ^n_i+ζ^n_i-1) 1≤ i≤ n_0 with ν>0 a fixed coefficient, that we take equal to 2.136. This corresponds to an artificial viscosity ν(Δ x)^3/Δ t^2 ζ; for a fixed ratio Δ x/Δ t, this viscosity is of order 2 and therefore does not alter the overall second order of the MacCormack scheme. We choose for this test ζ_ max=1 and 3κ^2 = ϵ = 0.3. Once c is computed, we solve the differential equation (<ref>) with a high order numerical method in order to obtain our reference solution. The size of the computational domain is L = 6. The space step is computed as Δ_x = L/N, with N = 200,240,300,400. We take a constant time step Δ_t = 0.8 Δ_x. The maximum of the soliton is initially located on the left of the computational domain, at x = -L/2, so that the initial datum in the small domain is almost zero, and then the soliton propagates inside it. The numerical results at final time T_f = 20 are presented for both schemes on Figures <ref> and <ref>, showing respectively a first-order and a second-order convergence. These results correspond to a final time where the soliton has completely entered the computational domain, so that the influence of the dispersive boundary layer due to the generating condition at the left side of the domain is nearly zero. However, if one were to choose a final time where the soliton is still entering the computational domain, then one would notice that the error for the variable ζ is only first-order in the vicinity of the left boundary of the computational domain, while it is still second-order for the variable q. This first-order error is probably due to a lack of accuracy in the numerical evaluation of the spatial derivative of q near the left boundary. §.§ Return to equilibrium We consider here the return to equilibrium problem (also called decay test), which consists in dropping the floating object from an out of equilibrium position and letting it oscillate vertically and stabilize towards its equilibrium position. This is a problem of practical importance because it is used by engineers to characterize some buoyancy properties of the solid, and theoretically because it leads to simpler equations than the general wave-structure equations. For instance, in the nonlinear nondispersive case (ϵ≠ 0, κ=0), it is possible to show that the dimensionless vertical displacement δ of the solid with respect to its equilibrium position is fully described by a second order nonlinear scalar ODE <cit.> and that in the linear dispersive case (ϵ=0, κ≠ 0) it is governed by a second order linear integro-differential equation <cit.>. In the nondispersive case, similar equations have also been derived in the presence of viscosity <cit.> in the linear case, as well as in the 2D radial and partially linear case <cit.>. In the presence of nonlinear and dispersive effects (ϵ≠ 0, κ≠ 0), it does not seem possible to derive such a simple equation for the motion of the solid and the wave-structure equations must therefore be solved. As for the wave generation problem, there is a symmetry in this problem which allows one to consider only the right part ^+ of the fluid domain, and the governing wave-structure interaction equations reduce to an initial boundary value problem of the form (<ref>) with g=-ℓδ̇.
The difference is that the vertical displacement δ is no longer a given function δ but is found through the resolution of Newton's equation (see (<ref>) for its general expression). Since this equation involves the trace of ζ at the contact point, we have to work with the augmented formulation provided by Proposition <ref>. Since in this particular case, one has F_ ext≡ 0, q_ i≡ 0 and ζ_+=ζ_-, the 7-dimensional ODE on Θ can be simplified into a simpler 4-dimensional ODE (see <ref> in Appendix <ref>). The interest of this test case is that, since the interior discharge identically vanishes, it allows us to investigate specifically the coupling between the waves and the vertical displacement of the object. We first consider in <ref> the linear case for which explicit solutions exist and can be used to investigate the precision of the code, and then show in <ref> some simulations in the nonlinear case. §.§.§ Convergence error in the linear case We first consider the linear case (=0) since in the case, it was shown in <cit.> that the evolution of δ can be found by solving a linear second order integro-differential equation, namely, (τ_κ(0)^2+ℓκ)δ̈+ℓ𝒦^1_κ * δ̇+δ=0, with initial conditions δ(0)=δ_0 and δ̇(0)=0 and where τ_κ(·) is defined in Appendix <ref> and the kernel 𝒦_κ^1 is given in terms of the first Bessel function J_1 by the relation 𝒦^1_κ(t)=1/tJ_1(t/κ). The solution of this integro-differential equation is given explicitly by taking the Laplace transform (denoted with a hat), δ(s)=τ_κ(0)^2s+ℓ√(1+κ^2 s^2)/τ_κ(0)^2 s^2+sℓ√(1+κ^2 s^2)+1δ_0, s∈ℂ_0, where ℂ_0 is the half-plane of complex numbers s such that s>0. The vertical displacement deduced from the exact formula (<ref>), and denoted δ_ exact is compared with the surface elevation δ found by solving the wave-structure equations using the numerical schemes presented in <ref>. In order to discard possible numerical errors in the computation of the inverse Laplace transform one has to apply to (<ref>) two different inversion methods (the Euler and Talbot methods <cit.>); we impose that they match up to 10^-4 terms to consider the solution provided as relevant to be considered as an exact solution for our convergence studies. In our numerical tests we chose 3κ^2 = 0.3 or 3κ^2 = 0.1, h_eq = 1-0.3, l = 4 and the size of the computational domain L = 30. The space steps Δ_x were computed as Δ_x = (L-l)/(N+1), with N=300,400,500,600 for the first-order scheme and N = 60,120,240,320 for the second-order scheme. The time step was computed as Δ_t = 0.9 Δ_x. The numerical results at final time T_f = 15 computed with the first-order and the second-order schemes show respectively a first-order convergence, see Figure <ref> and a second-order convergence, see Figure <ref>. On Figure <ref> we compare the numerical results for the two schemes for 3κ^2 = 0.3 and N = 100, showing evidence that it is advantageous to use the second-order scheme. §.§.§ The nonlinear case We do not have any exact solution to compare with in the nonlinear case, and we therefore use mesh-convergence to study the order of our schemes. The reference solution is computed with a very refined mesh: N = 2400. We chose ϵ = 0.3, 3κ^2 = 0.3 or 3κ^2 = 0.1, h_eq = 1-0.3, l = 4 and the size of the computational domain L = 30. The space steps Δ_x were computed as Δ_x = (L-l)/(N+1), with N=160,200,240,300,400 for the first-order scheme and N = 120,160,200 for the second-order scheme. 
The meshes are defined so that the points of the coarse meshes always coincide with the points of the very refined mesh of the reference solution. The time step was computed as Δ_t = 0.7 Δ_x. The numerical results at final time T_f = 15 computed with the first-order and the second-order schemes show respectively a first-order convergence, see Figure <ref>, and a second-order convergence, see Figure <ref>. We perform another test to study qualitatively the influence of the dispersion on the nonlinear decay test. We compare the trajectories obtained for different values of κ with the trajectory obtained in the nondispersive case (κ=0). In the latter case, it was shown in <cit.> that the evolution of δ can be found, under some smallness assumptions, by solving a second order differential equation of the form τ_0(δ)^2δ̈+ℓδ̇+δ + B(δ) δ̇^2 =0, where B(·) is a smooth function whose exact expression can be found in Corollary 4.3 of <cit.>. This nondispersive solution is used as a reference to illustrate the contribution of the dispersive terms. On Figure <ref> we compare this solution with the numerical results for ϵ=0.3, h_eq = 1-0.3, l = 0.25 and various values of 3κ^2 ranging from 0.05 to 1. §.§ Waves interacting with a fixed object We consider here waves that are interacting with a fixed partially immersed object. This is a particular example of prescribed motion (δ≡ 0), but contrary to the wave generation problem, the waves are not assumed to be symmetric with respect to the central axis {x=0}. It follows that the interior discharge q_ i does not vanish identically and that it must be found by solving the ODE (<ref>). From a mathematical point of view, this physical configuration is somehow symmetric to the return to equilibrium problem in the sense that it allows one to focus on the coupling of the fluid equation with the dynamics of the interior discharge q_ i (since δ≡ 0), while for the return to equilibrium problem the coupling was only with the vertical displacement δ (since in that case q_ i≡ 0). In this case, the 7-dimensional ODE on Θ of the augmented formulation given in Proposition <ref> can be reduced to a 5-dimensional ODE, as explained in <ref> of Appendix <ref>. We first study in <ref> the linear case for which we exhibit a family of explicit solutions that we use to validate our code; the nonlinear case is then considered in <ref>. §.§.§ Convergence error in the linear case In order to investigate the ability of our scheme to correctly describe the coupling of the Boussinesq-Abbott equations with the average interior discharge q_ i, we exhibit an explicit solution of the equations in the linear case (ϵ=0). In that case, the wave-structure equations (<ref>)-(<ref>) take the form ζ+ q=0, (1-κ^2^2) q+ζ=0 ^± with transmission conditions q=0 q=q_ i and where q_ i solves the forced ODE α(0)d/dtq_ i=-1/2ℓζ+κ^2 ^2 ζ. A family of exact solutions which are periodic in time is given in the following proposition (which can be checked with basic computations omitted here). Let k ≠ 0 and ω≠ 0 satisfy the dispersion relation ω^2= k^2/1+ κ^2 k^2.
For all (ζ_+^c, ζ_-^c, ζ^s, q_+^s, q_-^s, q^c)∈^6, the functions (ζ,q) defined in ^± by ζ(x ∓ℓ,t) = k/2[ (ζ_±^c+ q^c) cos(kx-ω t) + (ζ_±^c- q^c) cos(kx+ω t) + (ζ^s+ q_±^s) sin(kx-ω t) + (ζ^s- q_±^s) sin(kx+ω t) ], q(x ∓ℓ,t) = ω/2[ (ζ_±^c+ q^c) cos(kx-ω t) - (ζ_±^c- q^c) cos(kx+ω t) + (ζ^s+ q_±^s) sin(kx-ω t) - (ζ^s- q_±^s) sin(kx+ω t) ], solve (<ref>)-(<ref>) with initial data ζ^ in(x ±ℓ) = k ζ_±^ccos(kx) + k ζ^ssin(kx), x ∈ℰ_± q^ in(x ±ℓ) = ω q^ccos(kx) + ω q_±^ssin(kx), x ∈ℰ_± and with q_ i(t)= ω[ q^c cos(ω t) - ζ^s sin(ω t) ]. If moreover q^c= -1/2 ℓα(0) k (q_+^s-q_-^s), ζ^s= 1/2 ℓα(0) k (ζ_+^c-ζ_-^c) then (<ref>) is also satisfied, with initial data q_ i^ in=ω q^c. In our numerical tests we chose 3κ^2 = 0.3 or 3κ^2 = 0.1, h_eq = 1-0.2, l = 1, k=2 and the size of the computational domain L = 10. The space steps Δ_x were computed as Δ_x = (L-l)/(N+1), with N=200,240,300,360,400 for the second-order scheme. The time step was computed as Δ_t = 0.9 Δ_x. To impose the exact solution on both left and right outer boundaries we use the wave generation method described in <ref>. The numerical results at final time T_f = 1 show a second-order convergence, see Figure <ref>. On Figure <ref> one can observe the shape of this exact solution, computed with N = 400 points. §.§.§ The nonlinear case In the absence of explicit solution in the nonlinear case, we use mesh-convergence to study the precision of our schemes. In this test the initial condition is the solitary wave described in <ref>, with ζ_ max = 0.2, centered at x = -15, at the left side of the fixed object. The size of the computational domain is L = 30. For this test the reference solution is computed with a very refined mesh: N = 2400. We chose ϵ = 0.3, 3κ^2 = 0.3 or 3κ^2 = 0.1, h_eq = 1-0.3, l = 4. The space steps Δ_x were computed as Δ_x = (L-l)/(N+1), with N=160,200,240,300,400 for the first-order scheme and N = 100,120,160 for the second-order scheme. The meshes are defined so that the points of the coarse meshes always coincide with the points of the very refined mesh of the reference solution. The time step was computed as Δ_t = 0.7 Δ_x. The numerical results at final time T_f = 20 computed with the first-order and the second-order schemes show respectively a first-order convergence for q_ i, see Figure <ref> and a second-order convergence, see Figure <ref>. On Figure <ref> one can observe the shape of the numerical solution, computed with N = 400 points. §.§ Waves interacting with a freely floating object In the general case, when the object moves under the influence of the waves and the waves are in return modified by the presence of the object, one has to consider the full augmented system presented in Proposition <ref>. We use exactly the same test as in <ref>, the only difference being that the object is now allowed to move vertically under the action of the waves, so that we also have to study the convergence of δ. The space steps Δ_x were computed as Δ_x = (L-l)/(N+1), with N=240,300,400 for the first-order scheme, N = 120,160,200,240 for the second-order scheme and 3κ^2 = 0.1, and N = 80,100,120,160 for the second-order scheme and 3κ^2 = 0.3. The numerical results at final time T_f = 20 computed with the first-order and the second-order schemes show respectively a first-order convergence, see Figures <ref>, <ref> and <ref>, and a second-order convergence, see Figures <ref>, <ref> and <ref>. On Figure <ref> one can observe the shape of the numerical solution, computed with N = 400 points. 
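For all of the mesh-convergence studies reported in this section, the observed orders can be estimated, for instance, by a least-squares fit of the error against the mesh size in logarithmic scale; the short sketch below illustrates one possible way of doing so (the choice of a discrete L^2 error norm, the fitting procedure and the Python phrasing are our assumptions, since the text does not detail how the reported orders are measured).

import numpy as np

def l2_error(u, u_ref, dx):
    # Discrete L^2 distance between a coarse-mesh solution and the reference
    # solution (computed on the very refined mesh) restricted to the coarse grid.
    return np.sqrt(dx * np.sum((u - u_ref) ** 2))

def observed_order(dx_list, err_list):
    # Slope of log(error) versus log(dx): close to 1 for the Lax-Friedrichs
    # scheme and close to 2 for the MacCormack/Heun scheme.
    slope, _ = np.polyfit(np.log(dx_list), np.log(err_list), 1)
    return slope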
A comparison between Figure <ref> and Figure <ref> shows that the profiles of the reflected and transmitted waves differ. In particular, when the object is allowed to move, the reflected and transmitted waves are preceded by a depression wave that is not present when the object is fixed. § EXACT EXPRESSIONS OF THE QUANTITIES INVOLVED IN THE WAVE-STRUCTURE MODELS We have shown in <ref> that the wave-structure interaction equations can be formulated as a transmission problem between the two components ^-=(-∞,-ℓ) and ^+=(ℓ,∞) of the fluid domain, with transmission conditions involving the vertical displacement δ of the object and the average horizontal discharge q_ i under the object; we have also shown that, in the augmented formulation, δ and q_ i are found through the resolution of the 7-dimensional ODE (<ref>) on Θ:=(q_ i,δ̇,,,δ,,)^ T that we wrote in abstract form as d/dtΘ=𝒢(Θ, (R_1)_+,(R_1)_-,F_ ext), where 𝒢 is a smooth function of its arguments. The goal of this section is to derive the explicit expression of the mapping 𝒢 which is used for our numerical computations. We first provide in <ref> the explicit expression of various coefficients that appear in the wave-structure equations and then derive in <ref> the explicit expression of the mapping 𝒢 in the most general case. We then point out the simplifications that can be performed when the object is in fixed or forced motion (in <ref>) and when the system has a symmetry with respect to the vertical axis {x=0} (in <ref>). N.B. We recall that for the sake of simplicity, we assume throughout this article that h_ eq(x) is an even function. §.§ Explicit expressions of some coefficients We did not make explicit in the main text of the article most of the constants that appear in the wave-structure interaction equations studied in this paper and derived in <cit.> because they were not relevant for the mathematical and numerical analysis of these equations. Of course, they are necessary for realistic simulations of wave-structure interactions and we provide them here. Let us first recall that the configuration under consideration is a floating object with vertical sidewalls located at x=±ℓ and that can only move in the vertical direction. In dimensionless variables, we denote by h_ eq(x) the water depth below the object at equilibrium and by δ(t) the displacement of the object at time t from its equilibrium position, so that the water depth under the object at time t is h_ eq(x)+δ(t). The dimensionless mass m of the object can be defined through Archimedes' principle, m=1/2ℓ∫_-ℓ^ℓ (1-h_ eq) and the formulas below will also involve two scalar functions α(δ) and β(δ) defined as α(δ)=1/2ℓ∫_-ℓ^ℓ1/h_ eq(x)+δ dx β(δ)=1/21/2ℓ∫_-ℓ^ℓx^2/(h_ eq(x)+δ)^2 dx; the quantity τ_κ(δ)^2 that appears in Newton's equation is given by τ_κ(δ)^2=3κ^2 m +1/2ℓ∫_-ℓ^ℓx^2/h_ eq+δ dx+κ^2 1/h_ eq+δ. §.§ The general case As seen in (<ref>), the first four components of Θ satisfy ℳ[δ,] d/dt[ q_ i; δ̇; ; ] + [ 1/2ℓ; δ-; ; ] = 𝔔[δ,](q_ i,δ̇,) + [ 0; F_ ext; (R_1 )_+; (R_1 )_- ] where :=1/2(+), :=-, and ℳ[δ,] is the invertible matrix ℳ[δ,]:= ( [ α(δ) 0 κ^2/2 ℓ1/_+ - κ^2/2 ℓ1/_-; 0 τ_κ(δ)^2 - 1/2κ^2/_+ - 1/2κ^2/_-; -κ ℓκ κ^2 0; κ ℓκ 0 κ^2 ]), with _±=1+_±; simple computations also show that the quadratic term 𝔔[δ,] is of the form 𝔔[δ,]=( 𝔔_ i[δ,],𝔔_δ[δ,],𝔔_+[δ,],𝔔_-[δ,])^ T with (writing simply 𝔔_ i=𝔔_ i[δ,], etc.)
𝔔_ i(q_ i,δ̇,) =-α'(δ)q_ iδ̇-1/4ℓ[ ( -ℓδ̇+q_ i/_+)^2-(ℓδ̇+q_ i/_-)^2], 𝔔_δ(q_ i,δ̇,) = β(δ)δ̇^2+1/2α'(δ)q_ i^2+1/4[ ( -ℓδ̇+q_ i/_+)^2+(ℓδ̇+q_ i/_-)^2], 𝔔_+(q_ i,δ̇,) =1/2^2-1/_+(-ℓδ̇+q_ i)^2, 𝔔_-(q_ i,δ̇,) = 1/2^2-1/_-(ℓδ̇+q_ i)^2. The matrix ℳ is a 4× 4 matrix whose inverse is quite complicated; we therefore transform it into a block-triangular matrix by multiplying (<ref>) by the matrix ( [ 1 0 -1/2 ℓ1/_+ 1/2 ℓ1/_-; 0 1 1/21/_+ 1/21/_-; 0 0 1 0; 0 0 0 1 ]); the resulting equation takes the form ℳ[δ,] d/dt[ q_ i; δ̇; ; ] + [ 0; δ; ; ] = 𝔔(q_ i,δ̇,) + [ -1/2ℓ1/R_1; 1/R_1+F_ ext; (R_1 )_+; (R_1 )_- ] where the matrix ℳ[δ,] is block-triangular, ℳ[δ,]:= ( [ α(δ)+κ/ℓ1/ -κ/21/ 0 0; -κ/21/ τ_κ(δ)^2+κℓ1/ 0 0; -κ ℓκ κ^2 0; κ ℓκ 0 κ^2 ]), and the components 𝔔_ i, 𝔔_δ and 𝔔_± are given by 𝔔_ i(q_ i, δ̇,)=- 1/4ℓ1/^2 +1/4ℓ1/^2δ̇^2 +1/4ℓ1/^2q_ i^2 -(α'(δ)+1/^2)δ̇q_ i, 𝔔_δ(q_ i, δ̇,)= 1/21/^2 +(β(δ)-1/2ℓ^2 1/^2)δ̇^2 +1/2(α'(δ)-1/^2)q_ i^2 +1/2ℓ1/^2δ̇q_ i, and 𝔔_+(q_ i,δ̇,) = -1/2^2 - 1/_+q_ i^2 -ℓ^2 1/_+δ̇^2 +2ℓ1/_+q_ iδ̇, 𝔔_-(q_ i,δ̇,) = -1/2^2 - 1/_-q_ i^2 -ℓ^2 1/_-δ̇^2 -2ℓ1/_-q_ iδ̇. Since by definition of 𝒢 one has d/dt( q_ i, δ̇,, )^ T=𝒢_ I(Θ,(R_1)_±,F_ ext) with the notation 𝒢_ I:=( 𝒢_1, 𝒢_2,𝒢_3,𝒢_4)^ T; we deduce from (<ref>) that 𝒢_ I (Θ,(R_1)_±,F_ ext) =ℳ[δ,]^-1[ -[ 0; δ; ; ] + 𝔔(q_ i,δ̇,) + [ -1/2ℓ1/R_1; 1/R_1 +F_ ext; (R_1 )_+; (R_1 )_- ]], while 𝒢_5(Θ)=δ̇, 𝒢_6(Θ)=, 𝒢_7(Θ)=. For the numerical computations, we use the explicit expression for the inverse of the matrix ℳ[δ,], namely, ℳ[δ,]^ -1= ( [ -4/D(τ_κ(δ)^2+κℓ1/) -2κ/D1/ 0 0; -2κ/D1/ - 4/D(α(δ)+κ/ℓ1/) 0 0; -4/κ D(τ_κ(δ)^2+κℓ1/_-) 4/κ D( κ1/_-+ℓα(δ)) 1/κ^2 0; 4/κ D(τ_κ(δ)^2+κℓ1/_+) 4/κ D( κ1/_++ℓα(δ)) 0 1/κ^2 ]) with D=-4(α(δ)+κ/ℓ1/)×(τ_κ(δ)^2+κℓ1/)+κ^2 1/^2. §.§ The case of an object fixed or in forced motion When the object is fixed or in forced motion, the position of the center of mass is known and δ is therefore equal to some given function δ_ forced (δ_ forced≡ 0 if the solid is fixed). The ODE (<ref>) can be reduced to an ODE on ^5 instead of ^7. The variable Θ now stands for Θ:=(q_ i,,,,)^ T and (<ref>) can be simplified into ℳ_ forced[t,] d/dt[ q_ i; ; ] + [ 0; ; ] = 𝔔_ forced[t,](q_ i,) + [ -1/2ℓ1/R_1; (R_1 )_+; (R_1 )_- ] +[ 1/2κ1/; -ℓκ; -ℓκ ]δ̈_ forced, where the matrix ℳ_ forced[t,] is block-triangular, ℳ_ forced[t,]:= ( [ α(δ_ forced)+κ/ℓ1/ 0 0; -κ κ^2 0; κ 0 κ^2 ]), and 𝔔_ forced[t,](q_ i,):= [ 𝔔_ i[δ_ forced,](q_ i,δ̇_ forced,); 𝔔_+[δ_ forced,](q_ i,δ̇_ forced,); 𝔔_-[δ_ forced,](q_ i,δ̇_ forced,) ], and 𝔔_ i and 𝔔_± as in the previous section. We have made explicit the dependence of ℳ_ forced and 𝔔_ forced on the time variable t because δ_ forced is now an explicit function of time, and is now an non autonomous contribution to the ODE for Θ (except of course if the object is fixed, in which case δ_ forced≡ 0). With 𝒢_ I now being three dimensional (but with an extra dependence on t), 𝒢_ I:=( 𝒢_1, 𝒢_2,𝒢_3)^ T, we have therefore d/dt( q_ i,_+, _- )^ T=𝒢_ I(t,Θ,(R_1)_±) and 𝒢_ I (t,Θ,(R_1)_±) =ℳ_ forced[t,]^-1 × [ -[ 0; ; ] + 𝔔_ forced[t,](q_ i,) + [ -1/2ℓ1/R_1; (R_1 )_+; (R_1 )_- ] +[ 1/2κ1/; -ℓκ; -ℓκ ]δ̈_ forced], while 𝒢_4(Θ)=, 𝒢_5(Θ)= and M_ forced[t,]^-1=[ 1/α+κ/ℓ1/ 0 0; 1/κ1/α+κ/ℓ1/ 1/κ^2 0; -1/κ1/α+κ/ℓ1/ 0 1/κ^2 ]. The second component of (<ref>) (evolution equation on δ) does not appear any longer in (<ref>) but remains of course valid. 
It can be used to answer the following control problem: what external force should we apply to the object so that the vertical displacement of its center of gravity coincides with δ_ forced? The answer is explicitly given (writing 𝔔_δ=𝔔_δ[δ_ forced,], etc.) F_ ext= δ-1/hR_1-𝔔_δ(q_ i,δ̇_ forced,) -1/4D_ forced/α(δ_ forced)+κ/ℓ1/hδ̈_ forced -1/2κ/α(δ_ forced)+κ/ℓ1/h1/h[ 𝔔_ i(q_ i,δ̇_ forced,)-1/2ℓ1/hR_1], where D_ forced is deduced from the expression given for D in (<ref>) by substituting δ_ forced to δ. §.§ Simplifications in the symmetric case When the object is symmetric with respect to the vertical axis {x=0} (i.e. if h_ eq is an even function), as assumed throughout this article, it is possible to consider symmetric flows for which ζ is an even function, q is odd, and q_ i≡ 0 (such conditions are propagated by the equations from the initial data). This is for instance the case for waves generated by a floating object in a fluid initially at rest. By symmetry, the augmented transmission problem (<ref>)-(<ref>) reduces to an augmented initial boundary value problem on the half-line ^+=(ℓ,∞), ζ+ q=0, (1-κ^2^2) q +( 1/hq^2)+hζ=0 t>0, x∈ (ℓ,∞) with boundary condition q_|_x=ℓ=-ℓδ̇, where δ is a function of time determined by the first order ODE d/dtΘ=𝒢(Θ, (R_1)_+,F_ ext), with Θ:=(δ̇,,δ,)^ T and where 𝒢=(𝒢_1,𝒢_2,𝒢_3,𝒢_4)^ T is given by [ 𝒢_1; 𝒢_2 ] = ℳ_ sym[δ,]^-1[ -[ δ; ]+[ 𝔔_δ^ sym[δ,](δ̇,); 𝔔_+^ sym[δ,](δ̇,) ] + [ 1/h_+(R_1)_++F_ ext; (R_1)_+ ]] and 𝒢_3=δ̇, 𝒢_4=, and with ℳ_ sym[δ,]= [ τ_κ(δ)^2+κℓ1/h_+ 0; ℓκ κ^2 ], while 𝔔_δ^ sym and 𝔔_+^ sym are obtained by replacing q_ i=0 and = in the formula derived above for 𝔔_δ and 𝔔_δ. Two particular physical situations of particular interest fit into the symmetric framework and are investigated in this paper: * The return to equilibrium. An object is released from an out of equilibrium position in a fluid initially at rest. This situation is described by (<ref>)-(<ref>) with F_ ext=0 and initial conditions (ζ,q)_|_t=0=(0,0) and (δ̇,ζ̇_+,δ,ζ_+)_|_t=0=(0,0,δ^ in,0). * Wave generation. Waves are generated in a fluid initially at rest by moving the object up and down with a prescribed motion δ_ forced. The problem then reduces to an initial boundary value problem with boundary condition on q, namely, q_|_x=ℓ=g, with g=-ℓδ̇_ forced. This boundary data is explicitly given and does not require the resolution of a first order ODE as the other problems considered here. § ACKNOWLEDGMENT D. L. was partially supported by the grant ANR-18-CE40-0027-01 Singflows. 00 talbot_euler J. Abate, W. Ward, A Unified Framework for Numerically Inverting Laplace Transforms, INFORMS Journal of Computing 18(4) (2006) 408–421 ABZ T. Alazard, N. Burq, C. Zuily, Low regularity Cauchy theory for the water-waves problem: canals and swimming pools, Journées équations aux dérivées partielles (2011) 1–20. Babarit A. Babarit, Ocean wave energy conversion: resource, technologies and performance, Elsevier (2017). BeckLannes G. Beck, D. Lannes, Freely floating objects on a fluid governed by the Boussinesq equations, Ann. Inst. H. Poincaré Anal. Non Linéaire 39 (2022) 575–646. BianchiniPerrin R. Bianchini, C. Perrin, Soft congestion approximation to the one-dimensional constrained Euler equations, Nonlinearity 34(10) (2021) 6901. BEER U. Bosi, A. P. Engsig-Karup, C. Eskilsson, M. Ricchiuto, A spectral/hp element depth-integrated model for nonlinear wave–body interaction, Comput. Methods Appl. Mech. Eng. 348 (2019) 222–249. Bocchi E. 
Bocchi, Floating structures in shallow water: local well-posedness in the axisymmetric case, SIAM J. Math. Anal. 52 (2020) 306–339. Bocchi2 E. Bocchi, On the return to equilibrium problem for axisymmetric floating structures in shallow water, Nonlinearity 33 (2020) 3594. Bocchi3 E. Bocchi, J. He, G. Vergara-Hermosilla, Modelling and simulation of a wave energy converter, ESAIM Proc. 70 (2021) 68–83 BLM D. Bresch, D. Lannes, G. Métivier, Waves interacting with a partially immersed obstacle in the Boussinesq regime, Anal. PDE 14 (2021) 1085–1124. Cummins W. Cummins, The Impulse Response Function and Ship Motions, Report (David W. Taylor Model Basin), Navy Department, David Taylor Model Basin, 1962. DalibardPerrin A.-L. Dalibard, C. Perrin, Partially congested propagation fronts in one-dimensional Navier-Stokes equations, J. Elliptic Parabol Equ. 7 (2021) 491–507. EL A. P. Engsig‐Karup, W. L. Laskowski, An efficient p‐multigrid spectral element model for fully nonlinear water waves and fixed bodies, Int. J. Numer. Methods Fluids 93 (2021) 2823–2841. Uhaina A. Filippini, S. de Brye, V. Perrier, F. Marche, M. Ricchiuto, D. Lannes, P. Bonneton, UHAINA : A parallel high performance unstructured near-shore wave model, Journées Nationales Génie Côtier – Génie Civil, May 2018, La Rochelle, France. 47–56, GPSW1 E. Godlewski, M. Parisot, J. Sainte-Marie, F. Wahl, Congested shallow water model: roof modeling in free surface flow, ESAIM: Math. Model. Numer. Anal. 52 (2018) 1679–1707. GPSW2 E. Godlewski, M. Parisot, J. Sainte-Marie, F. Wahl, Congested shallow water model: on floating body, SMAI J. Comput. Math. 6 (2020) 227–251. GKC O. I. Gusev, G. S. Khakimzyanov, L. B. Chubarov, Numerical investigation of the wave force on a partially immersed rectangular structure: Long waves over a flat bottom, Ocean Eng. 221 (2021) 108540. Haidar A. Haidar, F. Marche, F. Vilar, A robust DG-ALE formulation for nonlinear shallow water interactions with a partially immersed object, preprint hal-03764650 (2022) IguchiLannes T. Iguchi, D. Lannes, Hyperbolic free boundary problems and applications to wave-structure interactions, Indiana Univ. Math. J. 70 (2021) 353–464. John1 F. John, On the motion of floating bodies. I, Commun. Pure Appl. Math. 2 (1949) 13–57. Karambas T. Karambas, E. Loukogeorgaki, A Boussinesq-type model for nonlinear wave-heaving cylinder interaction, Energies 15 (2022) 469. LannesModeling D. Lannes, Modeling shallow water waves, Nonlinearity 33 (2020) R1 LannesBressanone D. Lannes, Initial boundary value problems for hyperbolic systems, and dispersive perturbations, Lecture notes of the Bressanone Winter School, to appear Lannes_floatD. Lannes, On the dynamics of floating structures, Ann. PDE 3 (2017). LannesMetivier D. Lannes, G. Métivier, The shoreline problem for the one-dimensional shallow water and Green-Naghdi equations, J. Ec. Polytech. Math. 5 (2018) 455–518. LannesWeynans D. Lannes, L. Weynans, Generating boundary conditions for a Boussinesq system, Nonlinearity 33 (2020) 6868. Leveque R.J. LeVeque, Numerical Methods for Conservation Laws, Birkhauser-Verlag (1990). Maity D Maity, J San Martín, T Takahashi, M Tucsnak, Analysis of a simplified model of rigid structure floating in a viscous fluid, J. Nonlinear Sci. 29 (2019) 1975–2020. MingWang M. Ming, C. Wang, Water‐Waves Problem with Surface Tension in a Corner Domain II: The Local Well‐Posedness, Commun. Pure Appl. Math. 74 (2021) 225–285. MIS S. C. Mohapatra, H. Islam, C. 
Guedes Soares, Boussinesq model and CFD simulations of non-linear wave diffraction by a floating vertical cylinder, J. Mar. Sci. Eng. 8 (2020) 575 Parisot M. Parisot, Congested shallow water model: trapped air pockets modeling, preprint hal-03748169 (2022). PGR M. Penalba, G. Giorgi, J. Ringwood, Mathematical modelling of wave energy converters: A review of nonlinear approaches, Renew. Sustain. Ener. Rev. 78 (2017) 1188–1207. PerrinSaleh C. Perrin, K. Saleh, Numerical staggered schemes for the free-congested Navier-Stokes equations, SIAM J. Numer. Anal. 60 (2022) 1824–1852. Poyferre T. de Poyferré, A priori estimates for water waves with emerging bottom, Arch. Ration. Mech. Anal. 232 (2019) 763–812. Funwave F. Shi, J. T. Kirby, J. C. Harris, J. D. Geiman, S. T. Grilli, A high-order adaptive time-stepping tvd solver for Boussinesq modeling of breaking waves and coastal inundation, Ocean Model. 43-44 (2012) 36–51. SuTucsnak P. Su, M. Tucsnak, Shallow water waves generated by a floating object: a control theoretical perspective, Math. Control Relat. Fields 16 (2021). WeiKirbySinha G. Wei G, J. Kirby, A. Sinha, Generation of waves in Boussinesq models using a source function method, Coast. Eng. 36 (1999) 271–99.
http://arxiv.org/abs/2307.03289v1
20230706210026
A co-kurtosis PCA based dimensionality reduction with nonlinear reconstruction using neural networks
[ "Dibyajyoti Nayak", "Anirudh Jonnalagadda", "Uma Balakrishnan", "Hemanth Kolla", "Konduri Aditya" ]
physics.flu-dyn
[ "physics.flu-dyn", "physics.comp-ph" ]
a]Dibyajyoti Nayak a]Anirudh Jonnalagadda b]Uma Balakrishnan b]Hemanth Kolla a]Konduri Adityacor1 konduriadi@iisc.ac.in [cor1]Corresponding Author [a]Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, India [b]Sandia National Laboratories, Livermore, California, USA For turbulent reacting flows, identification of low-dimensional representations of the thermo-chemical state space is vitally important, primarily to significantly reduce the computational cost of device-scale simulations. Principal component analysis (PCA), and its variants, is a widely employed class of methods. Recently, an alternative technique that focuses on higher-order statistical interactions, co-kurtosis PCA (CoK-PCA), has been shown to effectively provide a low-dimensional representation by capturing the stiff chemical dynamics associated with spatiotemporally localized reaction zones. While its effectiveness has only been demonstrated based on a priori analysis with linear reconstruction, in this work, we employ nonlinear techniques to reconstruct the full thermo-chemical state and evaluate the efficacy of CoK-PCA compared to PCA. Specifically, we combine a CoK-PCA/PCA based dimensionality reduction (encoding) with an artificial neural network (ANN) based reconstruction (decoding) and examine a priori the reconstruction errors of the thermo-chemical state. In addition, we evaluate the errors in species production rates and heat release rates that are nonlinear functions of the reconstructed state as a measure of the overall accuracy of the dimensionality reduction technique. We employ four datasets to assess CoK-PCA/PCA coupled with ANN-based reconstruction: a homogeneous reactor for autoignition of an ethylene/air mixture that has conventional single-stage ignition kinetics, a dimethyl ether (DME)/air mixture which has two-stage ignition kinetics, a one-dimensional freely propagating premixed ethylene/air laminar flame, and a two-dimensional homogeneous charge compression ignition of ethanol. The analyses demonstrate the robustness of the CoK-PCA based low-dimensional manifold with ANN reconstruction in accurately capturing the data, specifically from the reaction zones. Dimensionality reduction Principal component analysis Co-kurtosis tensor Deep neural networks Reconstruction § INTRODUCTION The multi-scale, multi-physics nature of turbulent reacting flows necessitate the use of high-fidelity simulations to accurately model chemical kinetics and turbulence-chemistry interactions. When representing chemical kinetics using first principles, e.g., direct numerical simulations with detailed kinetics, the governing system of equations has large dimensionality due to tens of chemical species participating in hundreds of chemical reactions <cit.>. As a result, the computational costs become prohibitively expensive for simulations of practical device-scale problems. Indeed, as the chemistry calculations associated with even the simplest of reaction mechanisms present themselves as the main driver of the large computational cost <cit.>, reduced order modeling techniques become invaluable. With the advent of data-driven techniques, lower-dimensional manifold (LDM) representations of the thermo-chemical subspace, identified from relevant training data, can effectively model the species dynamics of an otherwise large chemical system. Among the various available strategies to obtain these LDMs, principal component analysis (PCA) and its many flavors have been most widely employed <cit.>. 
However, the principal components obtained by PCA are optimized with respect to second-order joint statistical moment, covariance, of the training data and may not be sensitive to the presence of extreme-valued samples characteristic of localized spatiotemporal events such as the formation of ignition kernels <cit.>. In contrast, the statistical signature of such events is shown to be favorably captured by principal components of higher-order joint statistical moments, specifically the fourth-order co-kurtosis tensor <cit.>. Building upon this observation, a dimensionality reduction procedure that constructs LDMs represented by principal components of the co-kurtosis tensor, namely the co-kurtosis PCA (CoK-PCA) method, was proposed <cit.>. Additionally, analogous to PCA, a recently proposed online low-rank approximation algorithm known as dynamically bi-orthogonal decomposition (DBO), which is based on time-dependent low-dimensional subspaces, has been shown to effectively characterize strongly transient events in turbulent compressible reacting flows <cit.>. It is noteworthy that, while the CoK-PCA method was shown to represent the thermo-chemical state as well as nonlinear functions of the thermo-chemical state, such as species production rates (PRs) and heat release rates (HRRs), better than PCA in the localized spatiotemporal regions corresponding to strong chemical activity, the transformation from the principal components of the LDM to the full thermo-chemical state was performed through linear operators. However, due to the inherent nonlinear nature of the combustion phenomenon, the use of linear reconstruction has long been known not to be sufficiently accurate. Thus, the main objective of the present study is to address these concerns by studying the CoK-PCA method with nonlinear reconstruction techniques and comparing the accuracy relative to both PCA and a simple linear reconstruction. For PCA-based LDMs, several studies have explored nonlinear reconstruction techniques such as artificial neural networks (ANNs), kernel methods, Gaussian process regression (GPR), and their hybrid approaches <cit.>. Nonlinear reconstruction using ANN models provides flexibility to capture complex relationships, scalability for large datasets, meaningful representation learning, robustness to noise and irregularities, and the ability to generalize well to unseen data <cit.>. Therefore, within the confines of this paper, our primary emphasis is directed toward nonlinear reconstruction utilizing ANN. In this study, we will compare the reconstruction performance of ANNs with linear methods <cit.> for thermo-chemical scalars, species production rates, and heat release rates. By contrasting the outcomes of ANN-based reconstruction with those achieved through linear techniques <cit.>, we aim to evaluate the efficacy and superiority of nonlinear approaches in accurately capturing and predicting these important combustion variables. Following Jonnalagadda et al. 
<cit.>, the quality of the CoK-PCA-based/PCA-based encoder and ANN-based decoder models, hereafter called the CoK-PCA-ANN and PCA-ANN models, respectively, will be compared via the conventionally considered reconstruction errors of the thermo-chemical scalars as well as the more sensitive PRs and HRRs for four combustion datasets, namely, premixed ethylene-air in a homogeneous reactor, two-stage autoignition of dimethyl ether (DME)-air, a one-dimensional freely-propagating laminar flame of premixed ethylene-air, and a homogeneous charge compression ignition of an ethanol-air mixture. The remainder of this paper is organized as follows. In Sec. <ref>, we briefly illustrate the dimensionality reduction procedure and outline the PCA and the CoK-PCA methods to obtain low-dimensional manifolds (LDMs). Section <ref> describes the artificial neural network (ANN) based nonlinear reconstruction procedure to predict the thermo-chemical scalars from principal components of the LDMs. The results from a priori analysis to evaluate the performance of the two LDMs based on ANN reconstruction are presented in Sec. <ref>. Finally, we summarize the paper and provide future directions in Sec. <ref>. § DIMENSIONALITY REDUCTION Following convention, we arrange the scaled training data as a matrix 𝐗 ∈ R^(n_g× n_v) with n_g observations (e.g., spatial locations, temporal checkpoints) each having n_v real-valued variables or features (e.g., species concentrations, temperature). With respect to the feature space, 𝐗 can be represented in terms of column vectors as 𝐗 = { x_i ∈R^(n_g× 1) ∀ i ∈{ 1, ⋯, n_v}}. The purpose of dimensionality reduction, within the context of combustion, is to find a column subspace of dimension n_q < n_v, representing an LDM of the feature space by some measure of optimality. Note that dimensionality reduction could also denote techniques that seek an optimal row subspace, which reduces the size of n_g, but our interest here is strictly in reducing n_v. §.§ Principal component analysis (PCA) based low-dimensional manifold For PCA, the principal vectors align in the directions of maximal variance as captured by the second-order data covariance matrix, 𝐂∈R^(n_v × n_v), represented using index notation as: (𝐂)_ij≡ C_ij = E(x_i x_j), i,j ∈{ 1, ⋯, n_v}, where E is the expectation operator. The required principal vectors (𝐀) are the eigenvectors of the covariance matrix obtained through an eigenvalue decomposition, 𝐂 = 𝐀𝐋𝐀^T. It should be noted that the data used in the definition of joint moments is assumed to be centered around the mean. §.§ Co-kurtosis tensor based low-dimensional manifold Similarly, with the higher-order moment of interest, i.e., the fourth-order co-kurtosis tensor, the principal vectors represent the directions of maximal kurtosis in the data. The co-kurtosis tensor is defined as: T_ijkl = E(x_i x_j x_k x_l), i,j,k,l ∈{ 1, ⋯, n_v}. By drawing an analogy to independent component analysis (ICA) <cit.>, for a non-Gaussian data distribution, the fourth-order cumulant tensor, i.e., the co-kurtosis 𝐊, is computed by subtracting the excess variance, given as: K_ijkl = T_ijkl - C_ij C_kl - C_ik C_jl - C_il C_jk. Again, note that as the data is centered around the mean, only the second-moment terms appear in the evaluation of the cumulant tensor. The next step involves a suitable decomposition of the co-kurtosis tensor 𝐊 to obtain the required principal components.
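To make the moment definitions above concrete, the following NumPy sketch (ours, for illustration only; the function and variable names are not from the reference implementation) assembles the covariance matrix, its eigendecomposition, and the co-kurtosis cumulant tensor from a mean-centered, scaled data matrix X of shape (n_g, n_v); it is written for clarity rather than memory efficiency.

```python
import numpy as np

def joint_moments(X):
    """Covariance matrix C and co-kurtosis cumulant tensor K of a
    mean-centered, scaled data matrix X of shape (n_g, n_v)."""
    n_g, n_v = X.shape
    # Second-order joint moment: C_ij = E(x_i x_j)
    C = (X.T @ X) / n_g
    # Fourth-order joint moment: T_ijkl = E(x_i x_j x_k x_l)
    T = np.einsum('gi,gj,gk,gl->ijkl', X, X, X, X) / n_g
    # Excess-variance correction yields the cumulant (co-kurtosis) tensor
    K = (T
         - np.einsum('ij,kl->ijkl', C, C)
         - np.einsum('ik,jl->ijkl', C, C)
         - np.einsum('il,jk->ijkl', C, C))
    return C, K

def pca_vectors(C):
    """Principal (variance) vectors as columns of A, sorted by decreasing eigenvalue."""
    L, A = np.linalg.eigh(C)      # eigh returns eigenvalues in ascending order
    order = np.argsort(L)[::-1]
    return A[:, order], L[order]
```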
Directly computing the higher-order joint moment tensors is expensive due to the curse of dimensionality, i.e., in our case for the co-kurtosis tensor, computational complexity would be n_v^4 where n_v is the number of features. The symmetric nature of the co-kurtosis tensor can be leveraged to result in roughly half of n_v^4 computations. However, the existing well-defined matrix decomposition techniques cannot be directly extended to higher-order tensors. Therefore, alternate tensor decomposition methods, such as symmetric canonical polyadic (CP), higher order singular value decomposition (HOSVD), etc., should be explored to obtain the principal kurtosis vectors and values. Following <cit.> and <cit.>, Aditya et al. <cit.> showed that the cumulant tensor 𝐊 could be reshaped into a n_v × n_v^3 matrix 𝐓 following which the principal vectors 𝐔 are determined from the SVD of 𝐓 = 𝐔𝐒𝐕^T. After obtaining the principal components, we can reduce the dimensionality of the original data by projecting it onto a low-dimensional manifold. This is typically performed by selecting the most informative subset of principal vectors to project 𝐗∈R^(n_g × n_v) onto the reduced space represented as 𝐙_q ∈R^(n_g × n_q), where n_q (<n_v) corresponds to the number of principal vectors retained. The conventional forward projection procedure in PCA employs a simple matrix transformation, 𝐙_q = 𝐗𝐀_q, where 𝐀_q ∈R^(n_v × n_q) represents the truncated subset of principal vectors (eigenvectors of the covariance matrix). For CoK-PCA, we obtain 𝐀_q as the n_q leading left singular vectors of 𝐔 as described above. The contrast between PCA and CoK-PCA has been illustrated using a synthetic bivariate dataset with a few extreme-valued samples collectively representing anomalous events <cit.>. It was observed that while the first PCA principal vector aligned in the direction of maximal variance, the first CoK-PCA principal vector aligned itself in the direction of the anomalous cluster, supporting the hypothesis that CoK-PCA is more sensitive to extreme-valued samples than PCA. § RECONSTRUCTION METHODOLOGY To assess the quality of the reduced manifold, we need to evaluate the reconstruction accuracy of the original state space from the low-dimensional subspace. Note that errors in the reconstructed variables are incurred at two stages: while projecting data into the low-dimensional space and during the reconstruction. §.§ Linear reconstruction The standard procedure of obtaining the original thermo-chemical state is a linear reconstruction through a matrix inversion, given as: 𝐗_q = 𝐙_q𝐀_q^T, where 𝐗_q denotes the reconstructed data in the original feature space. Now, a comparison between 𝐗_q and 𝐗 would provide a quantitative measure of the quality of the two reduced manifolds obtained by CoK-PCA and PCA, respectively. Jonnalagadda et al. <cit.> analyzed the maximum and average values of the absolute reconstruction error (ε = |𝐗 - 𝐗_q|), ε_m = max(ε) and ε_a = mean(ε), respectively to quantify the accuracy in each reconstructed variable. Specifically, they examined the error ratio, r_i = ln{ε_i^PCA/ε_i^CoK-PCA}, to analyze the performance of CoK-PCA relative to PCA; the subscript i can represent either the maximum (r_m) or average (r_a) errors. 
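Continuing the illustrative sketch above (again under our own naming, not the authors' code), the matricization of the cumulant tensor, the SVD yielding the principal kurtosis vectors, the forward projection onto the n_q retained vectors, the linear reconstruction, and the error ratio can be written as follows.

```python
def cokpca_vectors(K):
    """Principal kurtosis vectors from the n_v x n_v^3 unfolding of the cumulant tensor."""
    n_v = K.shape[0]
    T_mat = K.reshape(n_v, n_v**3)                 # mode-1 matricization
    U, S, _ = np.linalg.svd(T_mat, full_matrices=False)
    return U, S                                    # columns of U are the principal vectors

def project(X, A, n_q):
    """Forward projection of the scaled data onto the n_q leading principal vectors."""
    A_q = A[:, :n_q]
    return X @ A_q, A_q                            # score matrix Z_q and truncated basis

def linear_reconstruct(Z_q, A_q):
    """Linear reconstruction X_q = Z_q A_q^T."""
    return Z_q @ A_q.T

def error_ratio(eps_pca, eps_cok):
    """r = ln(eps_PCA / eps_CoK-PCA); positive values favor CoK-PCA."""
    return np.log(eps_pca / eps_cok)
```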
§.§ Nonlinear reconstruction through ANNs It is clear that while CoK-PCA exhibits improved accuracy in capturing stiff dynamics compared to PCA <cit.>, both methods incur significant errors when employing a linear reconstruction of the original thermo-chemical state from the reduced manifold, particularly for an aggressive truncation (low n_q). Therefore, to fully establish the efficacy of CoK-PCA relative to PCA in capturing stiff dynamics, it is imperative to investigate its performance when coupled with a nonlinear reconstruction approach. In this paper, we employ fully-connected deep neural networks to accomplish the required nonlinear reconstruction task. Since strong dependencies or relationships exist between different thermo-chemical scalars, it is appropriate to consider a fully-connected network where every subsequent layer is fully connected with the previous layer, ensuring the flow of information (of dependencies) across the network. In this regard, we also hypothesize that the use of skip connections, i.e., introducing a form of regularization in deeper networks by allowing some layer outputs to bypass intermediate layers, would not be suitable. However, it should be noted that using artificial neural networks (ANNs) is an intuitive choice. Alternate nonlinear regression methods, such as Gaussian process regression (GPR), polynomial regression, least squares, etc., exist and can be incorporated in a manner similar to that described in this study. With significant advancements in deep learning in recent times, ANNs have proven their potential to model highly complex nonlinear relationships between any set of inputs and outputs. The goal of an ANN, or specifically a deep feedforward neural network, is to approximate some underlying function f^*. For example, for a classifier, 𝐲 = f^*(𝐱) maps an input 𝐱 to a category 𝐲; more generally, in regression problems, 𝐱 is a vector of real numbers and 𝐲 is the output of a vector-valued function. A feedforward network defines a mapping 𝐲 = f(𝐱; θ) and learns the value of the parameters θ that result in the best function approximation. The nonlinear reconstruction step in a dimensionality reduction algorithm can be viewed as a nonlinear mapping from the reduced manifold (or input PCs) to the original feature space (or output features). We leverage the property of ANNs being universal function approximators <cit.> to achieve this task. Consider a reduced data representation of the original state space 𝐗 given by the score matrix, 𝐙_q = 𝐗𝐀_q, where 𝐀_q ∈R^(n_v × n_q) comprises the chosen subset of principal vectors (kurtosis or variance). Now, the objective is to use an ANN to predict (or reconstruct) 𝐗_q from 𝐙_q, where 𝐗_q represents the reconstructed data in the original feature space, which should be as close to 𝐗 as possible. This is a supervised learning problem where, for every k^th feature vector from (the k^th row of) the design matrix 𝐙_q, z_k*∈R^n_q, the network should accurately predict the target vector (the k^th row of 𝐗) x_k*∈R^n_v, i.e., the ANN should provide the mapping z_k*↦ x_k*,   ∀ k ∈{1,2, ⋯ ,n_g}. In other words, the goal of training the neural network is to drive its prediction 𝐗_q to match 𝐗. Since this is a regression problem, we evaluate the performance or accuracy of the model using a mean squared error (MSE) loss defined as: ℒ_MSE = (1/m) ∑_k=1^m (x̂_k* - x_k*)^2, where x̂_k*, x_k*, and m are the model prediction, ground truth, and the number of samples, respectively.
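As a minimal illustration of this objective (assuming some decoder callable that maps the reduced space back to the full state; the name is a placeholder, not an actual model from this work), the MSE evaluation reads:

```python
import numpy as np

def mse_loss(decoder, Z_q, X):
    """MSE between decoded predictions and the scaled original state.

    decoder : any callable mapping an (m, n_q) array to an (m, n_v) array,
              e.g. a trained neural network (hypothetical placeholder here).
    """
    X_hat = decoder(Z_q)
    return np.mean((X_hat - X) ** 2)
```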
Note that m can differ from n_g depending on how the entire dataset is split into training and test sets. ANNs or feedforward networks are typically represented by composing together many different functions <cit.>. The model can be viewed as a directed acyclic graph describing how the functions are composed. For example, we might have three different functions f^(1), f^(2), and f^(3) connected in a chain, to form f(x) = f^(3)(f^(2)(f^(1)(x))). These chain-like structures form the foundation of neural networks. Each function f^(i) corresponds to a hidden layer of the network. The overall length of the chain gives the depth of the network. The final layer of the network is the output layer. Each hidden layer of the network is generally vector-valued. Every vector element can be interpreted as playing a role analogous to that of a neuron. The dimensionality of the hidden layers (number of neurons) determines the width of the layer. In other words, a layer can be viewed as consisting of many units (neurons) that act in parallel, each representing a vector-to-scalar function. Each connection to a unit in a hidden layer is associated with a weight w and a bias b. These weights and biases parameterize the function f^(i) for each hidden layer. The simplest feedforward neural network computes the output of a unit by a linear combination of all weights and biases associated with it. After that, a nonlinear activation acts on this output and is responsible for inducing the required nonlinearity in the network approximation. Commonly used nonlinear activations are sigmoid, hyperbolic tangent (Tanh), rectified linear unit (ReLU), Leaky ReLU, etc. From our numerous experiments, we found that the usage of Tanh provides better stability, robustness, and smoother training of the network than ReLU, effectively handles vanishing gradients, and exhibits minimal sensitivity to different random seeds. Additionally, Tanh can map inputs to spaces with both positive and negative values, unlike ReLU and sigmoid. Thus, in this study, we employ Tanh in the hidden layers. Further, we use a custom sigmoid-based activation function at the output layer to ensure the model predictions lie within the same limits as the original state. Since this is a non-convex optimization problem, gradient descent-based methods are generally used to iteratively converge to the optimal solution. For our study, we use the popular Adam optimization algorithm, which is a variant of stochastic gradient descent (SGD) that realizes the benefits of two other SGD algorithms: adaptive gradient algorithm (AdaGrad) and root mean square propagation (RMSProp). Instead of using a single learning rate as in SGD, Adam computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. In this case, we control the learning rate so that there is minimum oscillation when it reaches the global minimum while taking big enough steps to pass the local minimum hurdles. This method is particularly efficient for larger problem sizes involving more data or parameters. Moreover, it requires relatively lesser memory for the training procedure. The number of epochs in the training process is also selected carefully to ensure convergence. Neural network training is inherently stochastic as it involves a random initialization of the parameters (weights and biases) at the start of the optimization. 
Also, the non-convexity of the loss function might result in the algorithm converging to a local minimum among multiple local minima according to a specific value of initial weights and biases. This manifests in the keras random seed we set in our code. If the network is robust, this generally does not affect the network predictions much. Nevertheless, in this work, we employ techniques such as stochastic weight averaging, model averaging, and ensemble averaging in the network training phase to mitigate these issues and ensure consistency in model predictions. §.§ Error metrics Once trained, the network is used to predict the thermo-chemical scalars, which include species mass fractions and temperature. The species production rates and heat release rate are also computed based on this reconstructed thermo-chemical scalars. The motivation behind calculating the species production rates and heat release rate is their nonlinear dependence on the species mass fractions and temperature, which provides a more stringent metric for assessing the reconstruction accuracy of the full thermo-chemical state and the overall dimensionality reduction strategy. Further, apart from having a tangible physical meaning, the reconstruction error associated with the heat release rate also provides an overall assessment of the quality of the reduced manifold since the heat release rate represents an aggregate effect of all the quantities of interest. A key point to note is that the network predictions correspond to a scaled version of the original state since the network is trained with scaled input feature vectors. Hence, we suitably unscale the network outputs before calculating the errors in the reconstruction of thermo-chemical scalars. Analogous to the error metrics in <cit.>, we examine the following error ratios, r_i = ln{ε_i^CoK-PCA/ε_i^CoK-PCA-ANN}, r_i = ln{ε_i^PCA/ε_i^PCA-ANN}, r_i = ln{ε_i^PCA-ANN/ε_i^CoK-PCA-ANN}, to compare the relative performance of different methods such as CoK-PCA, PCA, CoK-PCA-ANN, and PCA-ANN considered in our study. Again, the subscript i can represent either the maximum (m) or average (a) errors. The value of r_i will be positive if the ratio inside the logarithm is greater than unity (the error in the denominator is lower), indicating that the technique represented by the denominator is more accurate than that represented by the numerator. In the results to be shown, following <cit.>, we will denote positive r_i by blue and negative by brown colored bars. § RESULTS To investigate the accuracy of the proposed reconstruction methodology for combustion datasets, we consider four test cases representative of various physical and chemical phenomena (e.g., autoignition, flame propagation) ubiquitous in such scenarios: * autoignition of a premixed ethylene/air mixture in a homogeneous reactor, * autoignition, with two-stage ignition kinetics, of a dimethyl ether (DME)/air mixture in a homogeneous reactor, * one-dimensional freely propagating planar laminar premixed flame of ethylene/air mixture, * two-dimensional turbulent autoignition of ethanol/air at homogeneous charge compression ignition (HCCI) conditions. The datasets represent an increasing order of complexity of chemical kinetics and flow-chemistry interactions. The first two cases represent homogeneous (spatially zero-dimensional) autoignition, albeit ethylene/air with conventional ignition kinetics, while DME/air has more complex low and high temperature ignition kinetics. 
The third case incorporates spatial variation, including convection and diffusion effects in the canonical planar laminar premixed flame configuration. The fourth case represents complex turbulence-chemistry interactions in a spatially 2-D configuration under conditions relevant to practical devices. §.§ Premixed ethylene-air in a homogeneous reactor In this section, we consider the dataset that characterizes spontaneous ignition in a simple homogeneous (zero-dimensional) reactor. For dataset generation, we simulate a constant pressure reactor with a premixed ethylene-air mixture at a pressure P = 1.72atm for a suite of nine flamelets, i.e., D_i ∀ i ∈{ 1, 2, ⋯, 9 }, each with a different initial temperature (T) and equivalence ratio (ϕ) as illustrated in Fig. <ref>. Specifically, we perturb the initial conditions (T, ϕ) from a reference state of D_1 ≡ (T = 1200K, ϕ = 0.4) by Δ T = ±50K and Δϕ = ± 0.25. Thus, each flamelet is parameterized by a combination of initial (T, ϕ) where T ∈{1150K, 1200K, 1250K} and ϕ ∈{ 0.375, 0.4, 0.425 }. The chemistry is represented by a 32-species, 206-reactions mechanism <cit.>. The homogeneous reactor simulations are performed with Cantera <cit.>, and each flamelet is computed for different durations to ensure that the profiles remain nearly similar. For the reference state, the reactor is evolved for 2.5 with a time step of 1 to yield 2501 data samples. Hence, in this case, the original design matrix D consists of n_g = 2501 points and n_v = 33 variables, comprising 32 species and temperature. The next step involves a data preprocessing stage where the design matrix for each state is zero-centered by subtracting with the mean feature vector and normalized with the absolute maximum feature vector to obtain the scaled data matrix, 𝐗. This ensures an unbiased data representation with equal weightage given to all the features. To generate the low-dimensional manifolds, i.e., using PCA and CoK-PCA, we compute the principal vectors and values based on the scaled reference state (X_1), which eventually forms the basis for constructing the training/validation data. Next, we perform an aggressive truncation of the reduced manifolds by retaining n_q = 5 dominant principal vectors out of the n_v = 33 vectors that capture approximately 99% of the variance and 98% of the kurtosis in the dataset, respectively. Using the principal vectors computed on the scaled reference state (X_1), we obtain the LDM representation (score matrices) 𝐙_q^4 and 𝐙_q^2 through the dimensionality reduction procedure discussed in Sec. <ref> for the CoK-PCA and PCA reduced manifolds, respectively. It should be noted that this projection is a linear operation. After obtaining the LDMs with PCA and CoK-PCA, the next step in the a priori analysis is to evaluate the reduced manifolds in conjunction with the nonlinear reconstruction of the original thermo-chemical state through ANNs. For the ANN training phase, the input feature vectors are the rows of the score matrices (𝐙^4_q, 𝐙^2_q) and output vectors are the corresponding rows of the scaled original thermo-chemical state matrix 𝐗; these matrices are arranged based on the different flamelets (D_js) using train-test split shown in Fig. <ref>, i.e., flamelets D_1, D_2, D_4, D_6, and D_8 are used for ANN training only. Through hyperparameter tuning, the best network architecture is ascertained with four hidden layers of widths of 40, 64, 40, and 32 neurons, respectively. 
In addition, the widths of input and output layers correspond to n_q = 5 and n_v = 33 neurons, respectively. Further, we use a hyperbolic tangent activation in the hidden layers and a custom sigmoid-based activation at the output layer, which ensures the network predictions are bounded in the same limits as the scaled inputs. Finally, we employ the widely used Adam optimizer (learning rate = 1e-3) to facilitate robust, stable, fast network learning. Figure <ref> shows the loss curves obtained for CoK-PCA-ANN and PCA-ANN, where convergence is achieved at around 100 epochs with a validation loss of about 2e-5. Having trained on a subset of the flamelets, we use the neural network to predict (or reconstruct) the scaled species mass fractions and temperature for the test states, i.e., D_j ∀ j ∈{ 3,5,7,9 }. To ensure that the reconstructed thermo-chemical state results in a unit sum of species mass fractions, as is the standard practice, all reconstructed species mass fractions which yield negative values (that are slightly smaller than zero) are taken to be zero, after which any deviation from the sum equalling unity is adjusted for in the non-participating or bath species. Using the reconstructed thermo-chemical scalars, 𝐃_𝐪, we proceed to compute the species production rates and heat release rates. The reconstructed quantities are compared against the original thermo-chemical state, 𝐃, and their derived quantities (species production rates, heat release rates) using the error metrics, r_m and r_a. In Fig. <ref>, we compare error ratios of linear and ANN reconstruction (Eqs. <ref> and <ref>) of thermo-chemical scalars for both the dimensionality reduction methods. N_2 being an inert species has not been included here. For most variables, ANN reconstruction outperforms linear reconstruction (demonstrated by blue bars) with respect to the average (r_a) and maximum (r_m) error metrics. An exception is temperature, where linear reconstruction performs marginally better in terms of r_m (demonstrated by brown bars). This observation is consistent for both methods, i.e., PCA and CoK-PCA. Not surprisingly, as shown in Fig. <ref>, the errors in species production rates and heat release rate, computed from the reconstructed thermo-chemical state, are significantly lower with ANN reconstruction compared with linear reconstruction. In general, as n_q increases, the accuracy improvements obtained with ANN in comparison to linear reconstruction decrease as the reduced manifold becomes an increasingly better linear approximation of the original state; in the limit of n_q = n_v linear reconstruction is exact, which is a scenario with no reduction in dimensionality. As dimensionality needs to be reduced as aggressively as possible, one can conclude that ANN is better suited for reconstructing data from low-dimensional manifolds. Next, we compare the two dimensionality reduction techniques against each other, both with ANN reconstruction. Figure <ref> shows the error ratios for PCA-ANN vs. CoK-PCA-ANN (Eq. <ref>) in reconstructing thermo-chemical scalars (left), and species production rates and heat release rates (right). For the scalars, it can be clearly seen that CoK-PCA-ANN outperforms PCA-ANN in predictions of 25 and 21 (out of 33) variables for r_m and r_a metrics, respectively. 
The trend becomes more prominent in the case of species production rates and heat release rates where CoK-PCA-ANN predicts production rates more accurately for 23 out of the 32 species with the r_a metric and 24 out of the 32 species with the r_m metric. Notably, CoK-PCA-ANN captures heat release rate better than PCA-ANN in terms of both the error metrics. While r_m and r_a are global error metrics, it is instructive to examine the temporal distribution of reconstruction errors and determine whether the errors are low/high in the unburnt, igniting, or fully burnt portions of the flame. Figure <ref> presents the absolute reconstruction error of heat release rate plotted against time for the four test flamelets: D_3, D_5, D_7, and D_9. For reference, the progress variable is plotted on the right y-axis of each figure. Both methods incur significant error in the reaction zones, with the peak at intermediate values of the progress variable, which occurs at 0.8, 0.4, 1, and 1.9 for D_3, D_5, D_7, and D_9, respectively. As expected, the error is much lower on the unburnt and the fully burnt portions. Further, for D_3 and D_9, CoK-PCA-ANN incurs a significantly lower peak reconstruction error than PCA-ANN (demonstrated by the blue peaks smaller in magnitude than the red peaks), which is reflected in the r_m error presented in Fig. <ref> (d). However, the peak error for D_7 is higher for CoK-PCA-ANN. For D_5, both the methods incur essentially the same magnitude of errors and perform at par with each other. Nonetheless, across the four test flamelets, CoK-PCA-ANN yields an overall smaller average reconstruction error than PCA-ANN, as reflected in the r_a error presented in Fig. <ref> (c). These comparisons provide further evidence that the proposed CoK-PCA-ANN method predicts the overall chemical kinetics in the reaction zone better than PCA-ANN. §.§ Two-stage autoignition of dimethyl ether-air mixture In contrast to ethylene, which has conventional single-stage ignition chemistry, a class of hydrocarbon fuels characterized by more complex two-stage ignition (a low-temperature and a high-temperature) chemistry are increasingly considered suitable for novel combustion concepts such as homogeneous charge compression ignition (HCCI) <cit.>. HCCI relies on volumetric autoignition of a (nearly) homogeneous fuel charge and realizes the benefits of low emissions due to fuel-lean combustion while also achieving high efficiencies. However, controlling the ignition timing is the biggest challenge since the charge ignites spontaneously due to compression heating. Consequently, modeling the ignition processes of two-stage ignition fuels under engine-relevant conditions is an open challenge. Dimethyl ether (DME) is a prominent example, and its ignition behavior resulting from turbulence-chemistry interactions at engine-relevant conditions has been widely studied using DNS <cit.>. From a dimensionality reduction perspective, DME ignition presents distinct challenges from that of ethylene; the chemical pathways and the participating chemical species for the low-temperature ignition chemistry are different from high-temperature chemistry. This motivates us to test the capability of CoK-PCA-ANN in reconstructing the original state space from the reduced manifold for the two-stage ignition of DME. We consider a constant pressure zero-dimensional homogeneous reactor of a stoichiometric mixture of hydrogen-enriched DME fuel and air. The ratio of hydrogen to DME is 3:2 in the fuel mixture, similar to that in <cit.>. 
The initial pressure is 1 atm while the initial temperature is varied from 600K to 800K in increments of 25K, for a total of nine flames. This range of initial temperatures is such that the flames contain both two-stage as well as single-stage ignition behavior. Finite rate chemistry is specified using the 39-species, 175-reactions skeletal mechanism developed in <cit.>, and the flames are simulated with Cantera <cit.> for a duration of 1 with a fixed time step of 0.1. In this case, the original design matrix 𝐃 consists of n_g = 10001 points and n_v = 40 variables, comprising 39 species and temperature. Traditional dimensionality reduction techniques, such as PCA or linear regression, may not effectively capture the nonlinear interactions present in the data. The data associated with the two-stage ignition of DME is high-dimensional and contains intricate patterns. This includes time-dependent or transient behavior, multiple ignition modes, and variations under different operating conditions. This complexity makes it difficult to find a low-dimensional representation that captures the essential information while discarding irrelevant or redundant features. The reconstruction of two-stage ignition using CoK-PCA-ANN offers several benefits. It enables a deeper understanding of DME combustion, facilitates the development of more accurate ignition models, and provides valuable insights for optimizing combustion strategies. This approach aids in reducing data dimensionality by extracting pertinent features and eliminating redundant information, thereby improving computational efficiency while maintaining prediction accuracy. CoK-PCA and PCA are performed using the data of all nine flames, and dimensionality is reduced to n_q=5. To train the ANNs for reconstructing the full thermo-chemical state from the reduced state, the data is split into training and testing sets, with five flames (initial temperatures of 600 K, 650 K, 700 K, 750 K, 800 K) comprising the former, and the rest, the latter. We randomly shuffle the training dataset and set aside 20 % for the validation process. After conducting hyperparameter tuning, the network architecture is determined with two hidden layers comprising 10 and 20 neurons, respectively. The input and output layers have a width of n_q = 5 and n_v = 40 neurons, respectively. A hyperbolic tangent activation function for the hidden layers, a custom sigmoid-based activation function for the output layer, and the Adam optimizer are used as before. Figure <ref> shows the training and validation loss for the PCA-ANN and CoK-PCA-ANN. It is evident that the validation loss remains consistently only slightly higher than the training loss (∼ 2.5e-4) for a significant number of epochs (200-500), and the model has converged. We employ early stopping to achieve this convergence, thereby saving computational resources and preventing overfitting. This indicates that the model is generalizing well to unseen data. Despite the slight difference in loss, the model demonstrates robustness and reliability in its predictions. This suggests that the model has learned intricate patterns present in the two-stage ignition dataset and features from the training data that allow it to make accurate predictions on new examples, resulting in a reliable and effective model. The error ratios in thermo-chemical scalars, species production rates, and heat release rates were computed using equations (<ref>) - (<ref>) and visualized in Figures <ref>, <ref>, and <ref>. 
Notably, the exclusion of N_2 as an inert species was not considered in this analysis. The results demonstrate that the overall nonlinear reconstruction employing ANN (blue bars) exhibits lower error compared to linear reconstruction (brown bars) across most species, temperature, production rates, and heat release rates for both PCA and CoK-PCA methods (Figures <ref>, <ref>). Figure <ref> illustrates the error ratio between PCA-ANN and CoK-PCA-ANN for thermo-chemical scalars (<ref> (a) and (b)), species production rates, and heat release rates (<ref> (c) and (d)). The errors in reconstructed thermo-chemical scalars show mixed trends, unlike the ethylene-air dataset for which CoK-PCA-ANN was consistently more accurate than PCA-ANN. However, the accuracy of species production rates and, more importantly, the heat release rate for CoK-PCA-ANN is better than PCA-ANN. This result reinforces the notion that error metrics based only on thermo-chemical state reconstruction may not be sufficient measures of accuracy. Going beyond the error ratio, and similar to the ethylene-air case, we plot the absolute errors of heat release rate for one of the DME-air flames from the test set with an initial temperature of 625 K as shown in Fig. <ref>. Since this mixture has two-stage ignition, the heat release rate for the second stage (at ∼ 0.25 ms) is orders of magnitude larger than the first stage (at ∼ 0.047 ms). To make the comparison clearer, insets in Fig. <ref> show the regions zoomed on the two stages. It is evident that the absolute errors of heat release rate are greater by up to an order of magnitude with linear reconstruction (Fig. <ref> (a)) compared with ANN-based reconstruction (Fig. <ref> (b)). Moreover, while the errors for the first stage are comparable between PCA-ANN and CoK-PCA-ANN, for the second stage, CoK-PCA-ANN is more accurate. §.§ Premixed ethylene-air laminar flame The third case we consider is a one-dimensional freely-propagating planar laminar premixed flame of the ethylene-air mixture. In addition to the chemical reactions that govern the evolution of homogeneous reactors of the previous two cases, this case has effects of convection and diffusion that influence the thermo-chemical evolution. The chemistry is represented by the same 32-chemical species, 206-reactions mechanism <cit.>, resulting in n_v = 33 features. The freely-propagating flame is simulated in a one-dimensional domain of 0.02 m discretized with a grid of around 550 points. The pressure is kept at 1 atm, and a parametric variation is considered for the unburnt mixture conditions. Analogous to the ensemble training performed in Sec. <ref>, to construct the required training and testing data, we perturb the unburnt mixture temperature and equivalence ratio, (T, ϕ), by Δ T = ± 50 K and Δϕ = ± 0.25 from the reference state, i.e., D_1 ≡ (T = 300 K, ϕ = 0.6). This effectively results in nine configurations, D_i ∀ i ∈{ 1, 2, ⋯, 9 }, one for each combination of (T, ϕ) where T ∈{ 250 K,300 K,350 K} and ϕ ∈{ 0.575, 0.6, 0.625 }. Again, to generate the CoK-PCA and PCA reduced manifolds, the principal components are computed with respect to the scaled reference state, X_1, by selecting n_q = 5 leading principal vectors out of the n_v = 33 vectors that capture approximately 99% of the variance and 98% of the kurtosis in the dataset, respectively. Following the dimensionality reduction procedure in Sec. <ref>, we compute the score matrices, 𝐙^4_q and 𝐙^2_q for the CoK-PCA and PCA low-dimensional manifolds, respectively. 
For the ANN training, a similar split of the data into training and testing sets, as Sec. <ref>, is performed here; D_1, D_2, D_4, D_6, and D_8 are used for training and the rest for testing. Accordingly, we construct the input feature vectors and ground truths to train a neural network with four hidden layers of widths 48, 48, 48, and 56 neurons. The widths of input and output layers are n_q = 5 and n_v = 33 neurons, respectively. The layer activation functions remain the same as before, a hyperbolic tangent function, with the use of Adam optimizer (learning rate = 1e-4) for training. Figures <ref> (a) and (b) depict the loss curves obtained for CoK-PCA-ANN and PCA-ANN, respectively. Following Sec. <ref>, we assess the reconstruction accuracy of the trained models on the test states, i.e., D_j ∀ j ∈{ 3, 5, 7, 9 }. Similar to the trends observed in previous cases, ANN reconstruction outperforms linear reconstruction for all the quantities of interest, the plots of which are not presented here for brevity. With reconstruction based on ANNs, we next focus on the performance of CoK-PCA-ANN against PCA-ANN in terms of the error ratios (r_a, r_m), which are presented in Fig. <ref>. For the accuracy of thermo-chemical scalars, we observe a different trend in this case, with PCA-ANN being more accurate than CoK-PCA-ANN for 19 out of the 33 variables for r_a. However, as hypothesized, CoK-PCA-ANN performs better than PCA-ANN in terms of the r_m metric in accurate predictions of 21 out of the 33 variables. Further, while comparing errors in the reconstruction of species production rates and heat release rates, CoK-PCA-ANN dominates over PCA-ANN in both error ratios. In particular, CoK-PCA-ANN significantly improves upon PCA-ANN by predicting production rates for 22 out of 32 species in terms of the r_m error and 18 out of 32 species in terms of the r_a error. More importantly, it incurs lower errors in reconstructing the heat release rate in both metrics, which is an overall measure of the fidelity of the chemical system. This case clearly illustrates the fact that errors in reconstructing the thermo-chemical state alone might not be a sufficient measure of accuracy for a given dimensionality reduction technique, and a broader set of metrics might be prudent. The profile of absolute errors in heat release rates obtained for both the methods, CoK-PCA-ANN (dashed blue) and PCA-ANN (solid red), is shown in Fig. <ref> for the four test states, D_3, D_5, D_7, and D_9. We observe that CoK-PCA-ANN outperforms PCA-ANN in accurately predicting the steady-state flame location for all the test states, thereby characterizing flame propagation better. This behavior is consistent with the r_m errors presented in Fig. <ref> (d). Further, both techniques capture the non-reacting regions reasonably well in all the test states. However, in these regions, CoK-PCA-ANN performs marginally better than PCA-ANN by predicting nearly zero heat release rates for the test flames, D_5, D_7, and D_9 (Figures <ref> (b), (c), (d)). It should be noted that negligible reconstruction errors incurred by the methods in these regions (i.e., predicting non-zero heat release in the non-reacting zones) can be attributed to statistical inconsistencies or stochasticity of the ANN training process. Consequently, this is reflected in the r_a metric (average error), which is lesser in the case of CoK-PCA-ANN than PCA-ANN (demonstrated by blue bars) in Fig. <ref> (c). 
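The decoder networks used in the three cases discussed so far share the same ingredients: Tanh hidden layers, a sigmoid-type output activation bounded by the limits of the scaled data, the Adam optimizer, an MSE loss, and early stopping. The Keras-style sketch below is one plausible realization rather than the authors' code; the layer widths shown are those quoted for the laminar-flame case, and the exact form of the custom output activation is our assumption, as the text does not specify it.

```python
import tensorflow as tf

def build_decoder(n_q=5, n_v=33, widths=(48, 48, 48, 56), lo=-1.0, hi=1.0, lr=1e-4):
    """Fully-connected decoder from n_q principal components to n_v scaled scalars.

    The output activation rescales a sigmoid to [lo, hi] so that predictions stay
    within the range of the scaled training data (our guess at the "custom
    sigmoid-based activation" described in the text).
    """
    bounded = lambda x: lo + (hi - lo) * tf.sigmoid(x)
    inputs = tf.keras.Input(shape=(n_q,))
    x = inputs
    for w in widths:
        x = tf.keras.layers.Dense(w, activation='tanh')(x)
    outputs = tf.keras.layers.Dense(n_v, activation=bounded)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss='mse')
    return model

# Typical training call, with Z_train holding rows of the score matrix and
# X_train the corresponding rows of the scaled thermo-chemical state:
# model = build_decoder()
# model.fit(Z_train, X_train, validation_split=0.2, epochs=500,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=20,
#                                                       restore_best_weights=True)])
```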
§.§ Homogeneous charge compression ignition In this section, we examine a dataset that encompasses the influence of spatial transport involving convection and diffusion and turbulence. The dataset focuses on homogeneous charge compression ignition (HCCI) of ethanol, which is representative of internal combustion engines <cit.>. The simulation corresponds to high-pressure, high-temperature auto-ignition of a turbulent mixture composed of premixed ethanol-air and combustion products, emulating the process of “exhaust gas recirculation” (EGR). The simulation is performed in a fully periodic domain with a two-dimensional spatial grid of 672 × 672 points. The initial conditions include a nominal pressure of 45atm and a mean temperature of 924K. The reactants are set to an equivalence ratio of 0.4. To account for the uneven mixing caused by EGR, a spatial temperature fluctuation and a separately computed divergence-free turbulent velocity field are superimposed onto the system. Furthermore, the simulation also considers the effects of compression heating resulting from the motion of the piston. The chemistry is represented by a 28 chemical species reaction mechanism. Thus, at each simulation snapshot, the design matrix, 𝐃 consists of n_g = 672 × 672 data samples and n_v = 29 thermo-chemical scalars. For this study, we consider the temporal checkpoint at t = 1.2 ms <cit.>, which corresponds to the propagation of the flame fronts in the bulk of the global domain, as shown in the heat release rate contours in Fig. <ref>, which has been saturated to a peak heat release rate of 1 × 10^9 Jm^-3s^-1 in order to demonstrate the growth in the size of the ignition kernels. For the testing state, we consider the simulation snapshot at 1.19 ms. In other words, we are interested in investigating the efficacy of the proposed CoK-PCA-ANN method in predicting the thermo-chemical state at an unseen state (t = 1.19 ms) while being trained on a subsequent checkpoint at t = 1.2 ms. To obtain the score matrices 𝐙^4_q and 𝐙^2_q, we use the principal vectors computed on the reference state, i.e., on t = 1.2 ms. The low-dimensional manifolds are constructed by retaining n_q = 5 out of n_v = 29 principal vectors that correspond to approximately 99% of the variance and kurtosis in the reduced PCA and CoK-PCA manifolds, respectively. The next step involves constructing the required train and test data comprising input feature vectors and corresponding ground truths. It should be noted that since this is a two-dimensional dataset, we suitably flatten it to yield N = 451584 data samples. Further, a neural network with three hidden layers of widths 8, 8, and 64 neurons is trained till convergence with an Adam optimizer (learning rate = 0.028). In addition, early stopping is employed to ensure the network does not lead to overfitting of the training data. The corresponding loss curves obtained for the manifolds are presented in Fig. <ref>. We then use the trained network to predict the thermo-chemical scalars at t = 1.19 ms for both CoK-PCA and PCA reduced manifolds. In a similar manner, using the reconstructed thermo-chemical scalars, species production rates and heat release rates are computed. In the following, we present a comparison of CoK-PCA-ANN and PCA-ANN in terms of the reconstruction errors of the aforementioned quantities. In Fig. 
<ref>, it is evident that CoK-PCA-ANN performs significantly better than PCA-ANN in the reconstruction of thermo-chemical scalars with more accurate predictions of 20 out of 29 species in both r_a and r_m errors. Furthermore, it completely dominates PCA-ANN in accurately reconstructing production rates for 90% and 93% of the species in terms of r_a and r_m metrics, respectively. In addition, it provides an accurate representation of the chemical dynamics in the reaction zones by incurring lower reconstruction errors for heat release rates in both metrics (r_m, r_a). Contrary to the observations in <cit.>, where the CoK-PCA-based manifold performed poorly in terms of the average errors (r_a) while considering the entire spatial domain, CoK-PCA, when coupled with ANN, overcomes this issue and better represents the stiff chemical dynamics in the average error as well. Next, we plot the reconstructed heat release rate contours in Fig. <ref> to get a qualitative difference in the magnitude of the heat release rate within the ignition kernels achievable through CoK-PCA and PCA-reduced manifolds. Due to the inherent ability of excess kurtosis to suitably capture outliers, CoK-PCA-ANN identifies the ignition zones better (as demonstrated by the better matching of color hues with the original) in the entire domain, which is in good agreement with the lesser maximum error (r_m) as shown in Fig. <ref> (d) for heat release rate. Additionally, due to the coupling of ANN with CoK-PCA, the non-igniting regions are also represented better than PCA-ANN, leading to a lower average error (r_a) as presented in Fig. <ref> (c). § CONCLUSIONS AND FUTURE WORK In this paper, we have proposed an enhanced version of the co-kurtosis PCA (CoK-PCA) based dimensionality reduction method, namely CoK-PCA-ANN, which leverages the potential of artificial neural networks (ANNs) to model complex nonlinear relationships inherent between the aggressively truncated low-dimensional manifolds and the original thermo-chemical state. The rationale behind this work is to assess the overall efficacy of the CoK-PCA method in comparison to PCA in conjunction with nonlinear reconstruction methods and expand its applicability to chemically reacting systems presenting stiff dynamics. A brief overview of the various state-of-the-art nonlinear reconstruction methods, such as ANNs, gaussian process regression (GPR), kernel density methods, autoencoders, etc., combined with PCA was discussed. In contrast, these methods are yet to be explored in the CoK-PCA framework which motivates this work. The framework of the proposed CoK-PCA-ANN dimensionality reduction method was presented with a discussion on the generation of the low-dimensional manifold using linear projection (encoding) with CoK-PCA followed by nonlinear reconstruction of the original thermochemical state space (decoding) using ANNs. The performance of the CoK-PCA-ANN method was benchmarked with linear CoK-PCA and PCA-ANN methods for four combustion test cases that characterize various physical and chemical phenomena in reacting flows (e.g., autoignition, flame propagation): a homogeneous reactor simulation representing conventional single-stage and complex two-stage autoignition, a one-dimensional freely propagating laminar premixed flame exhibiting flame propagation, and two-dimensional turbulent autoignition in homogeneous charge compression ignition conditions. 
In contrast to linear methods, ANNs demonstrated significantly high reconstruction accuracies for the CoK-PCA and PCA manifolds in terms of thermo-chemical scalars, species production rates, and heat release rates with aggressive truncation (low n_q). Further, the quality of the manifolds was assessed in conjunction with ANN for the aforementioned quantities of interest. As hypothesized, CoK-PCA-ANN outperforms PCA-ANN in all the test cases in terms of the maximum r_m errors for the thermo-chemical scalars, species production rates, and most importantly, heat release rates, thereby reinforcing the fact that the chemical kinetics prevalent in the ignition zones representative of stiff dynamics is captured more accurately by CoK-PCA than PCA. Contrary to the findings in the previous assessment of plain vanilla CoK-PCA <cit.>, CoK-PCA-ANN incurred lower reconstruction errors in the average error metric (r_a) as well with a better representation of the unburnt reactants and burnt products in all the test cases. Additionally, CoK-PCA-ANN outperforms PCA-ANN in accurately predicting an unseen test state different from the training set considered in the 2D HCCI case. To summarize, the results from the above analyses suggest that CoK-PCA-ANN realizes the advantages of both CoK-PCA and ANNs and proves reliable, robust, and generalizable to unseen thermo-chemical states that share similar ignition kinetics as the training state. However, it should be remarked that, in this paper, the investigation of CoK-PCA-based nonlinear reconstruction using ANNs was carried out in an a priori setting. It is well known that these data-driven dimensionality reduction methods are capable of accelerating numerical simulations of reacting flows by solving a reduced set of principal component transport equations as opposed to solving a very high-dimensional system of species conservation equations. Such a kind of a posteriori validation performed for PCA remains to be explored for CoK-PCA and therefore forms the future scope of this paper. § ACKNOWLEDGMENTS The work at IISc was supported under a project from the National Supercomputing Mission, India. DN is a recipient of the Ansys M.Tech. (Research) Fellowship. AJ was funded by a project from Shell Technology Center, Bengaluru, India. KA is a recipient of the Arcot Ramachandran Young Investigator award, IISc. Work by HK and UB was part of the ExaLearn Co-design Center, supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government. elsarticle-num
http://arxiv.org/abs/2307.01318v1
20230703194413
A contraction-recursive algorithm for treewidth
[ "Hisao Tamaki" ]
cs.DS
[ "cs.DS" ]
Let (G) denote the treewidth of graph G. Given a graph G and a positive integer k such that (G) ≤ k + 1, we are to decide if (G) ≤ k. We give a certifying algorithm RTW ("R" for recursive) for this task: it returns one or more tree-decompositions of G of width ≤ k if the answer is YES and a minimal contraction H of G such that (H) > k otherwise. Starting from a greedy upper bound on (G) and repeatedly improving the upper bound by this algorithm, we obtain (G) with certificates. RTW uses a heuristic variant of Tamaki's PID algorithm for treewidth (ESA2017), which we call HPID. Informally speaking, PID builds potential subtrees of tree-decompositions of width ≤ k in a bottom up manner, until such a tree-decomposition is constructed or the set of potential subtrees is exhausted without success. HPID uses the same method of generating a new subtree from existing ones but with a different generation order which is not intended for exhaustion but for quick generation of a full tree-decomposition when possible. RTW, given G and k, interleaves the execution of HPID with recursive calls on G /e for edges e of G, where G / e denotes the graph obtained from G by contracting edge e. If we find that (G / e) > k, then we have (G) > k with the same certificate. If we find that (G / e) ≤ k, we "uncontract" the bags of the certifying tree-decompositions of G / e into bags of G and feed them to HPID to help progress. If the question is not resolved after the recursive calls are made for all edges, we finish HPID in an exhaustive mode. If it turns out that (G) > k, then G is a certificate for (G') > k for every G' of which G is a contraction, because we have found (G / e) ≤ k for every edge e of G. This final round of HPID guarantees the correctness of the algorithm, while its practical efficiency derives from our methods of "uncontracting" bags of tree-decompositions of G / e to useful bags of G, as well as of exploiting those bags in HPID. Experiments show that our algorithm drastically extends the scope of practically solvable instances. In particular, when applied to the 100 instances in the PACE 2017 bonus set, the number of instances solved by our implementation on a typical laptop, with the timeout of 100, 1000, and 10000 seconds per instance, are 72, 92, and 98 respectively, while these numbers are 11, 38, and 68 for Tamaki's PID solver and 65, 82, and 85 for his new solver (SEA 2022). § INTRODUCTION Treewidth is a graph parameter introduced and extensively studied in the graph minor theory <cit.>. A tree-decomposition of graph G is a tree with each node labeled by a vertex set of G, called a bag, satisfying certain conditions (see Section <ref>) so that those bags form a tree-structured system of vertex-separators of G.
The width w(T) of a tree-decomposition T is the maximum cardinality of a bag in T minus one and the treewidth (G) of graph G is the smallest k such that there is a tree-decomposition of G of width k. The impact of the notion of treewidth on the design of combinatorial algorithms is profound: there are a huge number of NP-hard graph problems that are known to be tractable when parameterized by treewidth: they admit an algorithm with running time f(k)n^O(1), where n is the number of vertices, k is the treewidth of the given graph, and f is some typically exponential function (see <cit.>, for example). Those algorithms typically perform dynamic programming based on the system of separators provided by the tree-decomposition. To make such algorithms practically useful, we need to compute the treewidth, or a good approximation of the treewidth, together with an associated tree-decomposition. Computing the treewidth (G) of a given graph G is NP-complete <cit.>, but is fixed-parameter tractable <cit.>. In particular, the algorithm due to Bodlaender <cit.> runs in time linear in the graph size with a factor of 2^O((G)^3). Unfortunately, this algorithm does not seem to run efficiently in practice. In more practical approaches to treewidth computation, triangulations of graphs play an important role. A triangulation of graph G is a chordal graph H with V(G) = V(H) and E(G) ⊆ E(H). For every tree-decomposition T of G, filling every bag of T into a clique gives a triangulation of G. Conversely, for every triangulation H of G, there is a tree-decomposition of G in which every bag is a maximal clique of H. Through this characterization of tree-decompositions in terms of triangulations, we can enumerate all relevant tree-decompositions by going through the total orderings on the vertex set, as each total ordering defines a triangulation for which the ordering is a perfect elimination order (see <cit.>, for example). Practical algorithms in the early stage of treewidth research performed a branch-and-bound search over these total orderings <cit.>. Dynamic programming on this search space results in a 2^n n^O(1) time algorithms <cit.>, which works well in practice for graphs with a small number of vertices. It should also be noted that classical upper bound algorithms, such as min-deg or min-fill, which heuristically choose a single vertex ordering defining a tree-decomposition, are fast and often give a good approximation of the treewidth in a practical sense <cit.>. Another important link between chordal graphs and treewidth computation was established by Bouchitté and Todinca <cit.>. They introduced the notion of potential maximal cliques (PMCs, see below in "Our approach" paragraph for a definition) and gave an efficient dynamic programming algorithm working on PMCs (BT dynamic programming) to find a minimal triangulation of the given graph that corresponds to an optimal tree-decomposition. They showed that their algorithm runs in polynomial time for many special classes of graphs. BT dynamic programming is also used in an exponential time algorithm for treewidth that runs in time O(1.7549^n) <cit.>. BT dynamic programming had been considered mostly of theoretical interest until 2017, when Tamaki presented its positive-instance driven (PID) variant, which runs fast in practice and significantly outperforms previously implemented treewidth algorithms <cit.>. Further efforts on treewidth computation based on or around his approach have been made since then, with some incremental successes <cit.>. 
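As a concrete example of the classical greedy upper-bound heuristics mentioned above, the following sketch (ours, not code from any of the cited solvers) computes the width of the tree-decomposition induced by a min-degree elimination ordering; min-fill differs only in the vertex-selection rule.

```python
import networkx as nx

def min_degree_width(G: nx.Graph) -> int:
    """Greedy upper bound on tw(G): repeatedly eliminate a minimum-degree vertex,
    turning its neighborhood into a clique; the largest neighborhood size seen
    is the width of the induced tree-decomposition."""
    H = G.copy()
    width = 0
    while H.number_of_nodes() > 0:
        v = min(H.nodes, key=H.degree)
        nbrs = list(H.neighbors(v))
        width = max(width, len(nbrs))
        for i in range(len(nbrs)):            # fill edges: make N(v) a clique
            for j in range(i + 1, len(nbrs)):
                H.add_edge(nbrs[i], nbrs[j])
        H.remove_node(v)
    return width
```

Bounds of this kind are what the abstract refers to as the greedy upper bound from which the improvement procedure starts.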
In his most recent work <cit.>, Tamaki introduced another approach to treewidth computation, based on the use of contractions to compute tight lower bounds on the treewidth. For edge e of graph G, the contraction of G by e, denoted by G / e, is a graph obtained from G by replacing e by a new single vertex v_e and letting v_e be adjacent to all neighbors of the ends of e in V(G) ∖ e. A graph H is a contraction of G if H is obtained from G by zero or more successive contractions by edges. It is well-known and easy to see that (H) ≤(G) for every contraction H of G. This fact has been used to quickly compute reasonably good lower bounds on the treewidth of a graph, typically to be used in branch-and-bound algorithms mentioned above <cit.>. Tamaki <cit.> gave a heuristic method of successively improving contraction-based lower bounds which, together with a separate heuristic method for upper bounds, quite often succeeds in computing the exact treewidth of instances that are hard to solve for previously published solvers.
§.§.§ Our approach
Our approach is based on the observation that contractions are useful not only for computing lower bounds but also for computing upper bounds. Suppose we have a tree-decomposition T of G / e of width k for some edge e = {u, v} of G. Let v_e be the vertex to which e contracts. Replacing each bag X of T by X', where X' = X ∖{v_e}∪{u, v} if v_e ∈ X and X' = X otherwise, we obtain a tree-decomposition T' of G of width ≤ k + 1, which we call the uncontraction of T. In a fortunate case where every bag X of T with v_e ∈ X has |X| ≤ k, the width of T' is k. To increase the chance of having such fortunate cases, we deal with a set of tree-decompositions rather than a single tree-decomposition. We represent such a set of tree-decompositions by a set of potential maximal cliques as follows. A vertex set of G is a potential maximal clique (PMC for short) if it is a maximal clique of some minimal triangulation of G. Let Π(G) denote the set of all PMCs of G. For each Π⊆Π(G), let _Π(G) denote the set of all tree-decompositions of G whose bags all belong to Π. Let _Π(G) denote the smallest k such that there is a tree-decomposition in _Π(G) of width k; we set _Π(G) = ∞ if _Π(G) = ∅. Bouchitté and Todinca <cit.> showed that _Π(G)(G) contains a tree-decomposition of width (G) and developed a dynamic programming algorithm (BT dynamic programming) to find such a tree-decomposition. Indeed, as Tamaki <cit.> noted, BT dynamic programming can be used for arbitrary Π⊆Π(G) to compute _Π(G) in time linear in |Π| and polynomial in |V(G)|. A set of PMCs is a particularly effective representation of a set of tree-decompositions for our purposes, because BT dynamic programming can be used to work on Π⊆Π(G) and find a tree-decomposition in _Π(G) that minimizes a variety of width measures based on bag weights. In our situation, suppose we have Π⊆Π(G / e) such that _Π(G / e) = k. Using appropriate bag weights, we can use BT dynamic programming to decide if _Π(G / e) contains T such that the uncontraction T' of T has width k and find one if it exists. These observations suggest a recursive algorithm for improving an upper bound on treewidth. Given graph G and k such that (G) ≤ k + 1, the task is to decide if (G) ≤ k. Our algorithm certifies the YES answer by Π⊆Π(G) with _Π(G) ≤ k. It uses heuristic methods to find such Π and, when this goal is hard to achieve, recursively solves the question of whether (G / e) ≤ k for edge e of G.
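As a concrete illustration of the two elementary operations at the heart of this approach, contracting an edge and uncontracting the bags of a tree-decomposition of G / e into bags of G, here is a minimal Python sketch. The adjacency-set representation and the use of the vertex pair itself as the contracted vertex v_e are assumptions of the sketch, not of our implementation.

```python
def contract_edge(adj, u, v):
    """Return the adjacency sets of G / e for e = {u, v}; the contracted
    vertex v_e is represented by the pair (u, v) itself."""
    ve = (u, v)
    g = {w: set(nbrs) for w, nbrs in adj.items() if w not in (u, v)}
    merged = (adj[u] | adj[v]) - {u, v}      # neighbors of the new vertex
    g[ve] = set(merged)
    for w in merged:
        g[w] -= {u, v}
        g[w].add(ve)
    return g

def uncontract_bags(bags, u, v):
    """Replace the contracted vertex (u, v) by both of its ends in every bag."""
    ve = (u, v)
    out = []
    for bag in bags:
        out.append((bag - {ve}) | {u, v} if ve in bag else set(bag))
    return out
```

Uncontracting the bags of a width-k tree-decomposition of G / e in this way yields bags of G of size at most k + 2, that is, a tree-decomposition of width at most k + 1, and of width k in the fortunate case described above.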
Unless (G / e) = k + 1 and hence (G) = k + 1, the recursive call returns Π⊆Π(G / e) such that _Π(G / e) ≤ k. We use the method mentioned above to look for T ∈_Π(G / e) whose uncontraction has width ≤ k. If we are successful, we are done for G. Even when this is not the case, the uncontractions of tree-decompositions in _Π(G / e) may be useful for our heuristic upper bound method in the following manner. In <cit.>, Tamaki proposed a local search algorithm for treewidth in which a solution is a set of PMCs rather than an individual tree-decomposition and introduced several methods of expanding Π⊆Π(G) into Π' ⊃Π in the hope of having _Π'(G) < _Π(G). His method compares favourably with existing heuristic algorithms but, like typical local search methods, is prone to local optima. To let the search escape from a local optimum, we would like to inject "good" PMCs into the current set Π. It appears that tree-decompositions in _Π'(G / e) such that _Π' (G / e) ≤ k, where k = _Π(G) - 1, are reasonable sources of such good PMCs: we uncontract T ∈_Π'(G / e) into a tree-decomposition T' of G and extract PMCs of G from T'. Each such PMC appears in a tree-decomposition of width ≤ k + 1 and may appear in a tree-decomposition of width ≤ k. It is also important that Π' is obtained, in a loose sense, independently of Π and not under the influence of the local optimum around which Π stays. Our algorithm for deciding if (G) ≤ k interleaves the execution of a local search algorithm with recursive calls on G / e for edges e of G and injects PMCs obtained from the results of the recursive calls. This process ends in one of the following three ways.
* The local search succeeds in finding Π with _Π(G) ≤ k.
* A recursive call on G / e finds that (G / e) = k + 1: we conclude that (G) = k + 1 on the spot.
* Recursive calls on G / e have been tried for all edges e and it is still unknown if (G) ≤ k. We invoke a conventional exact algorithm for treewidth to settle the question.
Note that, when the algorithm concludes that (G) = k + 1, there must be a contraction H of G somewhere down in the recursion path from G such that Case 3 applies and the exact computation shows that (H) = k + 1. In this case, H is a minimal contraction of G that certifies (G) = k + 1, as the recursive calls further down from H have shown (H / e) ≤ k for every edge e of H. As the experiments in Section <ref> show, this approach drastically extends the scope of instances for which the exact treewidth can be computed in practice.
*Organization To quickly grasp the main ideas and contributions of this paper, it is suggested to read the following sections first: Section <ref> – Main algorithm, Section <ref> – Uncontracting PMCs, Section <ref> – Contracting PMCs, and Section <ref> – Experiments (about 9 pages in total including the introduction), together with some parts of the preliminaries section as needed. Section <ref> describes some details of the local search algorithm we use, namely heuristic PID. Sections <ref>, <ref>, and <ref> describe additional techniques for speeding up the main algorithm. Section <ref> offers some concluding remarks. The source code of the implementation of our algorithm used in the experiments is available at <https://github.com/twalgor/RTW>.
§ PRELIMINARIES
*Graphs and treewidth In this paper, all graphs are simple, that is, without self-loops or parallel edges. Let G be a graph. We denote by V(G) the vertex set of G and by E(G) the edge set of G.
As G is simple, each edge of G is a subset of V(G) with exactly two members that are adjacent to each other in G. The complete graph on V, denoted by K(V), is a graph with vertex set V in which every vertex is adjacent to all other vertices. The subgraph of G induced by U ⊆ V(G) is denoted by G[U]. We sometimes use an abbreviation G ∖ U to stand for G[V(G) ∖ U]. A vertex set C ⊆ V(G) is a clique of G if G[C] is a complete graph. For each v ∈ V(G), N_G(v) denotes the set of neighbors of v in G: N_G(v) = {u ∈ V(G) |{u, v}∈ E(G)}. For U ⊆ V(G), the open neighborhood of U in G, denoted by N_G(U), is the set of vertices adjacent to some vertex in U but not belonging to U itself: N_G(U) = (⋃_v ∈ U N_G(v)) ∖ U. We say that vertex set C ⊆ V(G) is connected in G if, for every u, v ∈ C, there is a path in G[C] between u and v. It is a connected component or simply a component of G if it is connected and is inclusion-wise maximal subject to this condition. We denote by (G) the set of all components of G. When the graph G is clear from the context, we denote (G[U]) by (U). A vertex set S ⊆ V(G) is a separator of G if G ∖ S has more than one component. A graph is a cycle if it is connected and every vertex is adjacent to exactly two vertices. A graph is a forest if it does not have a cycle as a subgraph. A forest is a tree if it is connected. A tree-decomposition of G is a pair (T, ) where T is a tree and is a family {X_i}_i ∈ V(T) of vertex sets of G, indexed by the nodes of T, such that the following three conditions are satisfied. We call each X_i the bag at node i.
* ⋃_i ∈ V(T) X_i = V(G).
* For each edge {u, v}∈ E(G), there is some i ∈ V(T) such that u, v ∈ X_i.
* For each v ∈ V(G), the set of nodes I_v = {i ∈ V(T) | v ∈ X_i}⊆ V(T) is connected in T.
The width of this tree-decomposition is max_i ∈ V(T) |X_i| - 1. The treewidth of G, denoted by (G), is the smallest k such that there is a tree-decomposition of G of width k. For each pair (i, j) of adjacent nodes of a tree-decomposition (T, ) of G, let T(i, j) denote the subtree of T consisting of nodes of T reachable from i without passing j and let V(i, j) = ⋃_k ∈ V(T(i, j)) X_k. Then, it is well-known and straightforward to show that X_i ∩ X_j = V(i, j) ∩ V(j, i) and there are no edges between V(i, j) ∖ V(j, i) and V(j, i) ∖ V(i, j); X_i ∩ X_j is a separator of G unless V(i, j) ⊆ V(j, i) or V(j, i) ⊆ V(i, j). We say that T uses separator S if there is an adjacent pair (i, j) such that S = X_i ∩ X_j. In this paper, we assume G is connected whenever we consider a tree-decomposition of G. In this paper, most tree-decompositions are such that X_i = X_j only if i = j. Because of this, we use a convention to view a tree-decomposition of G as a tree T whose nodes are bags (vertex sets) of G.
*Triangulations, minimal separators, and potential maximal cliques Let G be a graph and S a separator of G. For distinct vertices a, b ∈ V(G), S is an a-b separator if there is no path between a and b in G ∖ S; it is a minimal a-b separator if it is an a-b separator and no proper subset of S is an a-b separator. A separator is a minimal separator if it is a minimal a-b separator for some a, b ∈ V(G). Graph H is chordal if every induced cycle of H has exactly three vertices. H is a triangulation of graph G if it is chordal, V(G) = V(H), and E(G) ⊆ E(H). A triangulation H of G is minimal if there is no triangulation H' of G such that E(H') is a proper subset of E(H).
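Returning briefly to the tree-decomposition conditions above, the following minimal Python sketch checks the three conditions and computes the width for a decomposition given explicitly as a list of bags and a tree over bag indices; the data layout is an assumption of the sketch and it plays no role in the algorithms described later.

```python
def is_tree_decomposition(adj, bags, tree_edges):
    """Check conditions (1)-(3) of a tree-decomposition.

    adj: dict vertex -> set of neighbors; bags: list of vertex sets;
    tree_edges: pairs (i, j) of bag indices assumed to form a tree.
    """
    vertices = set(adj)
    if set().union(*bags) != vertices:                     # (1) vertex coverage
        return False
    for u in adj:                                          # (2) edge coverage
        for v in adj[u]:
            if not any(u in b and v in b for b in bags):
                return False
    nbrs = {i: set() for i in range(len(bags))}
    for i, j in tree_edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    for v in vertices:                                     # (3) connectedness of I_v
        nodes = {i for i, b in enumerate(bags) if v in b}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for j in nbrs[i] & nodes:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        if seen != nodes:
            return False
    return True

def width(bags):
    return max(len(b) for b in bags) - 1
```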
It is known (see <cit.> for example) that if H is a minimal triangulation of G then every minimal separator of H is a minimal separator of G. In fact, the set of minimal separators of H is a maximal set of pairwise non-crossing minimal separators of G, where two separators S and R cross each other if at least two components of G ∖ S intersects R. Triangulations and tree-decompositions are closely related. For a tree-decomposition T of G, let (G, T) denote the graph obtained from G by filling every bag of T into a clique. Then, it is straightforward to see that (G, T) is a triangulation of G. Conversely, for each chordal graph H, consider a tree on the set of all maximal cliques of H such that if X, Y ∈ are adjacent to each other then X ∩ Y is a minimal separator of H. Such a tree is called a clique tree of H. It is straightforward to verify that a clique tree T of a triangulation H of G is a tree-decomposition of G and that (G, T) = H. We call a tree-decomposition T of G minimal if it is a clique tree of a minimal triangulation of G. It is clear that there is a minimal tree-decomposition of G of width (G), since for every tree-decomposition T of G, there is a minimal triangulation H of G that is a subgraph of (G, T) and every clique tree T' of H has w(T') ≤ w(T). A vertex set X ⊆ V(G) is a potential maximal clique, PMC for short, of G, if X is a maximal clique in some minimal triangulation of G. We denote by Π(G) the set of all potential maximal cliques of G. By definition, every bag of a minimal tree-decomposition of G belongs to Π(G). *Bouchitté-Todinca dynamic programming For each Π⊆Π(G), say that Π admits a tree-decomposition T of G if every bag of T belongs to Π. Let _Π(G) denote the set of all tree-decompositions of G that Π admits and let _Π(G) denote the smallest k such that there is T ∈_Π(G) of width k; we set _Π(G) = ∞ if _Π(G) = ∅. The treewidth algorithm of Bouchitté and Todinca <cit.> is based on the observation that (G) = _Π(G)(G). Given G, their algorithm first constructs Π(G) and then search through _Π(G)(G) by dynamic programming (BT dynamic programming) to find T of width _Π(G)(G). As observed in <cit.>, BT dynamic programming can be used to compute _Π(G) for an arbitrary subset Π of Π(G) to produce an upper bound on (G). As we extensively use this idea, we describe how it works here. Fix Π⊆Π(G) such that _Π(G) is non-empty. To formulate the recurrences in BT dynamic programming, we need some definitions. A vertex set B of G is a block if B is connected and either N_G(B) is a minimal separator or is empty. As we are assuming that G is connected, B = V(G) in the latter case. A partial tree-decomposition of a block B in G is a tree-decomposition of G[B ∪ N_G(B)] that has a bag containing N_G(B), called the root bag of this partial tree-decomposition. Note that a partial tree-decomposition of block V(G) is a tree-decomposition of G. For graph G and block B, let _Π(B, G) denote the set of all partial tree-decompositions of B in G all of whose bags belong to Π and, when this set is non-empty, let _Π(B, G) denote the smallest k such that there is T ∈_Π(B, G) with w(T) = k; if _Π(B, G) is empty we set _Π(B, G) = ∞. A PMC X of G is a cap of block B if N_G(B) ⊆ X and X ⊆ B ∪ N_G(B). Note that a cap of B is a potential root bag of a partial tree-decomposition of B. For each block B, let _Π(B) denote the set of all caps of B belonging to Π. Recall that, for each vertex set U ⊆ V(G), (U) denotes the set of components of G[U]. The following recurrence holds. 
_Π(G, B) = min_X ∈_Π(B)max{|X| - 1, max_C ∈(B ∖ X)_Π(G, C)}} BT dynamic programming evaluates this recurrence for blocks in the increasing order of cardinality and obtains _Π(G) = _Π(G, V(G)). Tracing back the recurrences, we obtain a tree-decomposition T ∈_Π(G) with w(T) = _Π(G). Tamaki's PID algorithm <cit.>, unlike the original algorithm of Bouchitté and Todinca <cit.>, does not construct Π(G) before applying dynamic programming. It rather uses the above recurrence to generate relevant blocks and PMCs. More precisely, PID is for the decision problem whether (G) ≤ k for given G and k and it generates all blocks C with (C, G) ≤ k using the recurrence in a bottom up manner. We have (G) ≤ k if and only if V(G) is among those generated blocks. *Contractors and contractions To extend the notation G / e of a contraction by an edge to a contraction by multiple edges, we define contractors. A contractor γ of G is a partition of V(G) into connected sets. For contractor γ of G, the contraction of G by γ, denoted by G / γ, is the graph obtained from G by contracting each part of γ to a single vertex, with the adjacency inherited from G. For notational convenience, we also view a contractor γ as a mapping from V(G) to {1, 2, …, m}, the index set of the parts of the partition γ. In this view, the vertex set of G / γ is {1, 2, …, m} and γ(v) for each v ∈ V(G) is the vertex of G / γ into which v is contracted. For each w ∈ V(G / γ), γ^-1(w) is the part of the partition γ that contracts to w. For U ⊆ V(G / γ), we define γ^-1(U) = ⋃_w ∈ Uγ^-1(w). § MAIN ALGORITHM The pseudo code in Algorithm <ref> shows the main iteration of our treewidth algorithm. It starts from a greedy upper bound and repeatedly improves the upper bound by algorithm RTW. The call RTW(G, k, Π), where Π⊆Π(G) and _Π(G) ≤ k + 1, decides if (G) ≤ k. If (G) ≤ k, it returns YES with certificate Π' ⊆Π(G) such that _Π'(G) ≤ k; otherwise it returns NO with certificate H, a minimal contraction of G such that (H) = k + 1. The pseudo code in Algorithm <ref> describes RTW in its basic form. We sketch here the functions of subalgorithms used in this algorithm. More details can be found in subsequent sections. Our method of local search in the space of sets of PMCs is a heuristic variant, which we call HPID, of the PID algorithm due to Tamaki <cit.>. PID constructs partial tree-decompositions of width ≤ k using the recurrence of BT dynamic programming in a bottom up manner to exhaustively generate all partial tree-decompositions of width ≤ k, so that we have a tree-decomposition of width ≤ k if and only if (G) ≤ k. HPID uses the same recurrence to generate partial tree-decompositions of width ≤ k but the aim is to quickly generate a tree-decomposition of G of width ≤ k and the generation order it employs does not guarantee exhaustive generation. The state of HPID computation is characterized by the set Π of root bags of the generated partial tree-decompositions. Recall that the bags of the set of partial tree-decompositions generated by the BT recurrence are PMCs, so Π⊆Π(G). Using BT dynamic programming, we can reconstruct the set of partial tree-decompositions from Π, if needed, in time linear in |Π| and polynomial in |V(G)|. Thus, we may view HPID as performing a local search in the space of sets of PMCs. This view facilitates communications between HPID and external upper bound heuristics. Those communications are done through the following operations. We consider each invocation of HPID as an entity having a state. 
Let s denote such an invocation instance of HPID for G and k. Let Π(s) denote the set of PMCs that are root bag of the partial tree-decompositions generated so far by s. The following operations are available. s.() returns _Π(s)(G). s.() returns the set of PMCs that are the root bags of the partial tree-decompositions of width ≤ s.width() generated so far by s. s.(Π) updates Π(s) to Π(s) ∪Π and updates the set of partial tree-decompositions by BT dynamic programming. s.() generates more partial tree-decompositions under the specified budget, in terms of the number of search step spent for the generation. s.() exhaustively generates remaining partial decompositions of width ≤ k, thereby deciding if (G) ≤ k. See Section <ref> for details of these procedures. We use two additional procedures. (Π, G, e), where e is an edge of G and Π⊆Π(G /e), returns Π' ⊆Π(G) such that _Π'(G) ≤_Π(G / e) + 1 and possibly _Π'(G) ≤_Π(G / e) (Π, G, e), where e is an edge of G and Π⊆Π(G), returns Π' ⊆Π(G / e) such that _Π'(G / e) ≤_Π(G) and possibly _Π'(G / e) ≤_Π(G) - 1 See Sections <ref> and <ref> for details of these procedures. Given these procedures, RTW works as follows. It receives G, k, and Π such that _Π(G) ≤ k + 1 and creates an HPID instance s for G and k and let it import Π. If it turns out that _Π(G) ≤ k at this point, then RTW returns YES with s.(), a subset Π' of Π such that _Π'(G) ≤ k, as the certificate. Otherwise, it orders the edges of G in such a way to increase the chance of having an edge e width (G / e) = k + 1 early in the list if any. Then it iterates over those edges. To process e_i it makes a recursive call RTW(G / e_i, k, Θ) where Θ⊆Π(G / e) is obtained by "contracting" Π. If the result is negative, the answer of RTW(G, k , Π) is also negative with the same certificate. If the result is positive with Ψ⊆Π(G / e_i), then Ψ is "uncontracted" to Ψ' ⊆Π(G), which is imported to s. Then it lets s advance its PID state under a budget proportional to i. If s succeeds in finding tree-decompositions of G of width k, then RTW returns YES with the certificate constructed by s. Otherwise, it proceeds to the next edge. When it has tried all edges without resolving the question, it lets s finish the exhaustive generation of partial tree-decompositions to answer the question. If it turns out that (G) ≤ k, it returns YES with the certificate provided by s. Otherwise it returns NO with the certificate being G itself. The correctness of this algorithm can be proved by straightforward induction and does not depend on the procedures , , or except that the procedure (Π, G, e) must return Θ such that _Θ(G /e) ≤_Π(G) as promised. On the other hand, practical efficiency of this algorithm heavily depends on the performances of these procedures. If they collectively work really well, then we expect that the for loop would exit after trying only a few edges, assuming (G) ≤ k, and s.() would be called only if (G) = k + 1 and (G / e) ≤ k for every edge e. On the other extreme of perfect incapability of these procedures, the for loop would always run to the end and s.() would be called in every call of RTW(G, k, Π), making the recursion totally meaningless. Our efforts are devoted to developing effective methods for these procedures. § HEURISTIC PID In this section, we give some details of the HPID algorithm. In particular, we describe in some details how the procedures () and () work. We first describe how we use Recurrence <ref> to generate a new partial tree-decomposition from existing ones. 
The method basically follows that of PID <cit.> but there are some differences. The most important difference is in the manners we turn tree-decompositions into rooted tree-decompositions, which is done in order to restrict partial tree-decompositions to be generated. In the original PID, the choice of roots heavily depends on the total order assumed on V(G). For the sake of interactions of HPID with other upper bound components through PMCs, we prefer the choice to depend less on the vertex order and thus be fairer for vertices. Fix G and k. We assume a total order < on V(G) and say that U ⊆ V(G) is larger then V ⊆ V(G) if |U| > |V| or |U| = |V| and U is lexicographically larger than V. We say that a block B of G is feasible if (B, G) ≤ k. We use recurrence <ref>, with Π set to Π(G), to generate feasible blocks. Our goal is to see if V(G) is feasible and, to this end, it turns out that we do not need to generate all feasible blocks: it suffices to generate only small feasible blocks except for V(G) itself, where a block B is small if there is some block B' with N_G(B') = N_G(B) such that B' > B. To see this, we construct rooted tree-decompositions from minimal triangulations of G. Let H be a minimal triangulation of G. We define a rooted tree D_H on the set of maximal cliques of H, which may be denoted by Π(H) because every PMC of H is a maximal clique. For X ∈Π(H) and a minimal separator S ⊂ X, let B(S, X) denote the full component of S that intersects X. Note that B(S, X) is a block since N_G(B(S, X)) = S is a minimal separator. For X, Y ∈Π(H) such that X ∩ Y is a separator of H, let S(X, Y) denote the inclusion-minimal minimal separator of H contained in X ∩ Y. Such an inclusion minimal separator is unique: if distinct S_1, S_2 ⊆ X ∩ Y are both inclusion-minimal separators, then both of the strict inclusions of B(S_1, X) ⊂ B(S_2, Y) and B(S_2, Y) ⊂ B(S_1, X) must hold, which is impossible. We first define a dag W_H on Π(H): for distinct X, Y ∈Π(H)there, W_H has an edge from X to Y if X ∩ Y is a separator of H and B(S(X, Y), Y) is larger than B(S(X, Y), X). W_H is acyclic. Suppose, for contradiction, there is a directed cycle in W_H and let X_1, …, X_m, X_m+1 = X_1 be the shortest such. Let S = S(X_1, X_2). It cannot be that m = 2, since then we would have both B(S, X_1) < B(S, X_2) and B(S, X_1) > B(S, X_2). Let i ≥ 2 be such that X_i ⊈B(S, X_1) ∪ S and X_i + 1⊆ B(S, X_1) ∪ S. Such i must exist since X_2 ⊈B(S, X_1) ∪ S and X_m + 1⊆ B(S, X_1) ∪ S. Let S' = S(X_i, X_i + 1). Since every block of H is either contained in B(S, X_1) or disjoint from it, we have B(S', X_i) ∩ B(S, X_1) = ∅ and B(S', X_i + 1) ⊆ B(S, X_1). Since S' separates these blocks, we must have S' ⊆ N_G(B(S, X_1)) = X_1 ∩ X_2. Since S and S' are both inclusion-minimal, we must have S = S' as argued above. Then, we have B(S, X_2) > B(S, X_1) ⊇ B(S, X_i + 1) > B(S, X_i) and therefore we have an edge from X_i to X_2, contradicting the assumption that our directed cycle is the shortest. Now we construct a directed tree D_H on Π(H) with a unique sink. As W_H is acyclic, it has a sink X_0. Let denote the set of components of G ∖ X_0. Each B ∈ is a block since N_H(B) ⊆ X_0 is a minimal-separator. Let Π(H, B) denote the set of maximal cliques of H contained in B ∪ N_H(B). Note that Π(H, B), B ∈, partitions Π(H) ∖{X_0}. For each such B, we construct a directed tree D_H(B) on Π(H, B) with unique sink X_B such that W_H has an edge from X_B to X_0. Combining D_H(B), B ∈, with these edges from X_B to X_0, we obtain D_H. 
It remains to show how we construct D_H(B). Observe that every B ∈ is small. For each small block B, we construct a directed tree D_H(B) on Π(H, B) with sink X_B such that N_H(B) ⊆ X_B inductively as follows. Let _B denote the set of caps of B belonging to Π(H). By the definition of caps, each X ∈_B satisfies N_H(B) ⊆ X and X ⊆ B ∪ N_H(B). The subgraph of W_H induced by _B has a sink X_B since W_H is acyclic. Let (X_B, B) denote the set of blocks of H that are components of B ∖ X_B. For each B' ∈(X_B, B), we have B' ⊆ B and, moreover, for each block C ≠ B of N_H(B), we have C ⊆ C(N_H(B'), X_B). Therefore, since there is a block C of N_H(B) such that C > B as B is small, we have C(N_H(B'), X_B) > B' for each B', that is, B' is small. By the induction hypothesis, we have a directed tree D_H(B') on Π(H, B') with sink X_B' such that N_H(B') ⊆ X_B', for each B' ∈(X_B, B). Combining D_H(B'), B' ∈(X_B, B), with an edge from each X_B' to X_B, we obtain the desired directed tree D_H(B). Let H be a minimal triangulation of G such that (H) = (G). In view of the existence of the rooted clique tree D_H of H, feasibility of V(G) can be determined by generating only small feasible blocks using recurrence <ref> and then seeing if the same recurrence can be used to show (G) = (V(G), G) = k. Thus, each HPID instance s maintains a set of small feasible blocks. To generate a new feasible block to add to , it invokes a backtrack search procedure (B) on a block B ∈ which enumerates ⊆ such that
* B ∈ and B is the largest block in and
* there is a block B_ that is either small or is equal to V(G) and a PMC X_∈Π(G) such that (B_∖ X_) =.
For each such found, we add B_ to since the recurrence <ref> shows that B_ is feasible. Procedure s.() uses this search procedure as follows. It uses a priority queue Q of small feasible blocks, in which larger blocks are given higher priority. It first puts all blocks into Q. Then, it dequeues a block B, calls (B), and adds newly generated feasible blocks to Q. This is repeated until either Q is empty or the cumulative number of search steps exceeds . Because of the queuing policy, there is a possibility of V(G) being found feasible, when it is indeed feasible, even with a small budget. Procedure s.() works similarly, except that smaller blocks are given higher priority in the queue and the budget is unlimited, to generate all small feasible blocks and V(G) if it is feasible. An alternative way to implement the () procedure is to call another exact treewidth algorithm based on BT dynamic programming, such as SemiPID <cit.>, to decide if V(G) is feasible. The implementation used in our experiments uses this alternative method.
§ MINIMALIZING TREE-DECOMPOSITIONS
Given a graph G and a triangulation H of G, minimalizing H means finding a minimal triangulation H' of G such that E(H') ⊆ E(H). Minimalizing a tree-decomposition T of G means finding a minimal tree-decomposition T' of G whose bags are maximal cliques of the minimalization of (G, T). We want to minimalize a tree-decomposition for two reasons. One is our decision to represent a set of tree-decompositions by a set of PMCs. Whenever we get a tree-decomposition T by some method that may produce non-minimal tree-decompositions, we minimalize it to make all bags PMCs. Another reason is that minimalization may reduce the width. We have two procedures for minimalization. When the second reason is of no concern, we use (T), which is an implementation of one of the standard triangulation minimalization algorithms due to Blair et al. <cit.>.
When the second reason is important, we use (T), which finds a minimalization of T of the smallest width. This task is NP-hard, but the following algorithm works well in practice. Say a minimal separator of G is admissible for T if it is a clique of (G, T). Observe that, for every minimalization T' of T, every separator used by T' is a minimal separator of G admissible for T. We first construct the set of all minimal separators of G admissible for T. Then we apply the SemiPID variant of BT dynamic programming, due to Tamaki <cit.>, to this set and obtain a tree-decomposition of the smallest width, among those using only admissible minimal separators. Because of the admissibility constraint, the number of minimal separators is much smaller and both the enumeration part and the SemiPID part run much faster in practice than in the general case without such constraints.
§ UNCONTRACTING PMCS
In this section, we develop an algorithm for procedure (G, Π, e). In fact, we generalize this procedure to (G, Π, γ), where the third argument is a general contractor of G. Given a graph G, a contractor γ of G, and Π⊆Π(G / γ), we first find tree-decompositions T ∈_Π(G / γ) that minimize w(γ^-1(T)). This is done by BT dynamic programming over _Π(G / γ), using bag weights defined as follows. For each weight function ω that assigns weight ω(U) to each vertex set U, define the width of tree-decomposition T with respect to ω, denoted by (T, ω), to be the maximum of ω(X) over all bags of T. Thus, if ω is defined by ω(U) = |U| - 1, then (T, ω) is the ordinary width of T. A natural choice for our purposes is to set ω(X) = |γ^-1(X)| - 1. Then, the width of a tree-decomposition T of G / γ with respect to this bag weight is w(γ^-1(T)). Therefore, BT dynamic programming with this weight function ω gives us the desired tree-decomposition in _Π(G / γ). We actually use a slightly modified weight function, considering the possibility of reducing the width of γ^-1(T) by minimalization. Let T ∈_Π(G / γ) and X a bag of T. If X' = γ^-1(X) is a PMC of G, then every minimalization of γ^-1(T) must contain X' as a bag. Therefore, if |X'| > k + 1 then it is impossible that the width of γ^-1(T) is reduced to k by minimalization. On the other hand, if X' is not a PMC, then no minimalization of γ^-1(T) has X' as a bag and there is a possibility that there is a minimalization of γ^-1(T) of width k even if |X'| > k + 1. These considerations lead to the following definition of our weight function ω: ω(U) = 2|γ^-1(U)| if γ^-1(U) is a PMC of G, and ω(U) = 2|γ^-1(U)| - 1 otherwise. Algorithm <ref> describes the main steps of procedure (Π, G, γ).
§ CONTRACTING PMCS
The algorithm for procedure is similar to that for . Given a graph G, Π⊆Π(G), and a contractor γ of G, we first find tree-decompositions T ∈_Π(G) that minimize w(γ(T)). This is done by BT dynamic programming with the following weight function ω: ω(U) = 2|γ(U)| if γ(U) is a PMC of G / γ, and ω(U) = 2|γ(U)| - 1 otherwise. Then, we minimalize those tree-decompositions and collect the bags of those minimalized tree-decompositions.
§ SAFE SEPARATORS
Bodlaender and Koster <cit.> introduced the notion of safe separators for treewidth. Let S be a separator of a graph G. We say that S is safe for treewidth, or simply safe, if (G) = (G ∪ K(S)). As every tree-decomposition of G ∪ K(S) must have a bag containing S, (G) is the larger of |S| - 1 and max{(G[C ∪ N_G(C)] ∪ K(N_G(C)))}, where C ranges over all the components of G ∖ S. Thus, the task of computing (G) reduces to the task of computing (G[C ∪ N_G(C)] ∪ K(N_G(C))) for every component C of G ∖ S.
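This reduction is mechanical; as a minimal illustration (with the adjacency-set representation assumed by the earlier sketches, not taken from our implementation), the following Python code splits a graph along a separator S into the filled parts whose treewidths can then be computed independently and combined with |S| - 1.

```python
def components(adj, vertices):
    """Connected components of the subgraph induced by `vertices`."""
    vertices = set(vertices)
    comps, seen = [], set()
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v] & vertices:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def split_on_separator(adj, S):
    """For each component C of G - S, return G[C ∪ N(C)] with N(C)
    filled into a clique."""
    S = set(S)
    parts = []
    for C in components(adj, set(adj) - S):
        N = set().union(*(adj[v] for v in C)) - C        # N(C), a subset of S
        part_vs = C | N
        h = {v: adj[v] & part_vs for v in part_vs}       # induced subgraph
        for u in N:                                      # fill N(C) into a clique
            h[u] |= N - {u}
        parts.append(h)
    return parts
```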
The motivation for looking at safe separators of a graph is that there are sufficient conditions for a separator being safe and those sufficient conditions lead to an effective preprocessing method for treewidth computation. We use the following two sufficient conditions. A vertex set S of G is an almost-clique if S ∖{v} is a clique for some v ∈ S. Let R be a vertex set of G. A contractor γ of G is rooted on R if, for each part C of γ, |C ∩ R| = 1. Bodlaender and Koster <cit.> * If S is an almost-clique minimal separator of G, then S is safe. * Let lb be a lower bound on (G). Let C ⊆ V(G) be connected and let S = N_G(C). Suppose (1) (G[C ∪ S] ∪ K(S)) ≤ lb and (2) G[C ∪ S] has a contractor γ rooted on S such that G[C ∪ S] / γ is a complete graph. Then, S is safe. We use safe separators both for preprocessing and during recursion. For preprocessing, we follow the approach of <cit.>: to preprocess G, we fix a minimal triangulation H of G and test the sufficient conditions in the theorem for each minimal separator of H. Since deciding if the second condition holds is NP-complete, we use a heuristic procedure. Let be the set of all minimal separators of H that are confirmed to satisfy the first or the second condition of the theorem. Let be a tree-decomposition of G that uses all separators of but no other separators. Then, is what is called a safe-separator decomposition in <cit.>. A tree-decomposition of G of width (G) can be obtained from by replacing each bag X of by a tree-decomposition of G[X] ∪⋃_C ∈(G ∖ X) K(N_G(C)), the graph obtained from the subgraph of G induced by X by filling the neighborhood of every component of G ∖ X into a clique. Safe separators are also useful during the recursive computation. Given G, we wish to find a contractor γ of G such that (G / γ) = (G), so that we can safely recurse on G / γ. The second sufficient condition in Theorem <ref> is useful for this purpose. Let C, S, and γ be as in the condition. We construct γ' such that (G / γ') = (G) as follows. The proof of this sufficient condition is based on the fact that we get a clique on S when we apply the contractor γ on G[C ∪ S]. Thus, we may define a contractor γ' on G such that G / γ' = (G ∖ C) ∪ K(S). As each tree-decomposition of (G / γ) can be extended to a tree-decomposition of G, using the tree-decomposition of G[C ∪ S] ∪ K(S) of width at most lb ≤(G), we have (G / γ') = (G) as desired. When the recursive call on (G / γ') returns a certificate Π⊆Π(G / γ') such that _Π(G / γ') ≤ k, we need to "uncontract" Π into a Π' ⊆Π(G) such that _Π'(G) ≤ k. Fortunately, this can be done without invoking the general uncontraction procedure. Observe first that each PMC in Π naturally corresponds to a PMC of (G ∖ C) ∪ K(S), which in turn corresponds to a PMC of G contained in V(G) ∖ C. Let Π_1 be the set of those PMCs of G to which a PMC in Π corresponds in that manner. Let Π_2 ⊆Π(G[C ∪ S] ∪ K(S)) be such that _Π_2(G[C ∪ S] ∪ K(S)) ≤ lb. Similarly as above, each PMC of Π_2 corresponds to a PMC of G contained C ∪ S. Let Π'_2 denote the set of those PMCs of G to which a PMC in Π_2 corresponds. As argued above, a tree-decomposition in _Π((G ∖ C) ∪ K(S)) of (G ∖ C) ∪ K(S) and a tree-decomposition in _Π_2(G[C ∪ S] ∪ K(S)) of G[C ∪ S] ∪ K(S) can be combined into a tree-decomposition belonging to _Π'_2(G) of width ≤ k. Thus, Π'_2 is a desired certificate for (G) ≤ k. § EDGE ORDERING We want an edge e such that (G / e) = (G), if any, to appear early in our edge order. 
Heuristic criteria for such an ordering have been studied in the classic work on contraction-based lower bounds <cit.>. Our criterion is similar to those but differs in that it derives from a special case of safe separators. The following is a simple corollary of Theorem <ref>. Let e = {u, v} be an edge of G and let S = N_G(v). Suppose S ∖{u} is a clique of G. Then, we have (G / e) = (G). If e satisfies the above condition, then we certainly put e first in the order. Otherwise, we evaluate e in terms of its closeness to this ideal situation. Define the deficiency of graph H, denoted by (H), to be the number of edges of its complement graph. For each ordered pair (u, v) of adjacent vertices of G, let _G(u, v) denote (G[N_G(v) ∪{v}] / {u, v}). Note that _G(u, v) = 0 means that the condition of the above proposition is satisfied with S = N_G(v). Thus, we regard e = {u, v} as preferable if either _G(u, v) or _G(v, u) is small. We relativize the smallness with respect to the neighborhood size, so the value of edge e = {u, v} is min{defic_G(u, v) / |N_G(v)|, defic_G(v, u) / |N_G(u)|}. We order edges so that this value is non-decreasing.
§ SUPPRESSED EDGES
Consider the recursive call on G / e from the call of RTW on G, where e is an edge of G. Suppose there is an ancestor call on G' such that G = G' / γ and an edge e' of G' such that γ maps the ends of e' to the ends of e. If the call on G' / e' has been made and it is known that (G' / e') ≤ k then we know that (G / e) ≤ k, since G / e is a contraction of G' / e'. In this situation, we say that e is suppressed by the pair (G', e'). We may omit the recursive call on G / e without compromising the correctness if e is suppressed. For efficiency, however, it is preferable to obtain the certificate Π⊆Π(G / e) for (G/ e) ≤ k and feed the uncontraction of Π to the HPID instance on G to help progress. Fortunately, this can be done without making the recursive call on G / e as follows. Suppose e is suppressed by (G', e') and let Π' ⊆Π(G' / e') such that _Π'(G' / e') ≤ k. Let γ' be the contractor of G' / e' such that G' / e' / γ' = G / e: such γ' is straightforward to obtain from γ. Letting Π = (Π', G' / e', γ'), we obtain Π⊆Π(G / e) such that _Π(G/ e) ≤ k.
§ EXPERIMENTS
We have implemented RTW and evaluated it by experiments. The computing environment for our experiments is as follows. CPU: Intel Core i7-8700K, 3.70GHz; RAM: 64GB; Operating system: Windows 10Pro, 64bit; Programming language: Java 1.8; JVM: jre1.8.0_271. The maximum heap size is set to 60GB. The implementation uses a single thread except for additional threads that may be invoked for garbage collection by the JVM. Our primary benchmark is the bonus instance set of the exact treewidth track of the PACE 2017 algorithm implementation challenge <cit.>. This set, consisting of 100 instances, is intended to be a challenge for future implementations and, as a set, is hard for the winning solvers of the competition. On the platform of the competition, about half of the instances took more than one hour to solve and 15 instances took more than a day or were not solvable at all. We have run our implementation on these instances with the timeout of 10000 seconds each. For comparison, we have run Tamaki's PID solver <cit.>, which is one of the PACE 2017 winners, available at <cit.> and his new solver <cit.> available at <cit.>. Figure <ref> summarizes the results on the bonus set. In contrast to the PID solver, which solves only 68 instances within the timeout, RTW solves 98 instances.
Moreover, it solves 72 of them in 100 seconds and 92 of them in 1000 seconds. Thus, we can say that our algorithm drastically extends the scope of practically solvable instances. Tamaki's new solver also quickly solves many instances that are hard for the PID solver and is indeed faster than RTW on many instances. However, its performance in terms of the number of instances solvable in practical time is inferior to that of RTW. We have also run the solvers on the competition set of the exact treewidth track of PACE 2017. This set, consisting of 200 instances, is relatively easy and the two winning solvers of the competition solved all of the instances within the allocated timeout of 30 minutes for each instance. Figure <ref> summarizes the results on the competition set. Somewhat expectedly, PID performs the best on this instance set. It solves almost all instances within 200 seconds per instance, while RTW fails to do so on about 30 instances. There are two instances that RTW fails to solve in 10000 seconds and one instance it fails to solve at all. Tamaki's new solver shows more weakness on this set, failing to solve about 50 instances in the timeout of 10000 seconds. These results seem to suggest that RTW and PID should probably complement each other in a practical treewidth solver.
§ CONCLUSIONS AND FUTURE WORK
We developed a treewidth algorithm RTW that works recursively on contractions. Experiments show that our implementation solves many instances in practical time that are hard to solve for previously published solvers. RTW, however, does not perform well on some instances that are easy for conventional solvers such as PID. A quick compromise would be to run PID first with an affordable timeout and use RTW only when it fails. It would be, however, interesting and potentially fruitful to closely examine those instances that are easy for PID and hard for RTW and, based on such observations, to look for a unified algorithm that avoids the present weakness of RTW.
http://arxiv.org/abs/2307.02735v2
20230706024253
Decomposing Triple-Differences Regression under Staggered Adoption
[ "Anton Strezhnev" ]
stat.ME
[ "stat.ME" ]
Anton Strezhnev. Assistant Professor, University of Chicago Department of Political Science. Email: astrezhnev@uchicago.edu. I thank Andy Eggers, Justin Grimmer, Bobby Gulotty, Silvia Kim, Apoorva Lal, Molly Offer-Westort, Miguel Rueda, Yiqing Xu and Arthur Yu, as well as participants at the Stanford Political Science Methods Workshop, the NYU-AD Theory in Methods Workshop, and the 2023 American Causal Inference Conference for helpful discussions and comments.
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The triple-differences (TD) design is a popular identification strategy for causal effects in settings where researchers do not believe the parallel trends assumption of conventional difference-in-differences (DiD) is satisfied. TD designs augment the conventional 2x2 DiD with a “placebo" stratum – observations that are nested in the same units and time periods but are known to be entirely unaffected by the treatment. However, many TD applications go beyond this simple 2x2x2 and use observations on many units in many “placebo" strata across multiple time periods. A popular estimator for this setting is the triple-differences regression (TDR) fixed-effects estimator – an extension of the common “two-way fixed effects" estimator for DiD. This paper decomposes the TDR estimator into its component two-group/two-period/two-strata triple-differences and illustrates how interpreting this parameter causally in settings with arbitrary staggered adoption requires strong effect homogeneity assumptions as many placebo DiDs incorporate observations under treatment. The decomposition clarifies the implied identifying variation behind the triple-differences regression estimator and suggests researchers should be cautious when implementing these estimators in settings more complex than the 2x2x2 case. Alternative approaches that only incorporate “clean placebos" such as direct imputation of the counterfactual may be more appropriate.
The paper concludes by demonstrating the utility of this imputation estimator in an application of the “gravity model" to the estimation of the effect of the WTO/GATT on international trade. 1.5 § INTRODUCTION Despite its growing popularity in applied work, the triple-differences design remains understudied. While differences-in-differences (DiD) designs are ubiquitous in applied causal research, their validity hinges on an assumption of “parallel counterfactual trends" among treated and control groups. In the simple 2x2 setting with two time periods and a treated and control group, this assumption states that the treated group would have followed the same trend over time as the control group had the treated group instead received control. This may not hold if the types of units in the treated group are differentially exposed to some time-dependent shock. The triple-differences estimator attempts to address violations of this assumption by incorporating an additional difference-in-differences term that captures these shocks. In the simplest 2x2x2 setting, this can be understood as a difference between a “primary" difference-in-differences and a “placebo" difference-in-differences where the placebo DiD is constructed using observations that retain the same structure as the primary DiD but are known to be unaffected by the treatment. For example, gruber1994incidence, noted as one of the first studies to explicitly use a triple-differences strategy by olden2020triple, examines the effect of state-mandated maternity benefits on labor market outcomes. The primary analysis uses a conventional difference-in-differences design, examining outcomes among individuals at risk of having a child across different states over time. Some states expanded insurance coverage mandates for maternity care while others did not. This primary analysis is augmented by a second, placebo, difference-in-difference which leveraged the fact that other individuals who were unaffected by the treatment could be observed in the same states and same time periods – individuals who are known to be incapable of becoming pregnant. Under the assumption that the violation of the parallel trends assumption in the “primary" stratum (individuals considered at-risk of childbirth) is equivalent to the parallel trends violation in the “placebo" stratum (individuals not at risk), subtracting the placebo difference-in-difference estimate from the primary difference-in-difference identifies an average treatment effect on the treated. Triple-difference designs leverage a structural feature common to many datasets where units may belong to multiple overlapping sub-groups that differ in their exposure to treatment.[Another distinct use of a “triple-differences" approach involves incorporating additional pre-treatment periods in a conventional differences-in-differences setting as in egami2023using. This paper does not address this estimator.] In the case of gruber1994incidence individuals are nested within both state and “risk-for-pregnancy" groupings. In this case, the second grouping is binary and is known to never receive the treatment. A similar approach is seen in gingerich2019ballot which examines the effect of the staggered introduction of the secret ballot on voter behavior in elections for Brazil's Chamber of Deputies. Here, the placebo stratum consists of elections to the Senate for which the ballot reforms had already been uniformly implemented. 
Likewise, agan2018ban study the effect of “ban the box" policies – policies that prevent employers from asking prospective employees about criminal records – on racial discrimination. Here, the triple-difference “placebo" group consists of those employers that never asked about criminal records even prior to the ban and are therefore plausibly unaffected by changes in the policy. However, not all triple-differences designs restrict themselves to a single treated stratum and a single placebo stratum. A common triple-differences setting examines outcomes observed at the individual or firm level where the individual or firm is located in a particular state and also belongs to a particular industry. For example marchingiglio2019employment uses a triple-differences design to estimate the effect of gender-specific minimum wage laws in the early 20th century United States. These laws were implemented at the state level and typically targeted specific industries that employed a larger share of women. As a result, treatment adoption is jointly determined by both the state and their industry of employment. Unlike the case of the binary placebo grouping in gruber1994incidence and agan2018ban where all but the first group are entirely unaffected by treatment, different industries are exposed to treatment at different times and across different states. In such a design, each industry contains its own, separate, staggered difference-in-difference design where the treatment assignment varies across state and time.[Equivalently, one can consider separate, state-specific difference-in-differences designs that leverage treatment variation across industry and time – the designation of which dimension determines the different strata is arbitrary.] In this more general staggered triple-differences setting, where the times at which treatment is initiated can vary across both of the overlapping sub-groups, researchers have typically relied on the “triple-differences regression" (TDR) specification to estimate a single summary treatment effect parameter. This is a “static" fixed effects regression with a unit-specific intercept and separate group-time fixed effects for each of the sub-groups. Notably, this regression specification appears in many settings that do not explicitly defend a triple-differences identification strategy. For example, the canonical gravity model regression (<cit.>; <cit.>) in research on international trade has precisely this structure with each unit i consisting of a single dyad with a sender (or exporter) s and receiver (or importer) r. Such models are frequently used to study the effects of particular dyadic-level interventions, such as trade agreements or border restrictions, on trade flows (e.g. <cit.>; <cit.>). In the trade setting, this regression models the outcome Y_it for dyad i at time t using a three-way fixed effects specification: Y_it = τ D_it + α_s(i) r(i) + γ_s(i),t + δ_r(i), t + ϵ_it where D_it is an indicator for whether dyad i is under treatment at time t, s(i) denotes the sender country associated with dyad i and r(i) denotes the receiver country associated with dyad i. Researchers interpret estimates of τ as an estimate of some average treatment effect. In the canonical 2x2x2 setting with a treated group and a control group, two time periods, and two overlapping strata, the coefficient on τ̂ is equivalent to the simple triple-differences estimator of the average treatment effect on the treated olden2020triple. 
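To fix ideas, the following Python sketch computes the OLS estimate of τ from the static specification above by materializing the three families of fixed effects as dummy variables. The long data frame and its column names 'y', 'd', 's', 'r', 't' are assumptions of the sketch; applied work would normally absorb the fixed effects with a dedicated high-dimensional fixed-effects routine rather than construct dummies explicitly.

```python
import numpy as np
import pandas as pd

def tdr_tau_hat(df):
    """OLS estimate of tau in
    Y_it = tau*D_it + alpha_{s(i)r(i)} + gamma_{s(i),t} + delta_{r(i),t} + e_it.

    df is assumed to be a long panel with columns 'y', 'd', 's', 'r', 't'.
    """
    ids = df[["s", "r", "t"]].astype(str)
    fe = pd.get_dummies(
        pd.DataFrame({
            "sr": ids["s"] + "_" + ids["r"],   # dyad (unit) fixed effects
            "st": ids["s"] + "_" + ids["t"],   # sender-by-time fixed effects
            "rt": ids["r"] + "_" + ids["t"],   # receiver-by-time fixed effects
        }),
        drop_first=True,
    )
    X = np.column_stack([
        df["d"].to_numpy(float),               # treatment indicator
        np.ones(len(df)),                      # intercept
        fe.to_numpy(float),
    ])
    beta, *_ = np.linalg.lstsq(X, df["y"].to_numpy(float), rcond=None)
    return beta[0]                             # coefficient on D_it
```

In the 2x2x2 case described above, this estimate coincides with the simple triple-differences estimator; the question taken up next is what it identifies under more general staggered adoption.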
However, it remains unclear whether this interpretation is valid under a more general data structure that allows for staggering in the adoption of treatment not only across time but also across each of the overlapping strata. Recent work on the two-way fixed effects (TWFE) estimator has shown that when treatment roll-out is staggered, the two-way fixed effects estimator can be biased for the average treatment effect on the treated even if parallel trends holds, unless additional strong effect homogeneity assumptions are made (<cit.>; <cit.>). This paper shows that similar problems arise when using the static triple-difference regression estimator under common staggered adoption designs. It develops a decomposition the style of goodman2021difference for this estimator and shows that the regression coefficient on the treatment can be partially decomposed into an average over 2x2x2 triple-difference terms. In each of these 2x2x2s, the first difference consists of the difference between the outcome of a unit under treatment and another unit under control within a given time period and stratum. The second difference is the difference in the outcome between those same units in the same stratum but in a time period where both are under the same treatment status (both control or both treated). These first two differences constitute the “primary" 2x2 DiDs. As in the case of TWFE, similar problems of “forbidden" comparisons arise due to the use of treated units in this second difference term (<cit.>; <cit.>). The third difference in the 2x2x2 is between a “primary" 2x2 DiD term and another 2x2 “placebo" DiD involving the same units and time periods but in a different stratum with a different distribution of treatment. In the case of the two-stratum treated/placebo triple-difference [e.g.][]gruber1994incidence, this placebo will always be a 2x2 where all unit-time periods are under control as the only other stratum that could be matched to a primary DiD never receives treatment. These are valid placebos even under effect heterogeneity as they incorporate no treated units. However, when treatment can be staggered arbitrarily, the placebo can also consist of 2x2 comparisons in other strata where some or all of the units are under treatment. These placebos are invalid in that they identify a combination of the bias in the primary DiD due to the violation of parallel trends and differences in treatment effects across units, time periods and strata. Notably, some of the primary DiD terms can themselves also act as placebos for other DiDs. Moreover, the absence of treatment staggering within stratum does not suffice to eliminate all invalid terms as it does in the two-way fixed effects setting. Even if treatment is not staggered within any single stratum, if the staggering differs across strata such that the treatment initiation times vary by strata or if some units that are untreated in one stratum are treated in another[The one exception to this is the case of a “pure placebo" stratum where no units are treated.] then the triple differences regression will remain be biased for a weighted average of treatment effects even when a stronger identifying assumption – parallel trends – holds. This paper builds on the recent developments in the identification and estimation of average treatment effects under difference-in-differences design assumptions in settings where treatment adoption is staggered over time (e.g. <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>). 
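The 2x2x2 building block just described can be written down directly. The following Python sketch computes a single such term from dictionaries of outcomes and treatment indicators keyed by (s, r, t) and flags whether the "placebo" 2x2 contains treated observations, the situation in which it is valid only under effect homogeneity. It is an illustration of the comparison being made, not an implementation of the full decomposition or its weights.

```python
def triple_diff_2x2x2(y, d, a, b, r1, r0, t0, t1):
    """One 2x2x2 triple-difference term.

    y, d: dicts mapping (s, r, t) to the outcome and 0/1 treatment status.
    a, b: the two units (s indices) being compared.
    r1: the stratum of the "primary" 2x2; r0: the comparison ("placebo") stratum.
    t0, t1: the two time periods, with t1 the period of the first difference.
    Returns (estimate, placebo_is_clean).
    """
    primary = (y[a, r1, t1] - y[b, r1, t1]) - (y[a, r1, t0] - y[b, r1, t0])
    placebo = (y[a, r0, t1] - y[b, r0, t1]) - (y[a, r0, t0] - y[b, r0, t0])
    placebo_cells = [(a, r0, t0), (a, r0, t1), (b, r0, t0), (b, r0, t1)]
    placebo_is_clean = all(d[c] == 0 for c in placebo_cells)
    return primary - placebo, placebo_is_clean
```

When every comparison stratum is a pure placebo, the flag always holds; under arbitrary staggering it frequently fails, which is the source of the bias highlighted by the decomposition.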
These closely related papers have all highlighted how commonly-used “two-way fixed effects" estimators fail to identify average treatment effects when not all units that adopt treatment adopt it at the same time when parallel trends assumptions hold but constant effects assumptions do not. This paper is closest in motivation to goodman2021difference which focuses on the static “two-way fixed effects" estimator and provides an explicit decomposition of this estimator in terms of the underlying 2x2 differences-in-differences that comprise it. This decomposition provides additional intuition for the source of bias in TWFE beyond the general problem of “negative weights" and illustrates the conditions under which the bias due to invalid or “forbidden" comparisons is likely to be large or small. Likewise, while existing work on differences-in-differences under staggered adoption has noted that “negative weights" are also likely to be a problem in the triple-differences borusyak2021revisiting, there has not been an explicit characterization of the source of the bias nor a discussion of what factors will accentuate it. The decomposition in this paper provides an answer to both of these questions. The remainder of the paper is organized as follows. Section <ref> sets out a general framework for defining target causal estimands within a staggered triple-differences design. It extends the “group-time" ATT framework of callaway2021difference and clarifies the necessary assumptions under which a stratum-specific group-time ATT can be identified non-parametrically by a 2x2x2 triple-difference. It provides an alternative interpretation of the triple-differences identifying assumption from olden2020triple that generalizes to the setting with many placebo strata and arbitrary treatment staggering. Specifically, the assumption allows for a violation of parallel trends between two treatment histories in one stratum, but requires that the violation is constant across matching units and time periods in other strata. Section <ref> presents the central contribution of this paper, a decomposition of the conventional triple-differences regression estimator into its component 2x2x2 comparisons. It shows how many of these 2x2x2 triple-difference comparisons are valid only under a strong constant treatment effects assumption as observations under treatment are used to estimate some of the “placebo" difference-in-difference terms. Finally, Section <ref> compares the conventional triple differences regression to an alternative approach based on direct imputation of the counterfactual by replicating the analysis in goldstein2007institutions of the effect of the WTO/GATT on bilateral trade. It suggests that existing studies in international trade that rely on the popular “gravity model" approach are likely biased for the average of group-time ATTs due to the presence of effect heterogeneity across time. Moreover, placebo tests using the imputation method suggest that the triple-differences identifying assumptions implied by the gravity model are likely violated in the case of WTO/GATT membership. The paper concludes with a discussion of how applied researchers should approach triple-differences regressions and what features of the data are likely to minimize or exacerbate the problems highlighted by the decomposition. 
§ DEFINING THE TRIPLE-DIFFERENCES ESTIMAND This section begins by defining the causal estimands of interest in the staggered triple-differences setting, following the framework of callaway2021difference for staggered differences-in-differences. Consider a set of T time periods denoted by t = 1, 2, …, T. For each unit i we observe both an indicator for whether that unit is under treatment at time t, denoted D_it, and an outcome denoted Y_it. Define a unit's treatment history as the vector of all treatment assignments from time t=1 to t=T: D⃗_i = {D_i1, D_i2, …, D_iT}. Under staggered adoption, all units are under control at time t=1 and can enter into treatment at any point between t = 2 through t=T. I assume that units entering treatment do not exit treatment (Assumption 1 of callaway2021difference): No treatment reversal D_i1 = 0 for all i. D_i,t-1 = 1 implies D_it = 1 In the triple-difference design, observations are nested within two groupings denoted s ∈{1, 2, …, S} and r ∈{1, 2, …, R}. For example, firms i can be nested within states s and industries r or, in studies of international trade, directed dyads can be defined by their sender s and receiver r combination. In the directed-dyad case, each unique combination of s and r identifies a single observation while in most other settings, multiple units can be grouped under the same s and r. Let s(i) denote the function that returns the grouping s to which unit i belongs and r(i) the function that returns the grouping r. In the standard triple-difference setting, I assume that all units with the same s and r have the same treatment history. No treatment variation conditional on s and r For any two units i, j, if s(i) = s(j) and r(i) = r(j), then D⃗_i = D⃗_j Following the approach in callaway2021difference, sun2021estimating and borusyak2021revisiting, one can summarize each unit's “treatment history" under the no-reversal assumption using a single scalar: the treatment initiation time. Let G_i denote the time period when unit i first initiates treatment. For units that remain under control for all time periods – the “never-treated" units – let G_i = ∞. Define potential outcomes Y_it(g) as a function of the treatment initiation time.[The potential outcomes can be more generally defined in terms of the entire treatment history vector d⃗ = {d_1, d_2, …, d_T}. However, in a staggered adoption design, the entire treatment history is summarized by the initiation time.] I make a standard SUTVA/Consistency assumption that the observed outcome Y_it for a unit that initiates treatment at G_i = g is equal to the potential outcome Y_it(g). Consistency/SUTVA Y_it = Y_it(g) if G_i = g While any unique combination of s and r might contain multiple observations indexed by i, when treatment is assigned only at the level of the joint grouping, it is also helpful to consider defining potential outcomes at the level of each unique s, r and t combination.[A similar question of aggregation and level of analysis arises when defining treatment effects for cluster-randomized experiments and selecting between analyzing either individual or the cluster-averaged analyses su2021model.] Let Y_srt = 1/N_sr∑_i: s(i) = s, r(i) = r Y_it where N_sr denotes the number of units with s(i) = s and r(i) = r. In some settings, such as gravity models, each combination of s and r will uniquely identify a single observation while in others, multiple units may belong to a common s and r.
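As a concrete illustration of these objects, the following sketch (Python with pandas; the toy data-generating process, column names, and cell structure are illustrative assumptions rather than anything from the paper or its replication files) builds unit-level treatment histories under staggered adoption, recovers the initiation times G_i and G_sr, and aggregates the outcome to the (s, r, t) cell level.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
T = 6

# Illustrative unit-level panel: units i nested in cells (s, r), observed for t = 1..T.
# G_cell gives each (s, r) cell's treatment initiation time (np.inf = never treated).
G_cell = {(0, 0): 3, (0, 1): np.inf, (1, 0): 4, (1, 1): 3, (2, 0): np.inf, (2, 1): np.inf}
rows = []
for i, (s, r) in enumerate([(0, 0), (0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]):
    for t in range(1, T + 1):
        D = int(t >= G_cell[(s, r)])              # staggered, non-reversing adoption
        Y = s + 0.5 * t + 2.0 * D + rng.normal()  # arbitrary outcome process
        rows.append((i, s, r, t, D, Y))
panel = pd.DataFrame(rows, columns=["i", "s", "r", "t", "D", "Y"])

# Treatment initiation time G_i: first period with D = 1, infinity for never-treated units.
G_i = (panel.assign(first=panel["t"].where(panel["D"] == 1))
            .groupby("i")["first"].min().fillna(np.inf))

# Aggregate to (s, r, t) cells: Y_srt is the within-cell mean outcome; D is common to the cell.
cells = panel.groupby(["s", "r", "t"], as_index=False).agg(Y_srt=("Y", "mean"), D_srt=("D", "max"))
G_sr = (panel.assign(first=panel["t"].where(panel["D"] == 1))
             .groupby(["s", "r"])["first"].min().fillna(np.inf))
cells = cells.merge(G_sr.rename("G"), left_on=["s", "r"], right_index=True)

The resulting cells frame, with one row per (s, r, t) combination and the shared initiation time G, is the data structure used in the remainder of this section.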
Because all of these units will have the same treatment history, it is still useful to consider them as a single observation. Under Assumption <ref>, define the treatment initiation time shared by all units i: s(i) = s, r(i) = r as G_sr. The joint-grouping potential outcome at time t can be written as Y_srt(g), which is connected to the observed outcomes by Assumptions <ref> and <ref>. Consistency/SUTVA of aggregated outcomes Y_srt = Y_srt(g) if G_sr = g For the remainder of this paper, I will work exclusively with the aggregated outcomes and treatment history: Y_srt and G_sr and will refer to the grouping defined by s as the “unit" of analysis and the grouping defined by r as the “stratum." However, this designation is purely for convenience and one can consider either the grouping denoted s or the grouping denoted r as the “stratum" grouping. In a typical two-period design, there is only one causal estimand: the average treatment effect on the treated in time period 2. However, under staggered adoption there are many possible effects that can be identified. callaway2021difference define the group-time ATT as the building block of all causal quantities of interest under staggered adoption.[sun2021estimating define the same quantity but call it the cohort average treatment effect on the treated or CATT.] This quantity corresponds to the average treatment effect at some time period t among units that initiate treatment at time g. In a triple-differences setting with multiple strata, it is necessary to refine the group-time ATT further by conditioning on the stratum r. The conditional group-time ATT is defined as: Conditional group-time ATT ATT_r(g,t) = 𝔼[Y_srt(g) - Y_srt(∞) | G_sr = g] This represents the average difference at time t between the observed outcome among units in stratum r that initiate treatment at time g and the counterfactual outcome that would have been observed had those units instead never received treatment. From this building block, one can define aggregate quantities that summarize the conditional group-time ATT across different treatment initiation times g, observation times t and strata r. The choice of how to aggregate depends on a researcher's ultimate quantity of interest callaway2021difference. Identifying a given conditional group-time ATT requires additional assumptions on the potential outcomes. First, as in the conventional difference-in-differences setting, the presence of “anticipation" effects is ruled out such that altering treatment in the future does not change the potential outcomes of a unit in the past. No anticipation For all t < g, Y_srt(g) = Y_srt(∞) This assumption states that the potential outcome observed at time t for a unit that initiates treatment at g would be the same as the potential outcome at time t had that unit instead never initiated treatment, as long as t is prior to g. Combined with Assumption <ref>, it implies that the observed outcome Y_srt equals the potential outcome Y_srt(∞) for any unit-time combination under control D_srt = 0. Moreover, ATT_r(g, t) = 0 for any t < g. Note that it is possible to weaken this assumption to allow for limited anticipation up to a known number of periods, following callaway2021difference. I first consider identification under the conventional (conditional) parallel trends assumption conditioning on stratum r.
Conditional parallel trends For all r, t ≠ t^', g ≠ g^': 𝔼[Y_srt(∞) - Y_srt^'(∞) | G_sr = g] - 𝔼[Y_srt(∞) - Y_srt^'(∞) | G_sr = g^'] = 0 This version of the assumption is closest to the general parallel trends assumption in borusyak2021revisiting and also discussed in roth2022what. It assumes that parallel trends hold across all time periods and across all treatment groups. Other definitions of the parallel trends assumption weaken this further by assuming parallel trends holds only with respect to the never-treated group G_sr = ∞ or restricting the time period to only a single pre-treatment period callaway2021difference. In the multi-strata setting, conditional parallel trends also only needs to hold for those strata r for which there exist any treated units as there are no group-time ATTs for strata that are pure placebos - that is, those strata r which for all s, G_sr = ∞. Under Assumption <ref>, identification of any conditional group-time ATT is straightforward. Each stratum that contains any treated units is effectively its own staggered difference-in-difference. Identification under conditional parallel trends Under assumptions <ref>, <ref>, <ref>, <ref> and <ref> ATT_r(g,t) = 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] for any g ≤ t, t^* < g, g^' > t The proof follows from the results of callaway2021difference conditioning on stratum r. However, when Assumption <ref> does not hold, researchers may nevertheless be able to identify the treatment effect by making an alternative identifying assumption and using a triple-differences design. In the triple-differences setting, researchers instead assume that while conditional parallel trends may be violated, that violation is constant across strata. Constant violation of conditional parallel trends For all r ≠ r^', t ≠ t^', g ≠ g^': 𝔼[Y_srt(∞) - Y_srt^'(∞) | G_sr = g] - 𝔼[Y_srt(∞) - Y_srt^'(∞) | G_sr = g^'] = 𝔼[Y_sr^'t(∞) - Y_sr^'t^'(∞) | G_sr = g] - 𝔼[Y_sr^'t(∞) - Y_sr^'t^'(∞) | G_sr = g^'] This generalizes the identification assumptions from olden2020triple to settings with multiple strata where the treatment can be arbitrarily staggered in different strata. It is worth noting that Assumption <ref> is not weaker than Assumption <ref> as the former places restrictions on the control potential outcomes in a stratum r^' conditional on treatment assigned in stratum r irrespective of the treatment distribution in r^'. Therefore it is possible for parallel trends to hold within any strata that receive treatment but for this assumption to fail due to a parallel trends violation in a “pure placebo" stratum in which no unit receives treatment. Under this assumption, one can identify the conditional group-time ATT in stratum r from the observed data by appending a second difference-in-differences to the result from <ref> Identification under a constant violation of conditional parallel trends Under assumptions <ref>, <ref>, <ref>, <ref> and <ref> ATT_r(g,t) = {𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^']} - {𝔼[Y_sr^'t - Y_sr^'t^* | G_sr = g, G_sr^' > t ] - 𝔼[Y_sr^'t - Y_sr^'t^* | G_sr = g^', G_sr^' > t]} for any g ≤ t, t^* < g, g^' > t The second difference-in-difference is comprised of those observations in stratum r^' on the same units s that are under control (G_sr^' > t) at both time t and time t^*. Notably, the identification result conditions on both the observed treatment in r and r^' in selecting valid units for the “placebo" difference-in-differences. 
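The 2x2x2 identification result in Proposition <ref> maps directly onto a simple estimator. The sketch below (Python; the column names follow the illustrative cells frame constructed earlier, and it assumes every unit s is observed in both strata) computes the primary DiD in stratum r and subtracts the placebo DiD in a stratum r^' restricted to cells that are still untreated at time t. It is a minimal sample-analogue sketch under those assumptions, not an implementation of any existing package.

import numpy as np
import pandas as pd

def att_r_gt(cells, r, g, t, t_star, r_prime):
    """Sample analogue of ATT_r(g, t) from the triple-difference identification result.

    `cells` has one row per (s, r, t) cell with columns s, r, t, Y_srt and G (the cell's
    treatment initiation time, np.inf for never-treated cells). Requires t_star < g <= t;
    comparison cells in stratum r are those with G > t, and the placebo stratum r_prime
    is restricted to cells that are still untreated at time t.
    """
    d = cells.set_index(["s", "r", "t"])["Y_srt"]
    G = {(row.s, row.r): row.G for row in cells.drop_duplicates(["s", "r"]).itertuples()}

    def did(units, stratum):
        return np.mean([d[(s, stratum, t)] - d[(s, stratum, t_star)] for s in units])

    treat = [s for (s, rr) in G if rr == r and G[(s, r)] == g]
    ctrl = [s for (s, rr) in G if rr == r and G[(s, r)] > t]
    primary = did(treat, r) - did(ctrl, r)

    # Placebo DiD: same units and periods in stratum r_prime, keeping only cells with G > t.
    treat_p = [s for s in treat if (s, r_prime) in G and G[(s, r_prime)] > t]
    ctrl_p = [s for s in ctrl if (s, r_prime) in G and G[(s, r_prime)] > t]
    placebo = did(treat_p, r_prime) - did(ctrl_p, r_prime)
    return primary - placebo

# For example, with the illustrative `cells` frame from the earlier sketch:
# att_r_gt(cells, r=0, g=3, t=3, t_star=2, r_prime=1)

When stratum r_prime is a pure placebo, the restriction G > t never binds and the placebo DiD uses only control observations, which is the classic two-stratum design.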
Plugging in sample analogues yields a consistent estimator for each conditional group-time ATT. Crucially, all of the observations used as part of the second difference in the primary DiD and the placebo DiD are under control even if they may initiate treatment in the future. In practice, there may be very few observations available to estimate any individual ATT_r(g,t) as is the case in conventional staggered DiD. Moreover, researchers are unlikely to be interested in a single, specific ATT_r(g,t) but will instead target an “average" effect for the sample. Following the approach of callaway2021difference, the conditional group-time ATTs can be aggregated using researcher-specified weights into a single treatment effect summary. For example, one could consider estimands that take the form of a weighted average across the non-zero stratum-specific group-time ATTs ATT = ∑_r=1^R ∑_g=2^T ∑_t=g^T ATT_r(g,t) w_rgt where w_rgt denotes the weight assigned to the group-time ATT for group g at time t in stratum r. The choice of weights reflects researcher preferences over how to aggregate heterogeneous effects across different types of units (e.g. late versus early adopters) and over time (e.g. instantaneous versus longer-term effects). § DECOMPOSING TRIPLE-DIFFERENCES REGRESSION With a clearer understanding of the triple-differences estimand, it is worth asking whether the commonly-used triple-differences regression estimator identifies any weighted average of stratum-specific group-time ATTs and what additional assumptions are required. Recent results decomposing the two-way fixed effects estimator goodman2021difference note that treatment effect heterogeneity results in bias due to invalid difference-in-differences comparisons. Intuitively, the TWFE estimator incorporates both valid 2x2 differences-in-differences - using past time periods when treated and control units were under control to de-bias the cross-sectional first difference - as well as invalid differences-in-differences - using future time periods when treated and control units are both exposed to treatment. These “invalid" terms appear when treatment adoption is staggered and late adopters act as controls for early adopters in earlier time periods. This section examines whether similar problems arise with the triple-differences regression by developing a decomposition of the estimator into its component 2x2x2 triple-difference comparisons. It shows that bias arises due to both invalid primary DiDs – as in the two-way fixed effects case – as well as invalid “placebo" DiDs – a feature unique to the triple-differences regression. As noted in olden2020triple, the most widely used regression triple-differences specification (equation <ref>) appears to come from a discussion of yelowitz1995medicaid in angrist2008mostly - a study of Medicaid expansion across states. In this section, I decompose a slightly different specification in which the outcome is aggregated among observations in the same s and r groupings. The specification regresses the aggregated outcome on an indicator for whether that unit-stratum is under treatment at time t (D_srt) and three sets of fixed effects parameters: α_sr - the joint unit-stratum fixed effects, and γ_st - the unit-time fixed effects, and δ_rt - the stratum-time fixed effects. I assume a balanced panel with no missing observations. 
Y_srt = τ D_srt + α_sr + γ_st + δ_rt + ϵ_srt As discussed in the previous section, when there is more than one observation in a single s,r,t cell, it can be shown that a weighted version of this regression where the weights on each observation are proportional to N_sr yields an estimated τ̂ equivalent to that of Equation <ref>. In other words, the primary differences between <ref> and <ref> are in the different weights placed on each group-time treatment effect. The decomposition relies on the application of the Frisch-Waugh-Lovell theorem to obtain an expression for the OLS estimator of τ, τ̂, in terms of different averages of Y across the groupings of the data: s, r, and t (<cit.>; <cit.>). First, define the one-way, two-way and grand means of Y: Y̅_sr ≡1/T∑_t^'=1^T Y_srt^' Y̅_st ≡1/R∑_r^'=1^R Y_sr^'t Y̅_rt ≡1/S∑_s^'=1^S Y_s^'rt Y̅̅̅_s ≡1/RT∑_t^'=1^T ∑_r^'=1^R Y_sr^'t^' Y̅̅̅_r ≡1/ST∑_t^'=1^T ∑_s^'=1^S Y_s^'rt^' Y̅̅̅_t ≡1/SR∑_s^'=1^S ∑_r^'=1^R Y_s^'r^'t Y̅̅̅̅̅̅̅ ≡1/SRT∑_s^'=1^S ∑_r^'=1^R ∑_t^'=1^T Y_s^'r^'t^' Define the grand means over the treatment indicator D_srt analogously. The decomposition starts by applying the Frisch-Waugh-Lovell theorem and re-arranging the sums to write the OLS estimator τ̂ in terms of an average of the observed outcome Y_srt in all treated unit-stratum-times D_srt = 1 that has been triple de-meaned. The OLS estimator τ̂ in the grouped triple-differences regression can be written as: τ̂ = ∑_s=1^S ∑_r=1^R ∑_t:D_srt = 1 Y_srt - Y̅_sr - Y̅_st - Y̅_rt + Y̅̅̅_s + Y̅̅̅_r + Y̅̅̅_t - Y̅̅̅̅̅̅̅/∑_s=1^S ∑_r=1^R ∑_t=1^T (D_srt - D̅_sr - D̅_st - D̅_rt + D̅̅̅_s + D̅̅̅_r + D̅̅̅_t - D̅̅̅̅̅̅̅)^2 The terms in the numerator can be further rearranged by expanding out the double sums and obtaining an expression for τ̂ in terms of a weighted average of 2x2x2 comparisons. τ̂ = ∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt[(Y_srt - Y_srt^' - Y_s^'rt + Y_s^'rt^') - (Y_sr^'t - Y_sr^'t^' - Y_s^'r^'t + Y_s^'r^'t^')]/ SRT ∑_s=1^S ∑_r=1^R ∑_t=1^T (D_srt - D̅_sr - D̅_st - D̅_rt + D̅̅̅_s + D̅̅̅_r + D̅̅̅_t - D̅̅̅̅̅̅̅)^2 In this expression, Y_srt - Y_srt^' - Y_s^'rt + Y_s^'rt^' is a 2x2 difference-in-difference within stratum r (between unit s and s^' and time periods t and t^') while Y_sr^'t - Y_sr^'t^' - Y_s^'r^'t + Y_s^'r^'t^' is the same difference-in-difference comparison in stratum r^'. Intuitively, the triple-differences regression yields an average over every such triple-difference where the first observation (Y_srt) is treated. However, the remaining seven observations are not guaranteed to be controls, resulting in invalid triple-difference terms which drive the bias due to heterogeneous effects. Enumerating every possible combination of treatment and control for the remaining seven terms yields the full triple-differences decomposition in Theorem <ref>.
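Before stating the full decomposition, the triple-de-meaning expression in Lemma <ref> is easy to verify numerically on a balanced simulated panel. The sketch below (Python; the data-generating process is purely illustrative) forms τ̂ as in the Lemma and checks it against a dummy-variable regression of Y on D and the three sets of fixed effects.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
S, R, T = 4, 3, 5

# Balanced toy panel with adoption times that differ across strata (illustrative only).
idx = pd.MultiIndex.from_product([range(S), range(R), range(1, T + 1)], names=["s", "r", "t"])
df = pd.DataFrame(index=idx).reset_index()
G = {(s, r): rng.choice([3, 4, np.inf]) for s in range(S) for r in range(R)}
df["D"] = [float(t >= G[(s, r)]) for s, r, t in zip(df.s, df.r, df.t)]
df["Y"] = df.s + 0.3 * df.r * df.t + 1.5 * df.D + rng.normal(size=len(df))

def triple_demean(x):
    # x - x_bar_sr - x_bar_st - x_bar_rt + x_bar_s + x_bar_r + x_bar_t - x_bar (balanced panel)
    return (x - x.groupby([df.s, df.r]).transform("mean")
              - x.groupby([df.s, df.t]).transform("mean")
              - x.groupby([df.r, df.t]).transform("mean")
              + x.groupby(df.s).transform("mean")
              + x.groupby(df.r).transform("mean")
              + x.groupby(df.t).transform("mean")
              - x.mean())

D_tilde, Y_tilde = triple_demean(df["D"]), triple_demean(df["Y"])
tau_lemma = Y_tilde[df["D"] == 1].sum() / (D_tilde ** 2).sum()

# Cross-check: OLS of Y on D plus dummies for the (s,r), (s,t) and (r,t) fixed effects.
X = pd.concat([df["D"],
               pd.get_dummies(df.s.astype(str) + "-" + df.r.astype(str)),
               pd.get_dummies(df.s.astype(str) + ":" + df.t.astype(str)),
               pd.get_dummies(df.r.astype(str) + "~" + df.t.astype(str))], axis=1)
beta, *_ = np.linalg.lstsq(X.to_numpy(dtype=float), df["Y"].to_numpy(), rcond=None)
print(tau_lemma, beta[0])   # the two numbers agree up to floating-point error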
Regression triple-differences decomposition Let D_srt^(0)≡ (1 - D_srt) Define the difference-in-difference in stratum r, Ỹ_srt^(s^'t^') as Ỹ_srt^(s^'r^'t^') ≡ Y_srt - Y_s^'rt - Y_srt^' + Y_s^'rt^' The regression triple-difference estimator can be written as: τ̂ = ω^-1∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] ×[D_srtD_s^'rt^(0)D_srt^'^(0)D_s^'rt^'^(0)_valid primary DiD + D_srtD_s^'rt^(0)D_srt^'D_s^'rt^'_invalid primary DiD] × [D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'^(0)D_s^'r^'t^'^(0)_valid “placebo" DiD + D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'_invalid“placebo" DiD + D_sr^'t^(0)D_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'_invalid“placebo" DiD + D_sr^'tD_s^'r^'tD_sr^'t^'D_s^'r^'t^'_invalid“placebo" DiD + D_sr^'tD_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'^(0)_invalid“placebo" DiD + D_sr^'tD_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'^(0)_invalid“placebo" DiD] + [ Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')]×[D_srtD_s^'rt^(0)D_srt^'^(0)D_s^'rt^'^(0)_valid primary DiD] ×[ D_sr^'t^(0)D_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'tD_s^'r^'tD_sr^'t^'D_s^'r^'t^'^(0)_matching “flipped" DiDs in r^'] + [ Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')]×[D_srtD_s^'rt^(0)D_srt^'D_s^'rt^'_invalid primary DiD] ×[ D_sr^'t^(0)D_s^'r^'tD_sr^'t^'D_s^'r^'t^' + D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'^(0)_matching “flipped" DiDs in r^'] where the normalizing constant ω is equivalent to ω≡ SRT(N^(1)) - SR∑_s=1^S∑_r=1^R (N_sr^(1))^2 - ST∑_s=1^S∑_t=1^T (N_st^(1))^2 - RT∑_r=1^R∑_t=1^T (N_rt^(1))^2 + T ∑_t=1^T (N_t^(1))^2 + R ∑_r=1^R (N_r^(1))^2 + S ∑_s=1^S (N_s^(1))^2 - (N^(1))^2 with N^(1) and subscripts denoting the number of treated units in a particular set of strata. N^(1)_sr = ∑_t^'=1^T D_srt^' N^(1)_st = ∑_r^'=1^R D_sr^'t N^(1)_rt = ∑_s^'=1^S D_s^'rt N^(1)_s = ∑_t^'=1^T ∑_r^' = 1^R D_sr^'t^' N^(1)_r = ∑_t^'=1^T ∑_s^' = 1^S D_s^'rt^' N^(1)_t = ∑_s^' = 1^S ∑_r^'=1^R D_s^'r^'t N^(1) = ∑_s^'=1^S ∑_r^' = 1^R ∑_t^'=1^T D_s^'r^'t^' Theorem <ref> makes clear that the regression triple differences estimator can be understood as a weighted average over two types of 2x2x2 terms: conventional triple-differences and triple-differences with invalid placebos. For each of these constituent terms, the treatment status of each unit-time-stratum is captured in eight treatment indicators: D_srt, D_s^' rt, D_srt^' and D_s^' r t^' for the first DiD in stratum r and D_s r^' t, D_s^' r^' t, D_sr^' t^' and D_s^' r^' t^' for the second placebo DiD in stratum r^'. Each of these defines a particular triple-differences comparison, but only one of the possible comparisons comprises a valid triple-difference term that identifies a particular group-time ATT. The first thing to note from the decomposition is that the same problems due to “forbidden" comparisons in the two-way fixed effects estimator also appear here (<cit.>; <cit.>; <cit.>). For each triple-difference, there are two types of primary DiD terms – one involves a treated observation D_srt = 1 and three control observations (D_s^' r t = 0, D_s r t^' = 0, D_s^' r t^' = 0) while the other involves a first difference between a treated observation D_srt = 1 and a cross-sectional control D_s^' rt = 0 subtracted from a second difference in time period t^' where both units s and s^' are under treatment (D_s r t^' = 1, D_s^' r t^' = 1). The first of these 2x2 comparisons identifies an ATT under parallel trends (Assumption <ref>) or identifies an ATT plus a bias term under the “constant violation of parallel trends" assumption (Assumption <ref>). 
The second 2x2 does not identify an ATT even under parallel trends without an additional constant effects assumption. This is because under staggered adoption, the second difference involves two observations in a time period t^' that follows t (t^'≥ G_sr, t^'≥ G_s^' r) where both sr and s^' r have adopted treatment. While the no anticipation assumption (Assumption <ref>) ensures that ATT_r(G_sr, t^') = ATT_r(G_s^' r, t^') = 0 for t^' < G_sr, t^' < G_s^' r, it does not guarantee this for t^'≥ G_sr. As a consequence, the difference in observed outcomes at t^' between the two treated units incorporates a difference in two treatment effects, making it an invalid second difference term. All of the complications highlighted by goodman2021difference for TWFE are clearly still present in the triple-differences regression decomposition as well. However, in the triple-differences setting, the problem of invalid comparisons is considerably more acute as even when all primary difference-in-difference terms are valid, the placebo difference-in-differences may not be. Among the set of placebo DiD terms that can appear in a given triple-difference, only one identifies the bias in the primary DiD under the constant violation of parallel trends assumption alone. This second DiD involves exclusively control observations in stratum r^': D_sr^' t = D_s^' r^' t = D_s r^' t^' = D_s^' r^' t^' = 0. All other placebo terms incorporate some observations that are treated. Five of these invalid second-differences incorporate an even number of treated observations such that the treatment effect, if constant, cancels and only a bias term remains. One such invalid second-DiD uses four treated observations instead of four control observations: D_sr^' t = D_s^' r^' t = D_s r^' t^' = D_s^' r^' t^' = 1. Another consists of either two control units at time t (D_sr^' t = D_s^' r^' t = 0) and two treated units at t^' (D_sr^' t^' = D_s^' r^' t^' = 1) or two treated units at time t (D_sr^' t = D_s^' r^' t = 1) and two control units at time t^' (D_sr^' t^' = D_s^' r^' t^' = 0). The last involves one unit s that is treated at both t and t^' (D_s r^' t = D_s r^' t^' = 1) and another unit s^' that is under control at both times (D_s^' r^' t = D_s^' r^' t^' = 0) or vice-versa. Intuitively, all of these placebo DiDs fail to identify the common violation of parallel trends under unrestricted effect heterogeneity. This is again because unit s r^' and unit s^' r^' likely differ in their treatment initiation times (G_sr^'≠ G_s^' r^') and as a result, these placebo DiDs each involve differences in treatment effects across different timing groups and time periods that do not cancel out. For example, in the case where the placebo DiD includes only observations under treatment, unit s r^' may have initiated treatment earlier than s^' r^'. Therefore, the placebo DiD includes a difference between the ATT of the earlier timing group and the ATT of the later timing group at two different time periods. Unless these effects are equal, the triple difference will fail to identify an ATT even if the primary DiD is valid. Finally, four terms in the decomposition include second-differences with an odd number of treated observations – that is, placebo DiDs that also act as primary DiDs elsewhere in the decomposition. A DiD comparison in one stratum r is subtracted from a “flipped" difference-in-difference in stratum r^'.
Under constant treatment effects, this triple-difference eliminates the common parallel trends violation across the two strata and yields a term equivalent to twice the treatment effect. However, under effect heterogeneity the violation of parallel trends cannot be disentangled from differences in the treatment effect between stratum r and r^'. As shown in the Appendix, these terms can also be re-written as “double-counted" difference-in-differences, providing an alternative interpretation of the triple-differences regression as an average over both 2x2x2 triple-differences and 2x2 difference-in-differences. The decomposition in Theorem <ref> highlights how the magnitude of the bias in the conventional triple-differences regression under heterogeneous treatment effects depends both on the degree of effect heterogeneity and the extent of treatment staggering across strata. Under certain distributions of treatment, the fixed effects estimator may be comprised of more invalid triple-difference terms than valid ones. There are two useful implications of this result. First, even if there is no staggered adoption within each stratum, the triple-differences regression estimator will still be biased under heterogeneous treatment effects if the specific units that enter treatment vary across each stratum. This implies that there are settings where, if conditional parallel trends holds for all strata (Assumption <ref>), the triple-differences regression will be biased for an average of ATTs while the conventional two-way fixed effects regression will not be. Second, in settings where researchers have only a single placebo stratum and a single “potentially treated" stratum, there are no invalid placebo terms. This provides something of a silver lining in Theorem <ref> for the many triple-difference designs where researchers augment a conventional difference-in-difference using a pure placebo group. In these settings, researchers only need to be concerned about the already well-known issues with two-way fixed effects and invalid primary DiD terms as all of the placebo DiDs will exclusively incorporate observations under control. As noted in the extensions of borusyak2021revisiting, an alternative estimation strategy using direct imputation of the counterfactual can easily address the issues of invalid comparisons in the triple-differences design. Instead of estimating the treatment effect in the same model as the fixed-effects parameters, the imputation approach separates the tasks of modeling the potential outcomes under control and aggregating the imputed ATTs to obtain a causal quantity of interest. For the triple-differences setting, the counterfactual imputation estimator proceeds by estimating the triple-differences regression with three sets of fixed effects parameters among the observations observed under control (D_srt = 0). From this model, the method imputes the counterfactual under control for each unit/stratum/time period under treatment (D_srt = 1). Ŷ_srt(∞) = α̂_sr^(0) + γ̂_st^(0) + δ̂_rt^(0) With the imputed counterfactuals, researchers can obtain an estimate of any stratum group-time treatment effect and any aggregation of the group-time treatment effects as in Equation <ref>. For a more extensive discussion of the fixed effects counterfactual imputation estimator as applied to the difference-in-differences setting, see liu2021practical.
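A minimal sketch of this imputation approach is given below (Python; simulated data with illustrative names, not the paper's implementation). It fits the three-way fixed-effects model by OLS on the control observations only, imputes Ŷ_srt(∞) for treated cells, and averages the implied cell-level effects; the simulated design keeps a pure placebo stratum and a never-treated unit so that every fixed effect is pinned down by control observations. In applications one would use a high-dimensional fixed-effects routine and the researcher's preferred weights rather than the uniform average shown here.

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
S, R, T = 5, 3, 6

idx = pd.MultiIndex.from_product([range(S), range(R), range(1, T + 1)], names=["s", "r", "t"])
df = pd.DataFrame(index=idx).reset_index()
# Stratum 0 is a pure placebo and unit 0 is never treated, so every fixed-effect
# level is observed at least once under control and can be imputed.
G = {(s, r): (np.inf if r == 0 or s == 0 else rng.choice([3, 4, 5, np.inf]))
     for s in range(S) for r in range(R)}
df["D"] = [float(t >= G[(s, r)]) for s, r, t in zip(df.s, df.r, df.t)]
df["Y"] = df.s + 0.5 * df.t + 0.2 * df.r * df.t + 2.0 * df.D + rng.normal(size=len(df))

# Design matrix containing only the three sets of fixed effects (no treatment indicator).
F = pd.concat([pd.get_dummies(df.s.astype(str) + "-" + df.r.astype(str)),
               pd.get_dummies(df.s.astype(str) + ":" + df.t.astype(str)),
               pd.get_dummies(df.r.astype(str) + "~" + df.t.astype(str))],
              axis=1).to_numpy(dtype=float)

ctrl = df["D"] == 0
beta, *_ = np.linalg.lstsq(F[ctrl.to_numpy()], df.loc[ctrl, "Y"].to_numpy(), rcond=None)
df["Y0_hat"] = F @ beta                      # imputed counterfactual under control
df["effect_hat"] = df["Y"] - df["Y0_hat"]    # cell-level treatment effect estimates

# Aggregate however the target estimand requires, e.g. uniformly over treated cells.
att_hat = df.loc[df["D"] == 1, "effect_hat"].mean()
print(att_hat)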
Inference in the triple-difference imputation setting is complicated by the presence of potentially complex error correlation structures. In the conventional difference-in-difference setting, researchers typically assume error correlation within unit over time but independence across units. For the fixed effects imputation estimator, this facilitates inference via either the cluster bootstrap liu2021practical or via an asymptotic approximation borusyak2021revisiting. While a single cluster error correlation structure is plausible in some triple-difference settings, such as when observations are firms nested within affected states and industries, for many triple-difference applications errors likely exhibit a two-way clustering structure where observations that share either the same s or the same r are correlated. In particular, when observations are dyadic – as in the “gravity model of trade" setting examined in Section <ref> – it is implausible to assume that dyads that share a common member are independent despite the fact that many applied studies proceed with this assumption (<cit.>; <cit.>). Rather, the clustering structure is “two-way" in that errors across dyads are correlated on both the “sender" dimension and the “receiver" dimension and failing to account for this will tend to understate the true sampling variance. While asymptotic variance estimators that are robust to two-way clustering exist for the conventional regression model cameron2011robust, an extension to the imputation estimator in the style of borusyak2021revisiting for single clustering is beyond the scope of this paper. Instead, this paper suggests the use of an extension of the cluster bootstrap to the two-way clustering setting: the “pigeonhole" bootstrap (<cit.>; <cit.>). This approach proceeds by resampling clusters along each of the clustering dimensions (e.g. resampling “sender" units and “receiver" units). The bootstrap weight assigned to each dyad is the product of the bootstrap weights assigned to its sender and receiver. owen2012bootstrapping recommend a “Bayesian" version of the pigeonhole bootstrap in the style of rubin1981bayesian in which weights on each cluster are drawn independently from an Exponential(1) distribution. In the fixed-effects imputation setting this has a slight advantage over the conventional resampling bootstrap in that it does not assign a weight of 0 to any dyad. This ensures that the number of fixed effects parameters remains the same when re-estimating the model in each bootstrap iteration. Recent theoretical work has shown asymptotic consistency of the pigeonhole bootstrap under multi-way clustering (<cit.>, <cit.>) and owen2012bootstrapping note that it is generally conservative for the true sampling variance. To maintain comparability with existing applied work, I use a conventional one-way cluster bootstrap for the application in Section <ref> but provide results under a two-way pigeonhole bootstrap in Appendix <ref>. § APPLICATION: GRAVITY MODELS AND THE EFFECT OF THE WTO ON TRADE Despite the central role that the General Agreement on Tariffs and Trade (GATT) and its successor, the World Trade Organization (WTO), have played in the negotiated multilateral reduction of tariffs and other barriers to trade in the post-WWII era, there exists an extensive empirical debate over whether there is any clear evidence that membership in the GATT/WTO actually increases trade between states.
rose2004we found little evidence that pairs of countries that were members of the GATT/WTO saw increased bilateral trade. tomz2007we and goldstein2007institutions responded by arguing that Rose failed to account for states that were participants in the GATT/WTO system and thus benefited from trade concessions despite not being full members. Re-estimating Rose's regressions using an alternative treatment indicator, these papers found evidence of a positive treatment effect. An extensive literature has followed which refined the original specifications and, for the most part, has found evidence of a GATT/WTO effect.[See gil2016re for a review. However, also note recent work by esteve2020does which provides some evidence for the original null under an alternative estimation strategy.] There have been two primary econometric improvements over the original debate between Rose and Tomz, Goldstein and Rivers. The first is the use of correctly specified gravity models that incorporate terms that capture the unobserved heterogeneity common to sender and receiver that evolves over time. While tomz2007we and goldstein2007institutions used dyad and time fixed effects, the standard gravity model specification in the econometrics literature also includes an interaction between the time fixed effects and both sender and receiver fixed-effects to capture the “multilateral resistance" factors that are common to an exporter and importer across all trading relationships anderson2003gravity. This approach was adopted in subramanian2007wto in a re-analysis of rose2004we, and subsequent work has relied on a baseline regression specification that takes the form of the following log-linearized equation estimated via OLS that models trade between sender country s and receiver country r in time t log(Imports_srt) = 𝐗_srt^'β + α_sr + γ_st + δ_rt + ϵ_srt where 𝐗_srt is a vector of covariates that vary across time and dyad and ϵ_srt is a mean-zero error term. As shown in this paper, with a single, binary X_srt, this corresponds exactly to the conventional triple-differences regression. This suggests that, implicitly, studies of policy effects on bilateral trade that implement a gravity equation as a baseline model for the outcome are relying on a form of Assumption <ref> to motivate their identification strategy. The second econometric critique, which is largely beyond the scope of this paper, concerns the use of OLS to estimate a log-linearized gravity model. silva2006log note that under heteroskedasticity, OLS estimates of the structural parameters of the log-linearized gravity model will be biased due to Jensen's inequality. As an alternative, they recommend instead estimating a regression on the raw trade levels using a multiplicative model for the conditional expectation function with parameters estimated via Poisson Pseudo-maximum likelihood (PPML) with robust standard errors. wooldridge1999distribution shows that this is consistent for the CEF under correct specification of the conditional mean function even in the presence of misspecification in the error distribution through the use of the Poisson. The PPML estimator has the added benefit of being able to incorporate zeroes which the log-linearized estimator must either ignore or transform using some arbitrary additive constant. Recently, esteve2020does show that applying the PPML approach instead of the log-linearized model to estimate the effect of the GATT/WTO results in null findings in line with Rose's original results.
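The equivalence between the fixed-effects gravity specification and the triple-differences regression can be made concrete with a small simulated example. The sketch below (Python with statsmodels on simulated dyadic data; the variable names and data-generating process are illustrative assumptions, not the goldstein2007institutions replication data) estimates the log-linearized specification with dyad, exporter-year and importer-year fixed effects and reads off the coefficient on the binary policy indicator.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
countries, T = range(5), 8

# Illustrative directed-dyad panel: exporter s, importer r, year t, a binary indicator D
# for, e.g., joint GATT/WTO membership, and log bilateral imports.
rows = []
for s in countries:
    for r in countries:
        if s == r:
            continue
        g = rng.choice([4, 6, np.inf])            # year the dyad first becomes jointly treated
        for t in range(1, T + 1):
            D = float(t >= g)
            y = 1 + 0.1 * s + 0.1 * r + 0.05 * t + 0.3 * D + rng.normal(scale=0.1)
            rows.append((s, r, t, D, y))
df = pd.DataFrame(rows, columns=["s", "r", "t", "D", "log_imports"])

# Dyad, exporter-year and importer-year fixed effects: the standard gravity specification,
# which with a single binary regressor is exactly the triple-differences regression.
df["sr"] = df.s.astype(str) + "-" + df.r.astype(str)
df["st"] = df.s.astype(str) + ":" + df.t.astype(str)
df["rt"] = df.r.astype(str) + "~" + df.t.astype(str)
fit = smf.ols("log_imports ~ D + C(sr) + C(st) + C(rt)", data=df).fit()
print(fit.params["D"])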
While this application will focus on the log-linearized fixed effects gravity model estimated using OLS, the re-interpretation of the gravity model in terms of a triple-differences design also helps shed some light on this second critique from silva2006log. First, it suggests that zeroes in the trade flow data are not simply an estimation concern but rather a challenge to the underlying assumptions of the research design. As ciani2018dif note, the use of a log-linearized or Poisson model implicitly assumes that trends are multiplicative rather than additive on the level of the raw outcome. In the standard difference-in-differences setting, this corresponds to an assumption that the ratios of the potential outcomes under control rather than their differences are equivalent between treated and control. In such a design, the presence of zeroes will, by construction, result in a violation of this assumption as the multiplicative trend relative to a baseline of zero is infinite. Therefore, irrespective of the choice of estimator (log-linearized or Poisson PML), it may be appropriate to remove observations with zero trade flows to avoid including observations for which the identifying assumptions will not hold. Second, it suggests that switching to a Poisson PML estimator alone will not resolve the problem of invalid triple-difference comparisons highlighted in this paper. While a full decomposition of the Poisson PML estimator is beyond the scope of this paper, recent work by wooldridge2021two for the difference-in-difference setting has noted that the conventional Poisson PML estimator with two-way fixed effects suffers from similar issues as TWFE OLS. However, simple corrections that allow for greater treatment effect heterogeneity (“extended" two-way fixed effects) can be applied to the PPML case as well as OLS. These estimators have an equivalent form to the imputation estimator in borusyak2021revisiting and therefore an imputation extension of the Poisson PML to the three-way fixed effects setting is plausible. This replication focuses on the differences between the conventional triple-differences regression specification and the imputation estimator recommended by borusyak2021revisiting and liu2021practical as applied to the replication dataset from goldstein2007institutions. The original data consists of 381,656 dyad-year observations from 1946 to 2004. I restrict the analysis to 1946-2003 as the published dataset exhibits a puzzling decline in the total number of observations from 2003 to 2004. While WTO membership is largely a case of staggered adoption as there are very few states that revert from being members to non-members, there are three states which are considered early members that very quickly switch to being non-members: China, Lebanon and Syria. I drop the small number of observations where these states are considered treated (largely pre-1951). Among the WTO participants there are slightly more non-staggered cases as the definition of participation from tomz2007we includes a number of states that became de-facto participants after independence due to the GATT/WTO membership of their former colonial power. While many of these states remained in the system, a handful eventually dropped out as participants (and potentially rejoined in subsequent years). To retain the staggered adoption structure in the data, I drop observations from Vietnam pre-1956, Laos pre-1958, Guinea pre-1962, and Cambodia, Algeria and Yemen pre-1996.
From the standpoint of the underlying design, pruning these observations is sensible as it is implausible to estimate treatment effects for these “treated" periods absent any pre-treatment observations and these periods are also not valid controls for other units in the data. After pre-processing, the dataset consists of 371,954 dyad-years with 163 unique countries and 58 years. Following tomz2007we, I code two dyadic treatment variables: joint membership if both members of a dyad are a member of the WTO in year t and joint participation if both members of a dyad are participants in year t. While a visualization of the treatment distribution at a dyadic level would be infeasible, I generate two treatment adoption plots liu2021practical at the monadic level to understand both the magnitude of the treatment staggering as well as the overall difference between membership and participation treatments. Figure <ref> plots the two distributions with states ordered by year of membership/participation respectively. The results show significant missingness in the data, as dyads with zero trade flows were omitted and many states do not exist for the entire time period under analysis. They also highlight substantial staggering in both treatments. There are comparatively few states in later years that are not WTO/GATT members. The scarcity of “pure control" observations also limits the ability of researchers to precisely estimate long-run effects, especially for the early members. Moreover, while it is clear that the distribution of participants is qualitatively different from members, this difference has consequences not only for treatment effect heterogeneity - as highlighted by tomz2007we and goldstein2007institutions - but also for the precision with which the treatment effects can be estimated. With many more treated units, there exist fewer control observations from which to impute. In general, I find the precision of estimates using joint participation as the treatment to be much greater than those using joint membership, especially when only imputing using control observations. I estimate the log-linearized three-way fixed effects regression model with log-imports as the outcome. Table <ref> reports the estimated average treatment effects from the “static" specification. For the conventional “fixed-effects regression" I report the estimated coefficient on the treatment indicator (either joint membership or joint participation) while for the imputation results, I impute the counterfactual for each treated dyad using the three-way fixed effects model fit to the controls and average over the difference between observed and imputed trade uniformly across all dyads and time periods. Standard errors are estimated using the Bayesian dyadic bootstrap, using a random weight to re-weight each dyad while preserving intra-dyad correlations over time. This is consistent with the standard approach in much of the WTO/GATT effects literature which only clusters on dyad. However, this likely under-estimates the true sampling variance. Appendix <ref> presents these results with standard errors obtained via the two-way pigeonhole bootstrap and suggests that few of these estimated effects are actually statistically significant. Two features of the static results are readily apparent: the fixed-effects regression estimates are more attenuated towards zero, but the standard errors are much smaller.
This is a consequence of the use of more observations but also of the use of potentially invalid triple-difference comparisons. As noted in the DiD case by simulations in baker2022much, when treatment effects accumulate over time, two-way fixed effects estimators will tend to under-estimate the ATT. As it is unlikely that the effects of trade liberalization appear instantaneously over a single year, it is sensible to expect the effects of the WTO to be cumulative. Eliminating these invalid comparisons via the imputation estimator shifts the estimated ATT upwards by a factor of 3. Notably, this shift is considerably greater than the difference between the estimates for members versus participants, suggesting that questions of model specification and design are much more salient than distinctions in how the treatment is coded. But the imputation estimators come at a cost – a substantial increase in the variance resulting from the use of fewer relevant observations to impute the counterfactual and from an alternative weighting of the individual treatment effects. The imputation approach also facilitates the creation of “event-study" plots for the triple-differences setting. This allows researchers to not only assess whether the treatment effect is heterogeneous over time, but also to conduct “placebo tests" for violations of the identifying assumptions by leaving out each pre-treatment period and imputing the counterfactual from the remaining observations. Figure <ref> plots the estimated average treatment effect using the imputation approach for up to 10 periods post-treatment for both the joint membership and the joint participation treatment. It also implements the placebo test method of liu2021practical for up to 10 periods pre-treatment. For each pre-treatment lag, the placebo test assumes that all units actually initiated treatment that many periods earlier and estimates the imputation model using only never-treated dyads and the time periods prior to the lag. Using this model, it generates a counterfactual prediction for these pre-treatment periods. If the identifying assumptions hold, the difference between the observed outcome and the imputed outcome should be zero. Conversely, statistically significant pre-trends effects are evidence that the identifying assumptions may be invalid and that there may be a violation of Assumption <ref>. Focusing on the first 10 years before and after treatment, I find some evidence that the constant violation of parallel trends assumption is invalid in the case of WTO/GATT membership and participation. While the estimated ATTs one to four years post joint-membership are positive and statistically significant, the magnitudes are comparable to the placebo estimates for the period 1 to 5 years prior to entry. Analogous results appear for joint participation – the placebo effects are all positive and statistically significant. These placebos provide some evidence that states which enter the WTO are likely altering their behavior prior to both membership and participation specifically with respect to other WTO members. As a result, evidence for a short-run effect is particularly weak given the likelihood of anticipation in the design. However, expanding the number of post-treatment periods being considered, does suggest some long-run impact of WTO/GATT membership. In Figure <ref>, I extend the number of post-treatment years considered to 40. 
I find sizeable effects on log imports 20-30 years post-membership with magnitudes much larger than the placebo estimates. Similar patterns appear for participants, suggesting that there may be some evidence for an incremental effect of WTO/GATT membership over time as opposed to an instantaneous effect at the time of membership. However, these findings come with a few caveats. First, as shown in Appendix <ref>, these results are not robust to the use of two-way clustered standard errors. Second, these “time-since-treatment" effects are not necessarily directly comparable to one another as the composition of treatment cohorts differs under staggered adoption – for example, there are some treated states that only have 3 or 4 post-treatment periods and thus disappear from estimates of the ATT. For the long event study plot, effect estimates 40 years out can only incorporate treatment groups that entered into the WTO before 1963. The consequence is that inter-temporal heterogeneity in treatment effects may be conflated with variation across treatment timing groups. Changes in which states comprise each treatment timing group could explain the sudden drop in the estimated effect between 30 and 40 years post-treatment. Overall, the replication results suggest that any conclusions about the effect of the WTO/GATT on trade obtained from conventional gravity model regressions should be taken with a grain of salt. Even after addressing the biases of the static gravity model using an imputation estimator, there is strong evidence from the statistically significant placebo estimates that the underlying identifying assumptions are violated. The replication findings should lead trade scholars to exercise some caution in estimating causal effects from structural models – the gravity model alone does not guarantee identification, it is only a model for the control counterfactual. Whether this model is valid for causal identification is not just a question of trade theory, it is also a question about the treatment assignment mechanism. In fact, the gravity model imposes a very specific set of assumptions on the control potential outcomes that imply an underlying triple-differences research design. While researchers should continue to use the gravity model in cases where these implied identification assumptions are plausible, this paper cautions against estimating the treatment effect and the gravity model parameters simultaneously in a single regression as is standard in the empirical trade literature. Instead, researchers should consider using the gravity model specification as a model for the potential outcomes under control. Then, after fitting this model to control observations, researchers can impute the counterfactuals for units under treatment. This imputation approach also facilitates straightforward placebo tests to diagnose violations of the identifying assumptions behind the gravity model – a potentially very useful technique that has not yet been widely adopted in studies of international trade. § DISCUSSION Triple-differences designs are growing in popularity among applied researchers, but a formal discussion of their identifying assumptions has been largely absent from the literature until recently with olden2020triple developing theory for identification in the basic 2x2x2 design. 
However, for researchers working with datasets with staggered adoption and multiple placebo strata, guidance has been, until recently, extremely limited and many empirical papers still rely on the “three-way" fixed effects regression to estimate treatment effects. This paper develops a theoretical framework for understanding the identifying assumptions of triple-differences under staggered adoption and demonstrates via a decomposition in the style of goodman2021difference why the commonly-used regression triple-differences estimator does not identify an average of group-time ATTs under unrestricted treatment effect heterogeneity. It highlights how the triple-differences regression can be expressed as an average over 2x2x2 triple-difference terms that contain a “primary" and a “placebo" DiD. Unless treatment effects are constant across all units, time periods and strata, many of these placebo DiDs contain treated observations and will fail to adjust for the bias in the primary DiD even if the triple-differences identifying assumptions hold. Luckily for researchers, alternatives to the triple-differences regression estimator already exist. borusyak2021revisiting note that an estimator based on direct imputation of the counterfactual can be straightforwardly implemented for triple-differences designs using the same three-way fixed effects structure. Fitting the triple-differences regression among only the control observations and imputing the counterfactuals under treatment addresses both of the problems of invalid primary and placebo DiDs shown in the decomposition. Moreover, the decomposition highlights how most of the problems of the triple-differences regression occur when there exist multiple strata that contain both primary and placebo DiDs. If, as is often the case, a researcher implements a triple-difference by appending a stratum known to never receive treatment to a conventional staggered adoption design, no invalid placebos appear in the regression estimator. There are a number of directions for future work in this area. First, the decomposition focuses on the “static" fixed effects regression, but researchers often also estimate “dynamic" regressions with indicators for treatment leads and lags to allow for some treatment effect heterogeneity over time. As such, an extension of the results in sun2021estimating to the triple-differences setting would clarify the extent to which these estimates are contaminated by comparisons that are invalid under unrestricted effect heterogeneity. Second, the present decomposition considers a balanced panel with no missing data while many applied settings – including the trade example in this paper – have substantial amounts of missingness in both treatment and outcome. This affects the weights assigned to each comparison and each ATT in the triple-differences regression. Finally, this paper presently only evaluates the direct counterfactual imputation estimator from borusyak2021revisiting. However, other heterogeneity-robust estimators from the differences-in-differences setting, such as the doubly-robust estimator in callaway2021difference, which incorporates both a treatment and an outcome model, could also be extended to the “many stratum" triple-differences setting.[With only a single primary and a single placebo stratum, the existing difference-in-differences methods are straightforward to adapt simply by redefining the outcome of interest as the difference between the outcomes in the primary and placebo strata.
However, with many placebo strata and differentially staggered treatment, the set of valid placebo comparisons will depend on the particular stratum-specific group-time ATT.] § PROOFS §.§ Proof of Proposition <ref> This proof is a special case of the more general doubly-robust result from callaway2021difference that allows for parallel trends to hold conditional on X. Start by writing the potential outcomes under Consistency/SUTVA (Assumption <ref>). 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] = 𝔼[Y_srt(g) - Y_srt^*(g) | G_sr = g] - 𝔼[Y_srt(g^') - Y_srt^*(g^') | G_sr = g^'] Under no anticipation (Assumption <ref>), since t^* < g, t < g^', and by extension t^* < g^' 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] = 𝔼[Y_srt(g) - Y_srt^*(∞) | G_sr = g] - 𝔼[Y_srt(∞) - Y_srt^*(∞) | G_sr = g^'] Under conditional parallel trends (Assumption <ref>), we have E[Y_srt^*(∞) | G_sr = g] + 𝔼[Y_srt(∞) - Y_srt^*(∞) | G_sr = g^'] = 𝔼[Y_srt(∞) | G_sr = g] Substituting, into the above expression, we have the definition of the ATT_r(g,t) 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] = 𝔼[Y_srt(g) - Y_srt(∞) | G_sr = g] = ATT_r(g,t) §.§ Proof of Proposition <ref> Start with the first difference-in-difference. Under Consistency/SUTVA (Assumption <ref>). 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] = 𝔼[Y_srt(g) - Y_srt^*(g) | G_sr = g] - 𝔼[Y_srt(g^') - Y_srt^*(g^') | G_sr = g^'] Under no anticipation (Assumption <ref>), since t^* < g, t < g^', and by extension t^* < g^' 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] = 𝔼[Y_srt(g) - Y_srt^*(∞) | G_sr = g] - 𝔼[Y_srt(∞) - Y_srt^*(∞) | G_sr = g^'] Under the constant violation of parallel trends assumption (Assumption <ref>) 𝔼[Y_srt^*(∞) | G_sr = g] + 𝔼[Y_srt(∞) - Y_srt^*(∞) | G_sr = g^'] = 𝔼[Y_srt(∞) | G_sr = g] - 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g] + 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g^'] Substituting into the first DiD expression, we have the ATT_r(g,t) plus a bias term 𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^'] = 𝔼[Y_srt(g) - Y_srt(∞) | G_sr = g] + 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g] - 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g^'] Next, for the second-difference in-difference, under Consistency/SUTVA (Assumption <ref>) along with no anticipation (Assumption <ref>), since G_sr^' > t and by extension, G_sr^' > t^* 𝔼[Y_sr^'t - Y_sr^'t^* | G_sr = g, G_sr^' > t ] - 𝔼[Y_sr^'t - Y_sr^'t^* | G_sr = g^', G_sr^' > t] = 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g, G_sr^' > t ] - 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g^', G_sr^' > t] Since we assume constant violation of parallel trends holds for all r, r^', r ≠ r^' 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g, G_sr^' > t ] - 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g^', G_sr^' > t] = 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g] - 𝔼[Y_sr^'t(∞) - Y_sr^'t^*(∞) | G_sr = g^'] Therefore, the second difference-in-difference equals the bias term from the first difference and we have ATT_r(g,t) = {𝔼[Y_srt - Y_srt^* | G_sr = g] - 𝔼[Y_srt - Y_srt^* | G_sr = g^']} - {𝔼[Y_sr^'t - Y_sr^'t^* | G_sr = g, G_sr^' > t ] - 𝔼[Y_sr^'t - Y_sr^'t^* | G_sr = g^', G_sr^' > t]} §.§ Proof of Lemma <ref> By Frisch-Waugh-Lovell, we can write τ̂ as the regression of Y_srt on D̃_srt, the residual from an OLS regression of D_srt on the unit and grouping-time fixed effects. D̃_srt = D_srt - D̂_srt where D̂_srt = α̂̃̂_sr + γ̂̃̂_st + δ̂̃̂_rt and the coefficients are solutions to the OLS minimization problem. 
α̂̃̂, γ̂̃̂, δ̂̃̂ = _α̃, γ̃, δ̃∑_s=1^S ∑_r=1^R ∑_t=1^T (D_srt - α̃_sr - γ̃_st - δ̃_rt)^2 Denote the means D̅_sr = 1/T∑_t^'=1^T D_srt^' D̅_st = 1/R∑_r^'=1^R D_sr^'t D̅_rt = 1/S∑_s^'=1^S D_s^'rt D̅̅̅_s = 1/RT∑_t^'=1^T ∑_r^'=1^R D_sr^'t^' D̅̅̅_r = 1/ST∑_t^'=1^T ∑_s^'=1^S D_s^'rt^' D̅̅̅_t = 1/SR∑_s^'=1^S ∑_r^'=1^R D_s^'rt^' D̅̅̅̅̅̅̅ = 1/SRT∑_s^'=1^S ∑_r^'=1^R ∑_t^'=1^T D_s^'r^'t^' First-order conditions α̂̃̂_sr = 1/T∑_t=1^T D_srt - 1/T∑_t=1^T γ̂̃̂_st - 1/T∑_t=1^T δ̂̃̂_rt γ̂̃̂_st = 1/R∑_r=1^R D_srt - 1/R∑_r=1^R α̂̃̂_sr - 1/R∑_r=1^R δ̂̃̂_rt δ̂̃̂_rt = 1/S∑_s=1^S D_srt - 1/S∑_s=1^S α̂̃̂_sr - 1/S∑_s=1^S γ̂̃̂_st Writing the predicted value D̂_srt = α̂̃̂_̂̃̂ŝ̃̂r̂̃̂ + γ̂̃̂_̂̃̂ŝ̃̂t̂̃̂ + δ̂̃̂_̂̃̂r̂̃̂t̂̃̂ D̂_srt = 1/T∑_t^'=1^T D_srt^' + 1/R∑_r=1^R D_sr^'t + 1/S∑_s=1^S D_s^'rt - 1/T∑_t^'=1^T γ̂̃̂_st^' - 1/T∑_t^'=1^T δ̂̃̂_rt^' - 1/R∑_r^'=1^R α̂̃̂_sr^' - 1/R∑_r^'=1^R δ̂̃̂_r^'t - 1/S∑_s^'=1^S α̂̃̂_s^'r - 1/S∑_s^'=1^S γ̂̃̂_s^'t Substituting D̂_srt = 1/T∑_t^'=1^T D_srt^' + 1/R∑_r=1^R D_sr^'t + 1/S∑_s=1^S D_s^'rt - 1/RT∑_t^'=1^T ∑_r^'=1^R D_sr^'t^' + 1/RT∑_t^'=1^T ∑_r^'=1^R α̂̃̂_sr^' + 1/RT∑_t^'=1^T ∑_r^'=1^R δ̂_r^'t^' - 1/ST∑_t^'=1^T ∑_s^' = 1^S D_s^'rt^' + 1/ST∑_t^'=1^T ∑_s^'=1^S α̂̃̂_s^'r + 1/ST∑_t^'=1^T ∑_s^'=1^S γ̂̃̂_s^'t^' - 1/SR∑_s^'=1^S ∑_r^'=1^R D_s^' r^' t + 1/SR∑_s^'=1^S ∑_r^'=1^R α̂̃̂_s^'r^' + 1/SR∑_s^'=1^S ∑_r^'=1^R δ̂̃̂_r^'t - 1/R∑_r^'=1^R α̂̃̂_sr^' - 1/R∑_r^'=1^R δ̂̃̂_r^'t - 1/S∑_s^'=1^S α̂̃̂_s^'r Re-arranging + cancelling D̂_srt = 1/T∑_t^'=1^T D_srt^' + 1/R∑_r=1^R D_sr^'t + 1/S∑_s=1^S D_s^'rt - 1/RT∑_t^'=1^T ∑_r^'=1^R D_sr^'t^' - 1/ST∑_t^'=1^T ∑_s^' = 1^S D_s^'rt^' - 1/SR∑_s^'=1^S ∑_r^'=1^R D_s^' r^' t + 1/RT∑_t^'=1^T ∑_r^'=1^R δ̂_r^'t^' + 1/ST∑_t^'=1^T ∑_s^'=1^S γ̂̃̂_s^'t^' + 1/SR∑_s^'=1^S ∑_r^'=1^R α̂̃̂_s^'r^' Substituting again D̂_srt = 1/T∑_t^'=1^T D_srt^' + 1/R∑_r=1^R D_sr^'t + 1/S∑_s=1^S D_s^'rt - 1/RT∑_t^'=1^T ∑_r^'=1^R D_sr^'t^' - 1/ST∑_t^'=1^T ∑_s^' = 1^S D_s^'rt^' - 1/SR∑_s^'=1^S ∑_r^'=1^R D_s^' r^' t + 1/RT∑_t^'=1^T ∑_r^'=1^R δ̂_r^'t^' + 1/ST∑_t^'=1^T ∑_s^'=1^S γ̂̃̂_s^'t^' + 1/SRT∑_s^'=1^S ∑_r^'=1^R ∑_t^' = 1^T D_s^'r^'t^' - 1/SRT∑_s^'=1^S ∑_r^'=1^R ∑_t^' = 1^T γ̂̃̂_s^'t^' - 1/SRT∑_s^'=1^S ∑_r^'=1^R ∑_t^' = 1^T δ̂̃̂_r^'t^' Cancelling again yields the residual D̃_srt in terms of averages of D_srt across dimensions s, r and t. D̃_srt = D_srt - D̅_sr - D̅_st - D̅_rt + D̅̅̅_s + D̅̅̅_r + D̅̅̅_t - D̅̅̅̅̅̅̅ Returning to the expression for τ̂ yields τ̂ = ∑_s=1^S ∑_r=1^R ∑_t=1^T Y_srt(D_srt - D̅_sr - D̅_st - D̅_rt + D̅̅̅_s + D̅̅̅_r + D̅̅̅_t - D̅̅̅̅̅̅̅)/∑_s=1^S ∑_r=1^R ∑_t=1^T (D̃_srt)^2 Swapping indices in the numerator τ̂ = ∑_s=1^S ∑_r=1^R ∑_t=1^T D_srt(Y_srt - Y̅_sr - Y̅_st - Y̅_rt + Y̅̅̅_s + Y̅̅̅_r + Y̅̅̅_t - Y̅̅̅̅̅̅̅)/∑_s=1^S ∑_r=1^R ∑_t=1^T (D̃_srt)^2 With a binary treatment, we can re-write the numerator as a sum over t with D_srt = 1. 
τ̂ = ∑_s=1^S ∑_r=1^R ∑_t:D_srt = 1 Y_srt - Y̅_sr - Y̅_st - Y̅_rt + Y̅̅̅_s + Y̅̅̅_r + Y̅̅̅_t - Y̅̅̅̅̅̅̅/∑_s=1^S ∑_r=1^R ∑_t=1^T (D̃_srt)^2 §.§ Proof of Theorem <ref> Start from Lemma <ref> τ̂ = ∑_s=1^S ∑_r=1^R ∑_t=1^D D_srt(Y_srt - Y̅_sr - Y̅_st - Y̅_rt + Y̅̅̅_s + Y̅̅̅_r + Y̅̅̅_t - Y̅̅̅̅̅̅̅)/∑_s=1^S ∑_r=1^R ∑_t=1^T (D̃_srt)^2 Define N^(1) N^(1)_sr = ∑_t^'=1^T D_srt^' N^(1)_st = ∑_r^'=1^R D_sr^'t N^(1)_rt = ∑_s^'=1^S D_s^'rt N^(1)_s = ∑_t^'=1^T ∑_r^' = 1^R D_sr^'t^' N^(1)_r = ∑_t^'=1^T ∑_s^' = 1^S D_s^'rt^' N^(1)_t = ∑_s^' = 1^S ∑_r^'=1^R D_s^'r^'t N^(1) = ∑_s^'=1^S ∑_r^' = 1^R ∑_t^'=1^T D_s^'r^'t^' Define N^(0) analogously for the control units: N^(0) = ∑_s^'=1^S ∑_t^'=1^T ∑_r^' = 1^R (1 - D_s^'r^'t^') Expanding the sum and simplifying, we can write the denominator as ∑_s=1^S ∑_r=1^R ∑_t=1^T (D̃_srt)^2 = N^(1) - 1/T∑_s=1^S∑_r=1^R (N_sr^(1))^2 - 1/R∑_s=1^S∑_t=1^T (N_st^(1))^2 - 1/S∑_r=1^R∑_t=1^T (N_rt^(1))^2 + 1/SR∑_t=1^T (N_t^(1))^2 + 1/ST∑_r=1^R (N_r^(1))^2 + 1/RT∑_s=1^S (N_s^(1))^2 -(N^(1))^2/SRT Re-write the numerator in terms of the double sums ∑_r=1^R ∑_s=1^S ∑_t=1^T [ Y_srtD_srt - 1/T∑_t^'=1^T Y_srt^'D_srt - 1/S∑_s^'=1^S Y_s^'rtD_srt + 1/ST∑_s^'=1^S ∑_t^'=1^T Y_s^'rt^'D_srt] - 1/R∑_r^'=1^R [ Y_sr^'tD_srt - 1/T∑_t^'=1^T Y_sr^'t^'D_srt - 1/S∑_s^'=1^S Y_s^'r^'tD_srt + 1/ST∑_s^'=1^S ∑_t^'=1^T Y_s^'r^'t^'D_srt] Rearranging the sums and cancelling 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt[(Y_srt - Y_srt^' - Y_s^'rt + Y_s^'rt^') - (Y_sr^'t - Y_sr^'t^' - Y_s^'r^'t + Y_s^'r^'t^')] Treatment is binary, so ∑ D_srt = ∑ D_srtD_s^'rt + ∑ D_srt(1-D_s^'r t) Next, denote the difference-in-difference Ỹ_srt^(s^'t^') = Y_srt - Y_s^'rt - Y_srt^' + Y_s^'rt^' Note that swapping any one index s, t or r (e.g. s for s^') will flip the sign of the difference-in-differences as well as the triple-difference term Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^'). Split the sum using D_srt^', D_s^'rt and D_s^'rt^' 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srtD_srt^'D_s^'rt D_s^'rt^'[ Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srtD_srt^'D_s^'rt(1 - D_s^'rt^')[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srtD_srt^'(1 - D_s^'rt)D_s^'rt^'[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt(1 - D_srt^')D_s^'rt D_s^'rt^'[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt(1 - D_srt^')D_s^'rt(1 - D_s^'rt^')[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srtD_srt^'(1 - D_s^'rt)(1 - D_s^'rt^')[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt(1 - D_srt^')(1 - D_s^'rt) D_s^'rt^'[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] The first term can be shown to be zero as all Y_srt and Y_sr^'t will cancel out. Second and fourth terms cancel as we can swap indices s and s^' to get the other. Fifth and sixth terms also cancel as we can swap indices t and t^'. Under staggered adoption, the seventh term is zero as D_srt(1 - D_srt^')(1 - D_s^'rt) D_s^'rt^' is never equal to 1. This leaves the third and the eighth terms. 
The latter corresponds to the “valid" difference-in-difference (where all comparisons are with control units) while the former is an “invalid" difference-in-difference as the second difference will involve a future period t^' > t. 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r D_srtD_srt^'(1 - D_s^'rt)D_s^'rt^'[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] Splitting again on D_sr^'t, D_s^'r^'t, D_sr^'t^' and D_s^'r^'t^' and suppressing the six sums for space: [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(1-D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(1-D_s^'r^'t)(1-D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(1-D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(D_s^'r^'t)(D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + Row 2 cancels with itself as does row 5. Row 4 is ruled out by staggered adoption. Similar argument for rows 12-14. 
Staggered adoption eliminates rows 8 and 9 [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(1-D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(1-D_s^'r^'t)(D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(D_s^'r^'t)(D_sr^'t^')(1-D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(D_sr^'t)(D_s^'r^'t)(D_sr^'t^')(D_s^'r^'t^')][Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] + Applying the same to the terms with D_srtD_srt^'(1 - D_s^'rt)D_s^'rt^' and collecting some terms yields an expression for the numerator. Define D_srt^(0)≡ 1-D_srt. 1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r[Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')] ×[D_srtD_s^'rt^(0)D_srt^'^(0)D_s^'rt^'^(0) + D_srtD_s^'rt^(0)D_srt^'D_s^'rt^'] × [D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^' + D_sr^'t^(0)D_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^' + D_sr^'tD_s^'r^'tD_sr^'t^'D_s^'r^'t^' + D_sr^'tD_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'tD_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'^(0)] + [ Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')]×[D_srtD_s^'rt^(0)D_srt^'^(0)D_s^'rt^'^(0)] ×[ D_sr^'t^(0)D_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'tD_s^'r^'tD_sr^'t^'D_s^'r^'t^'^(0)] + [Ỹ_srt^(s^'t^') - Ỹ_sr^' t^(s^'t^')]×[D_srtD_s^'rt^(0)D_srt^'D_s^'rt^'] ×[ D_sr^'t^(0)D_s^'r^'tD_sr^'t^'D_s^'r^'t^' + D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'^(0)] Multiplying numerator and denominator by SRT yields the final expression for τ. It is helpful to consider an alternate decomposition in which some of the triple-differences terms can be re-written as differences-in-differences as the same difference-in-difference comparison acts as both a “primary" and a “placebo" [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][(Y_srt - Y_s^'rt - Y_srt^' + Y_s^'rt^')] + [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][-(Y_sr^'t - Y_s^'r^'t - Y_sr^'t^' + Y_s^'r^'t^')] Swapping indices s and r in the second expression yields [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][(Y_srt - Y_s^'rt - Y_srt^' + Y_s^'rt^')] + [(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][-(Y_s^'rt - Y_srt - Y_s^'rt^' + Y_srt^')] = 2 [D_srt(1 - D_srt^')(1 - D_s^'rt) (1 - D_s^'rt^')][(1-D_sr^'t)(D_s^'r^'t)(1-D_sr^'t^')(1-D_s^'r^'t^')][(Y_srt - Y_s^'rt - Y_srt^' + Y_s^'rt^')] Applying the same to all four of these triple difference terms gives an expression for the numerator in terms of a sum over both 2x2x2 and 2x2 terms. 
1/SRT∑_r=1^R ∑_s=1^S ∑_t=1^T ∑_s^'=1^S ∑_t^'=1^T ∑_r^'≠ r[DiDiD_srt^(s^' r^' t^')] ×[D_srtD_s^'rt^(0)D_srt^'^(0)D_s^'rt^'^(0) + D_srtD_s^'rt^(0)D_srt^'D_s^'rt^'] × [D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^' + D_sr^'t^(0)D_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^' + D_sr^'tD_s^'r^'tD_sr^'t^'D_s^'r^'t^' + D_sr^'tD_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'tD_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'^(0)] + 2 ×[ Ỹ_srt^(s^'t^')]×[D_srtD_s^'rt^(0)D_srt^'^(0)D_s^'rt^'^(0)] ×[ D_sr^'t^(0)D_s^'r^'tD_sr^'t^'^(0)D_s^'r^'t^'^(0) + D_sr^'tD_s^'r^'tD_sr^'t^'D_s^'r^'t^'^(0)] + 2 ×[ Ỹ_srt^(s^'t^')]×[D_srtD_s^'rt^(0)D_srt^'D_s^'rt^'] ×[ D_sr^'t^(0)D_s^'r^'tD_sr^'t^'D_s^'r^'t^' + D_sr^'t^(0)D_s^'r^'t^(0)D_sr^'t^'D_s^'r^'t^'^(0)] § TWO-WAY CLUSTERING This section provides alternative versions of the replication results from Section <ref> for the effects of the WTO/GATT on trade to address critiques from aronow2015cluster and carlson2021dyadic regarding the improper use of clustered standard errors in studies involving dyads. Most papers in this literature cluster standard errors on the dyad. However, aronow2015cluster notes that this likely underestimates the degree of uncertainty in parameter estimates. To implement this via a bootstrap approach that is compatible with the imputation estimator recommended in the main text, I use the “pigeonhole" Bayesian bootstrap outlined in owen2012bootstrapping. For each bootstrap iteration, I sample a weight for each sender and each receiver independently from an Exponential(1) distribution. Then, I assign a weight to each dyad using the product of the sampled sender and receiver weights, fit a weighted fixed effects regression, and generate the imputed treatment effect estimates. I estimate the standard error using the standard deviation of these bootstrapped estimates and construct 95% normal confidence intervals. In general, the results suggest markedly greater uncertainty in effect estimates than acknowledged in the existing literature. Nevertheless, I still find some evidence for failed pre-treatment placebo tests. Moreover, I find considerably large uncertainty in estimates for the effect of WTO/GATT participation, which is likely due to the scarcity of control units on which to fit the imputation model.
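To make the resampling procedure concrete, the sketch below illustrates one iteration of the pigeonhole Bayesian bootstrap described above: Exponential(1) weights are drawn per sender and per receiver, each dyad receives the product of its two weights, a weighted regression is fit, and the bootstrap standard deviation is used for a 95% normal confidence interval. This is a simplified illustration under stated assumptions: the weighted regression here is plain weighted least squares on a toy design matrix with only a treatment indicator and an intercept, standing in for the full weighted fixed-effects/imputation step, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_ols(X, y, w):
    """Weighted least squares: argmin_b sum_i w_i (y_i - x_i'b)^2."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

def pigeonhole_bootstrap_se(X, y, sender, receiver, n_boot=200):
    """Dyadic (two-way) Bayesian bootstrap: Exponential(1) weights per sender
    and per receiver; each dyad is weighted by the product of the two draws."""
    senders, receivers = np.unique(sender), np.unique(receiver)
    estimates = []
    for _ in range(n_boot):
        ws = dict(zip(senders, rng.exponential(1.0, senders.size)))
        wr = dict(zip(receivers, rng.exponential(1.0, receivers.size)))
        w = np.array([ws[s] * wr[r] for s, r in zip(sender, receiver)])
        estimates.append(weighted_ols(X, y, w)[0])  # coefficient of interest
    return np.asarray(estimates).std(ddof=1)

# toy illustration: 20 senders x 20 receivers, treatment effect of 0.5
S = R = 20
sender, receiver = np.meshgrid(np.arange(S), np.arange(R), indexing="ij")
sender, receiver = sender.ravel(), receiver.ravel()
d = rng.binomial(1, 0.3, sender.size).astype(float)
X = np.column_stack([d, np.ones_like(d)])   # treatment + intercept only, for brevity
y = 0.5 * d + rng.normal(size=d.size)
point = weighted_ols(X, y, np.ones_like(y))[0]
se = pigeonhole_bootstrap_se(X, y, sender, receiver)
print(point, point - 1.96 * se, point + 1.96 * se)  # estimate and 95% normal CI
```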
http://arxiv.org/abs/2307.02267v1
20230705130525
Probing neutrino production in blazars by millimeter VLBI
[ "Y. Y. Kovalev", "A. V. Plavin", "A. B. Pushkarev", "S. V. Troitsky" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
§ INTRODUCTION: CURRENT STATUS OF HIGH-ENERGY NEUTRINO STUDIES, BLAZARS — NEUTRINO CONNECTION Neutrino observatories, such as IceCube, ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental RESearch project), and Baikal-GVD (Gigaton Volume Detector) have been convincingly detecting astrophysical neutrinos at TeV to PeV energies <cit.>. Despite these observations, little was known about the origin of energetic astrophysical neutrinos until recently. Blazars, a class of active galactic nuclei (AGN), have been considered as potential neutrino sources since the very early days of multi-messenger astronomy <cit.>. Observational evidence for a blazar-neutrino connection started to emerge in recent years. First, the blazar TXS 0506+056 was associated with a high-energy neutrino, which coincided with a gamma-ray flare in 2017 <cit.>. This association was in contrast with a lack of systematic connection between gamma-ray-loud blazars and neutrinos (see, e.g. <cit.>). Then, numerous radio-bright blazars were shown to emit neutrinos with energies from TeVs to PeVs <cit.>. The detection of this correlation is driven by the unique capabilities of very-long-baseline interferometry (VLBI): this is the only technique able to directly probe and resolve central (sub)parsecs in AGNs at cosmological distances. Blazars emit neutrinos preferentially at the times of their flares (<ref>) visible in radio bands <cit.>. Still, the neutrino production mechanism and the physical regions where it occurs remain unclear. The observed connection of neutrinos with radio emission from compact jet regions emphasizes the importance of high-resolution studies in answering these questions. VLBI is the best direct visual evidence we can obtain in astronomy. For a general discussion of multi-wavelength and multi-messenger studies with the ngEHT, see <cit.>. In this paper, we present the progress in the multi-messenger astronomy studies of cosmic neutrinos, their probable association with blazars, challenges and a critical role to be played by ngEHT <cit.> in addressing exciting open questions of high-energy neutrino production. § NEUTRINO PRODUCTION IN BLAZARS: OPEN QUESTIONS Assuming no particle physics beyond the Standard Model, astrophysical neutrinos with energies above TeV can be produced only in interactions of relativistic hadrons — protons or nuclei — with ambient matter or radiation, see, e.g. <cit.> for a recent review. This fits well the observational evidence discussed in <ref> because the nonthermal radiation of blazars gives a clear signal that particles are accelerated there. However, both the amount of relativistic hadrons in AGN, and the degree to which these hadrons contribute to the observed electromagnetic radiation, are uncertain. Population studies suggest <cit.> that their contribution is small, and neutrino luminosities of blazars are orders of magnitude lower than photon luminosities. Consequently, one may imagine neutrino production in various places in a blazar and by means of different mechanisms. The main challenge is to explain the production of neutrinos of very different energies, from a few TeV <cit.> to sub-PeV <cit.>, in sources of the same class. For the pγ mechanism, expected to dominate in blazars <cit.>, the wide neutrino energy range requires the presence of target photons with a very broad distribution of energies. Conventional models of high-energy neutrino production in AGN, known for decades, e.g. <cit.>, as well as their modern versions, e.g. 
<cit.>, often experience problems in explaining the lower-energy part of the observed neutrino flux, in particular because the target photons from the accretion disk are expected to have energies ∼ (10 … 100) eV, while ∼ 10 keV are required for intense production of ∼ 10-TeV neutrinos. While neutrinos have already been associated with VLBI-bright blazars <cit.> and with their radio flares <cit.>, these results were based on observations at centimeter wavelengths. There, synchrotron self-absorption prevents one from detailed spatio-temporal studies of the AGN central sub-parsec parts <cit.>. To summarize, the open questions of the blazar-neutrino astrophysics are the following: (i) how are protons accelerated, (ii) what is the neutrino production process, pγ or pp, (iii) from where do seed (X-ray) photons originate in case of pγ, (iv) where are neutrinos produced? Note that (ii) and (iv) can be different, and multizone models may be required to explain all observations consistently. § NEUTRINO ASTRONOMY IN THE NGEHT ERA Currently, studies of high-energy astrophysical neutrinos and their sources are limited by the sensitivity and resolution of neutrino observatories. The situation is rapidly changing, as their capabilities are increasingly improving. The next-generation IceCube-Gen2 will grow the telescope volume tenfold, from 1 to 10 cubic kilometers, aiming at a corresponding increase in detection rates by 2033 <cit.>. The Baikal-GVD detector has already reached the effective volume of 0.5 cubic kilometer and continues to grow and improve event reconstruction algorithms <cit.>. KM3Net (Cubic Kilometre Neutrino Telescope), a neutrino observatory in the Mediterranean, is being constructed and has already started yielding its first results <cit.>. Together, these instruments will provide a qualitative leap in both the number of detected astrophysical neutrinos and in the precision of their localization. An increasing number of well-localized neutrinos will lead to a reliable identification of individual blazars as neutrino sources. Moreover, it should be possible to highlight specific time periods with more prominent neutrino emission. This brings new challenges and opportunities to the EM counterpart of such multi-messenger studies. The planned ngEHT array <cit.> will provide superior angular resolution, dynamic range and sensitivity in Stokes I and polarization at 3, 1.3, and 0.7 mm. This will allow scientists to observe and monitor the central (sub)parsecs of neutrino-emitting blazars at the highest resolution and frequency possible, significantly alleviating the synchrotron opacity problem of the current centimeter-wavelength VLBI. The ngEHT will be able to probe both the accretion disk region and the parsec-scale jet base, opening new opportunities for connecting the two regions and unraveling the proton acceleration and neutrino production in blazars. § PLANNING NGEHT EXPERIMENT Below, we discuss several approaches to study and understand the physics behind the connection between neutrino production and EM activity from the jet upstream to the vicinity of the central engine — a possibility which will be realized by ngEHT. Before elaborating on observing campaigns, we note the following important complications of neutrino astrophysics that affect suggested scenarios below. A typical probability of a neutrino with an energy above 100 TeV to be of an astrophysical origin is around 50%, and it drops significantly for lower energies <cit.>. 
A typical 90% error region of a highly probable high-energy neutrino is several square degrees <cit.>. Some neutrinos might arrive from nearby non-jetted AGNs <cit.> or even from our Galaxy and its relativistic objects <cit.>. On top of that, we know very little about mechanisms of neutrino production in blazars — so there is no streetlight under which we can plan our search. We expect that a variety of blazars would be associated with neutrinos and allow us to select optimal ngEHT targets taking both physical properties and technical or observational limitations into account. Within the current understanding and the experience accumulated from observational searches for high-energy neutrino counterparts, the following three scenarios for monitoring observations are suggested. Scenario 1: Observations of blazars associated with selected new high-energy neutrino alerts immediately after neutrino arrival. Up to several blazar-associated high-energy alerts per year are expected. When two or three neutrino telescopes become fully operational, one might conservatively require two alerts for a given target to arrive within several days. Pros: most efficient since linked to a specific event. Cons: will only be able to probe the state of an associated object after neutrino arrival. Scenario 2: Observations of a sample of selected blazars reliably identified previously as neutrino sources. See Table <ref> with most probable neutrino candidates to date. Pros: optimal in terms of observed sample and complete temporal coverage of events. Cons: so far, a very limited number of cases is known with repeated neutrino detection from the same source (Table <ref>, column 5), but their list can grow. Scenario 3: Observations of a complete VLBI-flux-density limited sample of 50-100 brightest blazars with 3 mm VLBI flux density above 1 Jy <cit.>. Pros: full temporal coverage of expected events, possibility to compare neutrino-emitting and neutrino-non-emitting blazars calculating robust significance of a coincidence <cit.>; an option to combine such observations with other ngEHT science cases <cit.>. Cons: observationally expensive. Tracing changes in the compact structure of blazars during and around periods of increased neutrino emission requires multi-epoch monitoring at a high resolution provided by ngEHT. To roughly estimate required observing time, we expect that one imaging epoch per target takes 4-8 hours. The observations should happen with a cadence between two weeks and one month (an estimate based on experience gained by the 7 mm blazar VLBA monitoring program <cit.>) and produce polarization images with Stokes I dynamic range about or better than 1000:1, preferably multi-frequency with a possibility for Faraday rotation measure (RM) and spectral analysis. From this, we will be able to constrain the following source properties. * Jet kinematics measurements will allow us to better estimate Doppler boosting and jet viewing angle following <cit.>, constrain plasma acceleration <cit.>. Jet geometry profile studies will constrain jet formation and collimation <cit.>. * Jet kinematics will also deliver information about newborn jet features <cit.>, measure ejection epochs of features possibly associated with neutrino events, compare those with neutrino arrival time and locate the neutrino production zone from the measured delay. Compare with similar analysis for VLBI-γ-ray studies <cit.>. 
* Faraday RM, reconstructed EVPAs and analysis of the radio spectrum, together with core-shift measurements, will deliver information on the magnetic field structure, its strength and its changes <cit.>, which might be related to the physical conditions required for neutrino production. * Monitoring overall changes in the millimeter parsec- and sub-parsec-scale structure of blazars at the extreme resolution of the ngEHT will allow us to distinguish between flares in disks and in jets <cit.> related to neutrino production, provided resolution, sensitivity, and opacity permit. Observing in this regime, we will be able to overcome the significant delays related to synchrotron self-absorption at lower radio frequencies (see <ref> as well as <cit.>). We underline that studying a complete sample of AGN with well-understood properties will allow us not only to relate the observed changes to detected neutrinos but also to place a robust significance on that association, following the approach suggested by <cit.>. § SYNERGY WITH OTHER FACILITIES The Square Kilometer Array <cit.> and especially the next generation Very Large Array <cit.>, reaching as high as 100 GHz, will allow for monitoring of much larger samples of VLBI-selected AGN as well as fast imaging of neutrino arrival fields, pre-selecting the most probable neutrino candidates for targeted ngEHT studies. Wide-field telescopes like the optical Legacy Survey of Space and Time <cit.> will allow scientists to associate blazars with neutrinos much more reliably if flaring activity is confirmed as a valid indicator <cit.>. Moreover, optical and UV telescopes can separate flares happening in jets and accretion disks by analyzing optical color and polarization. Seed photons are expected in the X-ray band <cit.>, which is where current and next-generation space X-ray telescopes will be very helpful. Observations at high energies <cit.> will continue supporting the gamma-ray/TeV – neutrino analysis and will allow checking whether the neutrino production zone is actually opaque to gamma-rays. § SUMMARY The ngEHT will revolutionize VLBI imaging capabilities by bringing together superior resolution, advanced dynamic range, and sensitive polarization data. What makes it unique, however, is its remarkable immunity to synchrotron absorption. It will allow probing regions all the way from the accretion disk to the parsec-scale jet <cit.> and studying blazars, the most probable sources of high-energy neutrinos. By the time of ngEHT operations, three large high-energy neutrino telescopes will be fully functional: IceCube, KM3NeT, and Baikal-GVD. This paper formulates the science case, presents the eight most probable associations to date, and suggests observational strategies to address the exciting and wide-open questions of proton acceleration and neutrino production. This publication is funded in part by the Gordon and Betty Moore Foundation. It was also made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of these Foundations. We thank the ngEHT team for discussions, Eduardo Ros as well as anonymous referees for useful comments on the manuscript, and Elena Bazanova for language editing. The VLBA is an instrument of the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated by Associated Universities, Inc. The authors declare no conflict of interest.
http://arxiv.org/abs/2307.00868v1
20230703090847
MADS: Modulated Auto-Decoding SIREN for time series imputation
[ "Tom Bamford", "Elizabeth Fons", "Yousef El-Laham", "Svitlana Vyetrenko" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Time series imputation remains a significant challenge across many fields due to the potentially significant variability in the type of data being modelled. Whilst traditional imputation methods often impose strong assumptions on the underlying data generation process, limiting their applicability, researchers have recently begun to investigate the potential of deep learning for this task, inspired by the strong performance shown by these models in both classification and regression problems across a range of applications. In this work we propose MADS, a novel auto-decoding framework for time series imputation, built upon implicit neural representations. Our method leverages the capabilities of SIRENs for high-fidelity reconstruction of signals and irregular data, and combines them with a hypernetwork architecture which allows us to generalise by learning a prior over the space of time series. We evaluate our model on two real-world datasets, and show that it outperforms state-of-the-art methods for time series imputation. On the human activity dataset, it improves imputation performance by at least 40%, while on the air quality dataset it is shown to be competitive across all metrics. When evaluated on synthetic data, our model achieves the best average rank across different dataset configurations over all baselines. § INTRODUCTION Time series imputation, and related tasks such as forecasting and generation, remain of significant interest in fields as diverse as finance, climate modelling and healthcare. Traditional approaches, such as averaging and regression, are typically over-simplistic and fail to adequately capture the underlying behaviour. The development of more modern methods, including iterative imputation and maximum likelihood routines, has increased the sophistication of the algorithms and improved performance, but the underlying assumptions often required by such approaches lead to implicit biases that can be detrimental in more complex cases (e.g. see <cit.> or <cit.>). Implicit Neural Representations (INRs) have recently been shown to achieve state-of-the-art performance in a variety of tasks, including shape representation, encoding of object appearance, part-level semantic segmentation and kernel representation (e.g. <cit.>, <cit.>, <cit.>, <cit.>). In particular, novel INR modifications such as periodic activations in SIREN (<cit.>) and positional encodings in NeRFs (<cit.>) are able to overcome the spectral bias that traditional neural networks tend to suffer from. Additionally, the grid-free learning approach compatible with INRs allows them to work well even for irregularly sampled or missing data. In this work, we propose a novel method for multivariate time series imputation via our Modulated Auto-Decoding SIREN (MADS). MADS utilises the capabilities of SIRENs for high-fidelity reconstruction of signals and irregular data handling. Our SIREN parameterizations are combined with hypernetworks in order to learn a prior over the space of time series. Our contributions are summarised as follows: * We present Modulated Auto-Decoding SIREN (MADS) for time series imputation. MADS utilises a novel combination of amplitude modulation and SIREN weight prediction to allow notably robust performance across data regimes.
* We propose a unique `fixed' formulation for amplitude modulation to aid generalisation across the dataset. * We evaluate our model on two real-world datasets, Air Quality and Human Activity Recognition and on multiple toy datasets that provide extensive coverage of different data regimes. Experimental results show that our model outperforms state-of-the-art models for time series imputation. § RELATED WORK Implicit Neural Representations Implicit Neural Representations (INRs), also referred as neural fields, allow a continuous representation of multidimensional data by encoding a functional relationship between the input coordinates and their corresponding signal value. One of the main advantages of this representation is that the signal is encoded in a grid-free representation, providing an intrinsic non-linear interpolation of the data. One of the first applications of INRs was presented in DeepSDF for shape representation by a continuous volumetric field <cit.>. DeepSDF is capable of representing an entire class of shapes through the use of an auto-decoder, and showed that a major advantage of the method is that inference can be performed with an arbitrary number of samples. This is particularly relevant to our case of time series imputation where the time series can be irregularly sampled and the number of samples can vary. More recently, INRs have gained popularity in visual computing (<cit.>, <cit.>) due to the key development of periodic activations <cit.> and positional encodings <cit.>, which allow them to learn high-frequency details within the data. Nevertheless, these methods can be computationally expensive or can have limited generalization capability as the complexity of the data increases (even with the use of hypernetworks). <cit.> attempted to leverage periodic activations (SIREN), and their ability to reconstruct high frequency signals, whilst retaining generalisation capabilities, through the addition of a modulation network in the model architecture. The modulator is an additional MLP leveraged for generalisation, which consists of an identical internal structure to the SIREN (excepting the choice of activation function), such that each node output of the modulator can be matched up to a node in the SIREN, and element-wise multiplication carried out. Whilst there has been extensive uses of INRs in a wide variety of data sources such as video, images and audio, representation of 3D scene data, such as 3D geometry (<cit.>, <cit.>, <cit.>) and object appearance (<cit.>, <cit.>), few works have leveraged them for time series representation, with <cit.> using INRs for anomaly detection and <cit.> proposing INRs in combination with meta-learning for time series forecasting. Closest to our work, HyperTime <cit.> implements a similar hypernetwork+SIREN architecture, but leverages it for time series generation. Time series imputation Early time series imputation methods, which rely on basic statistical approaches, aim to leverage both the local continuity of the time dimension and the correlations among various channels. For example, the SimpleMean/SimpleMedian method <cit.>, replaces missing values by taking the mean or median. The KNN (k-nearest neighbors) method <cit.>, uses cross-channel data to fill gaps with the help of k-nearest neighbours. 
In addition to the straightforward and standard interpolation techniques that utilise polynomial curve fitting, prevalent strategies focus on using established forecasting methods and drawing on the similarities between various time series to replace missing data points. For example, some approaches rely on k-nearest neighbours (<cit.>, <cit.>), the expectation-maximization algorithm (<cit.>, <cit.>) or linear predictors and state-space models (<cit.>, <cit.>). Previous work has shown that deep learning models are able to capture the temporal dependency of time series, giving more accurate imputation than statistical methods. Common approaches use RNNs for sequence modelling (<cit.>, <cit.>, <cit.>, <cit.>). Subsequent studies combined RNNs with other methods in order to improve imputation performance, such as GANs and self-training. In particular, the combination of RNNs with attention mechanisms have been successful for imputation and interpolation of time series. Whilst most time series imputation methods have focused on deterministic imputation, recently probabilistic methods have been developed, such as GP-VAE <cit.> and CSDI <cit.>. § METHOD §.§ Problem Formulation In this work we aim to solve the problem of in-filling missing data via our proposed MADS model. We represent a general multivariate time series by a matrix of values 𝐗 = (𝐱_0, ..., 𝐱_N)^T∈ℛ^N,D sampled at timesteps 𝐓 = (t_0, ..., t_N), where D is the dimensionality of the series. Each column of this matrix therefore represents an individual time series corresponding to a feature of the original. Note that in general the timesteps are not regularly spaced. We can then define a corresponding mask matrix M∈{0×1}^N,D, in which an element M_i,j=0 if the corresponding element in 𝐗 is missing. In general there may be no pattern to these missing values, and all feature values at a given timestep could be missing. §.§ MADS The architecture utilised by our model is shown in Fig. <ref>. It consists of three networks: the foundational SIREN, a hypernet (<cit.>), and a modulator. The hypernet takes in a latent code corresponding to a given time series, and outputs a set of network weights. These weights map to those inside the SIREN, and thus a distinct INR is instantiated for each individual time series from which imputation is carried out. Rather than utilise an encoding network to calculate the latent vectors, MADS follows the auto-decoding setup of DeepSDF (<cit.>), which was found to be similarly capable without the additional network overhead. In this setup, the latent values are treated as variables during training (and so backpropagated), and then optimised again during inference for a given time series, when all other network variables are fixed. The third network is used for amplitude modulation, which is similar to that proposed in <cit.>. Such a network was shown to improve generalisation performance significantly, relative to traditional auto-decoding or hypernetwork-based SIREN models. For a given sample, the modulator varies sine activation amplitudes within SIREN, allowing certain frequency modes to be nullified. Whilst the original model was applied to image-based data, and tasks such as up-sampling and image relighting, the applicability to time series and imputation in particular is clear. In this work, a modulation network is leveraged alongside the hypernet. 
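A minimal sketch of the resulting computation, under the assumption of a single hidden SIREN layer and a scalar time input, is given below; layer widths and variable names are illustrative rather than the exact configuration used in the experiments. The hypernetwork maps one latent code to the SIREN weights, while a ReLU modulator maps a second latent code (shared across the dataset in the `fixed' variant) to per-unit amplitudes that multiply the sine activations element-wise; the latent codes themselves are trainable, in keeping with the auto-decoding setup.

```python
import torch
import torch.nn as nn

class ModulatedSIREN(nn.Module):
    """One hidden SIREN layer whose weights are predicted by a hypernetwork and
    whose sine activations are scaled element-wise by a modulator (illustrative sizes)."""
    def __init__(self, latent_dim=40, hidden=60, out_dim=1):
        super().__init__()
        # hypernetwork: latent code z_w -> flattened SIREN weights and biases
        n_params = hidden + hidden + hidden * out_dim + out_dim
        self.hyper = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                   nn.Linear(128, n_params))
        # modulator: latent code z_m -> per-unit amplitudes for the hidden layer
        self.modulator = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.hidden, self.out_dim = hidden, out_dim

    def forward(self, t, z_w, z_m):
        # t: (batch, n_timesteps, 1); z_w, z_m: (batch, latent_dim)
        h = self.hidden
        params = self.hyper(z_w)
        W1, b1 = params[:, :h], params[:, h:2 * h]
        W2 = params[:, 2 * h:2 * h + h * self.out_dim].view(-1, h, self.out_dim)
        b2 = params[:, -self.out_dim:]
        alpha = self.modulator(z_m)                                   # (batch, hidden)
        # h_1 = alpha ⊙ sin(W_1 t + b_1), applied per timestep
        h1 = alpha.unsqueeze(1) * torch.sin(t * W1.unsqueeze(1) + b1.unsqueeze(1))
        return torch.bmm(h1, W2) + b2.unsqueeze(1)                    # predicted values

# auto-decoding: latent codes are trainable tensors optimised alongside the networks
model = ModulatedSIREN()
z_w = nn.Parameter(0.01 * torch.randn(8, 40))   # one code per series (hypernet input)
z_m = nn.Parameter(0.01 * torch.randn(8, 40))   # shared across series in the 'fixed' variant
t = torch.linspace(-1, 1, 50).view(1, -1, 1).repeat(8, 1, 1)
y_hat = model(t, z_w, z_m)                      # (8, 50, 1)
```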
For a given time series this allows both SIREN weights and activation amplitudes to be varied, with the modulation network applying the weights through an element-wise mapping to the SIREN. Defining W and 𝐛 as SIREN network weights and biases respectively, the output of the i'th hidden SIREN layer h_i = sin(𝐖_i h_i-1+𝐛_i) , becomes h_i = α_i⊙sin(𝐖_i h_i-1+𝐛_i) , such that the Hadamard product is applied between modulator hidden layer outputs α_i and the periodic activation function of SIREN. The modulation network is therefore constrained to having the same structure as the SIREN. We investigate two formulations of MADS. The first, `base', formulation follows <cit.>; the modulator takes in the same latent representation as that input into the hypernet, so that the modulated amplitudes vary with each individual time series. Since the modulator is introduced to limit overfitting, this can lead to amplitude variations for each time series that are too unconstrained. We therefore propose a more robust formulation that learns the frequency modes of the entire dataset rather than individual time series. In this setup, a distinct latent space is input into the modulator. As before, the latent variable values are learnt during training, but in this case no optimisation is carried out before inference, such that the latent values are shared across the full dataset. This is the `fixed' formulation. To summarise, MADS constitutes the following features: * SIREN - functional representation of a given time series, taking an input timepoint and outputting corresponding amplitudes * Hypernet - takes in a latent vector representation of a time series and outputs a set of network weights used to instantiate the SIREN * Modulator - takes in a latent vector representation of a time series/dataset and modulates the sine activation amplitudes of SIREN through element-wise multiplication * Auto-decoding structure - treat latent variables as trainable parameters, and re-optimise during inference of a particular time series § EXPERIMENTS In this section we present the experiments used for model evaluation. We give a summary of the datasets used (section <ref>), briefly review the benchmarking models, and describe the metrics used for comparison (section <ref>). We also give the implementation details for MADS. We then go on to present our results and discuss their relevance (sections <ref> and <ref>). §.§ Datasets §.§.§ Real World Datasets We evaluate our model on two real world, multivariate datasets. The first one is the Human Activity dataset (HAR), which consists of three-dimensional spatial data collected from 5 people whilst carrying out a range of activities (sitting, walking, standing etc.). Four sensors were attached to each person - on their chest, belt and both ankles - giving 12 features in total. We follow the pre-processing scheme outlined in <cit.> and <cit.>, giving 6554 time series. Following previous works <cit.>, <cit.>, a specified fraction of known values are selected randomly for removal, acting as ground truths for imputation. The missing data are selected randomly from each time series and the missing data fraction is user-specified; we evaluate two regimes of data with low/high missing data fractions, set here to 0.3 and 0.7 respectively. We use the same randomly chosen missing timesteps for imputation across all features and time series. In addition, the dataset is split randomly into a train/test pair, with 20% of the time series being designated to the test set. 
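As a concrete illustration of the masking scheme just described (the same randomly chosen timesteps are removed across all features of a series and held out as imputation ground truth), a possible sketch is given below; the array names and the helper function are illustrative, not taken from the paper's code.

```python
import numpy as np

def make_missing_mask(n_timesteps, n_features, missing_frac, seed=0):
    """Return M in {0,1}^(N,D): M[i, j] = 0 marks a value held out as imputation
    ground truth. The same timesteps are masked across all features."""
    rng = np.random.default_rng(seed)
    n_missing = int(round(missing_frac * n_timesteps))
    missing_t = rng.choice(n_timesteps, size=n_missing, replace=False)
    M = np.ones((n_timesteps, n_features), dtype=int)
    M[missing_t, :] = 0
    return M

# e.g. the low-missingness Human Activity setting: 50 timesteps, 12 features, 30% removed
M = make_missing_mask(n_timesteps=50, n_features=12, missing_frac=0.3)
# training/inference sees X * M; entries with M == 0 are used only for evaluation
```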
Note that each time series consists of 50 timesteps after pre-processing. The second real world dataset we use is the Air Quality dataset, which consists of PM2.5 measurements taken from 36 measuring stations in Beijing, over a period of 12 months. The measurements were collected every hour. We follow the pre-processing steps of the data into a train-test split carried out in <cit.>. This is done for easy comparison with other works, which typically use the same split. In total, the dataset contains 158/80 samples for training/testing, with 36 features and the same number of timesteps. For both datasets, we run each experiment three times. §.§.§ Toy Datasets We construct the baseline sinusoidal toy time series (univariate) through the following expression: y_n = e^-γ (t_n + 1)sin(Ω t_n + ϕ) + ϵ , where ϕ∼𝒩(0,1) , Ω∼ω×Beta(2,2) , ϵ∼ 0.2 ×𝒩(0,1), and t_n ∈{-1, …, 1}. This is similar to the synthetic dataset used in <cit.> when the decay factor γ is set to zero. The exponential factor driven by γ is included to model non-stationarity in the time series, whilst the beta distribution is scaled up by a constant multiplicative factor, ω, to modify the frequency of modes within the dataset. To construct multivariate time series, the univariate equation is sampled multiple times to create a vector of independent time series, which can then be assigned as individual features. In this case, the amplitude is also taken from a Beta distribution, with all feature amplitudes re-scaled such that the maximum amplitude is 1. Typically, 200 timesteps are used to sample the time series, which is high enough to prevent aliasing in frequencies up to at least ω = 100. Note that the ground truth imputation values and train-test split follows that of the Human Activity dataset, i.e. missing values are randomly chosen and 20% of the dataset is assigned to the test set. The total size of the dataset is set to 3000. §.§ Benchmarks and Evaluation We compare our model with classic imputation: * Mean: replace the missing values with the global mean. * Median replace the missing values with the global median. We also compare our model with existing state-of-the-art methods: * CSDI <cit.>: utilizes score-based diffusion models conditioned on observed data, which is explicitly trained for imputation and is able to exploit correlations between observed values. * BRITS <cit.>: utilizes a bi-directional long short-term memory network and can handle multiple correlated missing values in time series. Additionally, we compare our method with three INR methods: * Auto-SIREN A single MLP with sine activations adapted from <cit.>, with an additional input for the latent code z, concatenated with the coordinates t. * Mod-SIREN A SIREN with fixed weights but conditionally predicted sine activation amplitudes via a modulation network and element-wise mapping between modulator weights and activation amplitudes. Here the auto-decoding variation is used which was proposed in the original work by <cit.> . * HN+SIREN A SIREN whose weights are conditionally predicted using a hypernetwork <cit.>, similar to the hypertime architecture proposed for time series generation by <cit.>. Instead of using a set-encoder, a hypernetwork takes the latent code z as input, and predicts all the parameters of the SIREN. §.§.§ Metrics To assess model performance we use three metrics, which assess out-of-sample imputation accuracy. 
To evaluate, the model prior is set to the non-missing data of the test set, then evaluation is carried out on the missing points from the same dataset. The metrics used can be summarised as follows: * Imputation error (MSE): MSE between the predicted (imputed) points in the test set compared to ground truth * Max imputation error (Max): measures the maximum error between missing data from the test set and those points evaluated using the trained model, then computes the average across all time series * Wasserstein2 (W2): Euclidean distance in feature space between learnt distributions, pairing missing points in the ground truth and output set so as to minimise the total summed distance between pairs. §.§.§ Implementation During training the model is optimised with respect to the following loss function, ℒ = ℒ_mse + ℒ_latent + ℒ_weights , which is consistent with that used in <cit.>, <cit.> and <cit.>. In e.q. <ref>, the first term is the standard MSE loss, defined via ℒ_mse = 1/N∑^N_i=1 (f_i - f̂_i)^2, whilst the second and third terms are regularisation terms on the (hypernet) latent code and SIREN network weights respectively, ℒ_latent = λ_Z1/N_Z∑^N_Z_j=1 z_i^2 ℒ_weights = λ_W1/N_W∑^N_W_k=1 W_K^2 . As in <cit.>, the number of latent dimensions, N_Z, is set to 40. In addition, gradient clipping is used during training, which was found to improve performance. λ_Z and λ_W are set to 1e-01 and 1e-04 respectively. The latent space prior is set to the normal distribution z ∼𝒩(0,1/N_W), and following <cit.> it was found that initialising the latent codes from a more compact distribution, z ∼𝒩(0,0.01), during inference led to better generalisation performance. Optimisation is carried out using the Adam optimiser, with a learning rate of 5e-05 for model parameters, and 1e-03 for latent variables. Note that in the fixed formulation, only latent variables passing through the hypernet are regularised. The SIREN network consists of three fully-connected layers with hidden dimension of 60, whilst the hypernet is composed of a single hidden layers and a hidden feature size of 128. The modulator is constrained to the same structure as SIREN, with the activations replaced by ReLU functions. It should be noted that only a limited amount of hyper-parameter tuning was carried out for all models used in this work, with model setups primarily following that used in the original works. §.§ Experimental Results on Real World datasets Table <ref> shows the imputation results evaluated with the real world datasets. MADS outperforms the baselines on both configurations of HAR dataset and shows competitive performance with CSDI on the Air Quality dataset. Additionally, it outperforms all baselines and all datasets on W2 metric. Both formulations of MADS perform competitively across the board, with the fixed variant notably out-performing all benchmarks in the majority of metrics. In addition, for both air quality metrics in which it was beaten the optimal value is within the uncertainty range of the fixed formulation. The base MADS formulation also performs well relative to the benchmarks, but for these datasets is unable to out-perform its sibling on any particular metric. The two datasets correspond to different data regimes: HAR showing less variation with fewer features and a changing missing ratio, and air quality a complex dataset with correlated missing values. MADS therefore shows robust performance in both settings, particularly given the limited amount of hyper-parameter tuning carried out. 
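For concreteness, the composite training objective and the two-rate optimiser described in the implementation above can be sketched as follows; `model`, `z_w` and `z_m` refer to the illustrative modulated-SIREN sketch given earlier, and restricting the reconstruction term to observed points via the mask is an assumption about the training setup rather than a detail stated explicitly in the text.

```python
import torch

def mads_loss(y_pred, y_true, mask, z, siren_params, lam_z=1e-1, lam_w=1e-4):
    """L = L_mse + lambda_Z * mean(z^2) + lambda_W * mean(W^2); only observed
    entries (mask == 1) contribute to the reconstruction term."""
    l_mse = ((y_pred - y_true) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    l_latent = lam_z * z.pow(2).mean()
    l_weights = lam_w * sum(p.pow(2).mean() for p in siren_params)
    return l_mse + l_latent + l_weights

# separate learning rates for network parameters (5e-5) and latent codes (1e-3),
# with gradient clipping applied during training as noted above
optimiser = torch.optim.Adam([
    {"params": model.parameters(), "lr": 5e-5},
    {"params": [z_w, z_m], "lr": 1e-3},
])
```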
The model is also typically quicker to train than both BRITS and CSDI, achieving convergence in only a few minutes. Note that relative to the other models modulated SIREN shows significant instability during training. §.§ Experimental Results on Toy Dataset In the second set of experiments, the synthetic dataset introduced in section <ref> is utilised to systematically control the data regime and thus compare model performance across scenarios. The model parameters ω and γ are varied along with dimensionality and the addition of noise to assess relative performance. The following datasets, summarised in Table. <ref>, are constructed: * B-SIN: baseline dataset with low frequency (ω=5) and dimensionality (2) * M-SIN: mid frequency dataset (ω=30) with low dimensionality (2) * F-SIN: high frequency dataset (ω=100) with low dimensionality (2) * D-SIN: high dimensionality dataset (10) with low frequency (ω=5) * MD-SIN: mid frequency (ω=30), high dimensionality (10) * FD-SIN: high frequency (ω=100), high dimensionality (10) * N-SIN: noisy dataset with low frequency (ω=5) and low dimensionality (2) * L-SIN: long/time-dependent dataset (γ=1) with high frequency (ω=100) and low dimensionality (2) All datasets have no time dependence and zero noise unless explicitly stated. As explained in section <ref>, missing data is generated using random sampling, following the same approach as for the Human Activity datasetm with the missingness fraction set to 0.3. Table <ref> shows model performance results for the different configurations of the toy datasets, via the imputation MSE. As with the real datasets, MADS is found to perform competitively across all cases, and indeed both variants achieves the highest averaged rankings across all datasets. In these experiments the base MADS formulation performs slightly better than the fixed version, although the fixed variant still appears to out-perform its sibling for a higher frequency or high dimensional dataset. MADS is a notably more consistent performer across data regimes relative to the other approaches. Unlike these alternative models, the underlying assumptions on the dataset made by the INR-based approach is limited. This contrasts with CSDI, which outperforms all competitors for the simple baseline and high dimensionality cases, but struggles with higher frequency signals. It has been established for this model (see e.g. <cit.>, <cit.>) that the assumptions made on the noise scheduling setup of the training process can require significant amounts of tuning for different datasets, which potentially explains the variability in performance seen here. Similarly BRITS shows very different performance capabilities across datasets, seemingly excelling for the high frequency and long/non-stationary time series, but performing very poorly for the other cases. This is likely due to assumptions built into the model. For example, the assumption of causality implied by a recurrent model may ensure good performance in the non-stationary case, but the delayed backpropagation utilised to allow learning from imputed values during this process is likely to limit reconstruction capabilities and thus high performance in simpler cases. § CONCLUSIONS In this work, we proposed MADS, a novel method that uses INRs within a modulated hypernetwork architecture for time series imputation. The proposed model is compared to state-of-the-art benchmarks within the deep learning literature, across both real-world and toy multivariate datasets. 
We found the model to be robust across different data regimes and to outperform alternative approaches in the majority of metrics on the real-world datasets. In addition, across toy datasets the model is shown to have the highest average ranking relative to all baseline models. Disclaimer This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Coȧnd its affiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. named
http://arxiv.org/abs/2307.00729v1
20230703032123
An End-to-End Multi-Module Audio Deepfake Generation System for ADD Challenge 2023
[ "Sheng Zhao", "Qilong Yuan", "Yibo Duan", "Zhuoyue Chen" ]
cs.SD
[ "cs.SD", "cs.CL", "eess.AS" ]
An End-to-End Multi-Module Audio Deepfake Generation System for ADD Challenge 2023. Sheng Zhao, Qilong Yuan, Yibo Duan, and Zhuoyue Chen (NanJing LongYuan Information Technology Co. Ltd, Nanjing, China; Qilong Yuan is also with the College of Computer and Information, Hohai University, Nanjing, China). Copyright for this paper by its authors; use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). IJCAI 2023 Workshop on Deepfake Audio Detection and Analysis (DADA 2023), August 19, 2023, Macao, S.A.R. The task of synthetic speech generation is to produce speech from given text, thereby simulating a fake human voice. The key factors that determine the quality of synthetic speech generation include the speed of generation, the accuracy of word segmentation, and the naturalness of the synthesized speech. This paper builds an end-to-end multi-module synthetic speech generation model, including a speaker encoder, a synthesizer based on Tacotron 2, and a vocoder based on WaveRNN. In addition, we perform extensive comparative experiments on different datasets and with various model structures. Finally, we won first place in ADD 2023 Challenge Track 1.1 with a weighted deception success rate (WDSR) of 44.97%. Keywords: text-to-speech, speech synthesis, ADD challenge. § INTRODUCTION The second Audio Deepfake Detection Challenge (ADD 2023) <cit.> aims to spur researchers around the world to build new innovative technologies that can further accelerate and foster research on detecting and analyzing deepfake speech utterances. Among its tracks, the task of Track 1.1 is to generate fake audio from given text. The generated fake audio from Track 1.1 is then evaluated against the detection models from Track 1.2 and the baseline RawNet2 <cit.>. In this paper, we propose an end-to-end audio generation system. The system consists of three parts, a speaker encoder, a synthesizer and a vocoder, and uses neural network models to directly convert text into speech signals. At present, speech synthesis technology is in a stage of rapid development, and many cutting-edge techniques are constantly emerging. * With the development of deep learning, end-to-end models have emerged, such as Tacotron <cit.>, which consists of an encoder and a decoder. The encoder is responsible for converting the input text into a feature representation, and the decoder generates the corresponding speech signal from these features. Based on an attention mechanism, the decoder models different parts of the input more finely, improving the quality and naturalness of the synthesized speech. * Speech synthesis based on autoregressive models, such as WaveNet <cit.> and WaveRNN <cit.>. Conditioned on the input text, the speech samples at each time step are generated sequentially until a complete speech signal is synthesized. * Speech synthesis techniques based on Generative Adversarial Networks (GANs) <cit.>, such as HiFi-GAN <cit.> and Fre-GAN <cit.>, pit the generator and the discriminator against each other, gradually improving the quality and fidelity of the generated speech.
Nevertheless, traditional speech synthesis technology has shortcomings such as large data requirements, a complicated training process, and unnatural sound quality. Inspired by SV2TTS <cit.>, this paper proposes an end-to-end autoregressive fake audio generation model. Specifically, the model is divided into three parts: a speaker encoder, a synthesizer and a vocoder. The speaker encoder adopts a speech coding network built from a BiLSTM <cit.> and fully connected layers. This network provides speaker classification information to the synthesizer and enables multi-speaker speech synthesis. The synthesizer adopts the architecture of Tacotron 2 <cit.>, which can generate high-quality speech features. The vocoder adopts WaveRNN, an autoregressive model structure which can gradually learn the structure of the input waveform and generate high-fidelity output waveforms. The rest of this paper is organized as follows: Section 2 describes our proposed method in detail, Section 3 presents the experimental results, and Section 4 concludes. § SYSTEM DESCRIPTION In this section, we first introduce the framework of our system, and then describe the reasons for choosing it to generate fake speech. Our system framework is shown in Figure <ref>. It mainly consists of a speaker encoder, a synthesizer based on Tacotron 2, and a vocoder based on WaveRNN. We then introduce the objective functions used to train each of our modules. §.§ Speaker Encoder Two layers of bi-LSTM are used to capture the temporal dependencies and context information of the input speech sequence. The bi-LSTM can effectively learn from variable-length sequence data and produces an encoded intermediate speech feature representation. Two fully connected layers further abstract the speech features and yield low-dimensional speech embeddings; the fully connected layers greatly compress the speech feature dimensions while retaining semantic information, producing highly abstract speech embedding representations. To eliminate the influence of the embedding magnitude and make the model focus on direction rather than length, the speech embeddings are L2 normalized. Normalization makes the cosine similarity calculation more accurate, thereby improving the performance of speech similarity judgments. This speaker encoder can learn speech embeddings that express rich speaker-related information, providing speaker classification information that allows the Tacotron 2 framework to achieve multi-speaker speech synthesis. Tacotron 2 alone struggles to distinguish the speech characteristics of different speakers; combining it with this speaker encoder effectively solves the problem, achieving high-quality cross-speaker speech synthesis. Through training, this model learns the semantic information of speech and efficiently performs speaker classification, providing a valuable speech representation for speech-related research. §.§ Synthesizer The synthesizer adopts the Tacotron 2 architecture, consisting of a PreNet (two fully connected layers), a CBHG module (1-D convolution bank + highway network <cit.> + bidirectional GRU), an attention mechanism and a decoder network (shown as modules B, C and D in Figure <ref>). It is a powerful sequence-to-sequence model that can generate high-quality speech features.
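Before turning to the synthesizer details, a rough sketch of the speaker encoder just described (two bi-LSTM layers, two fully connected layers, and L2 normalisation of the embedding, trained with a speaker-classification loss) is given below; the hidden sizes, embedding dimension, pooling choice and number of speakers are illustrative assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Bi-LSTM x 2 -> two fully connected layers -> L2-normalised embedding.
    All sizes below are illustrative, not taken from the paper."""
    def __init__(self, n_mels=80, hidden=256, emb_dim=256, n_speakers=100):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc1 = nn.Linear(2 * hidden, hidden)
        self.fc2 = nn.Linear(hidden, emb_dim)
        self.classifier = nn.Linear(emb_dim, n_speakers)  # used only for the training loss

    def forward(self, mels):
        # mels: (batch, frames, n_mels)
        out, _ = self.lstm(mels)
        h = out[:, -1]                       # one possible pooling: last-frame state
        e = self.fc2(torch.relu(self.fc1(h)))
        e = F.normalize(e, p=2, dim=-1)      # unit-norm embedding for cosine similarity
        return e, self.classifier(e)         # embedding + speaker logits

enc = SpeakerEncoder()
emb, logits = enc(torch.randn(4, 120, 80))
loss = F.cross_entropy(logits, torch.randint(0, 100, (4,)))  # speaker classification loss
```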
The PreNet and CBHG module together constitute the encoder, which can learn the high-level feature representation of the Mel spectrogram sequence and jointly encode it. The encoder provides the attention mechanism with ideal conditional information to achieve complete and accurate attention alignment. The CBHG module consists of a one-dimensional convolution layer, highway network and bidirectional GRU. It can learn the high-level feature representation and nonlinear dependencies of the Mel spectrogram. The output of the CBHG module provides the attention mechanism with accurate conditional information, which has an important influence on its performance. The attention mechanism realizes dynamic speech feature generation at each time step. It can learn the alignment relationship between the PreNet and CBHG module outputs to produce attention weights. The attention weights determine the content of the speech features generated at each time step. The PreNet can further improve the accuracy of speech feature prediction. It can eliminate the influence of the previous time step prediction, making the prediction at each time step more independent. The bidirectional LSTM can learn the historical and future information of the speech feature sequence jointly, generating an accurate prediction at each time step. It can achieve continuous and smooth speech feature generation. Synthesizer is a key component for achieving multi-speaker speech synthesis, providing a powerful speech feature generation module for the complete speech synthesis model. §.§ Vocoder The vocoder adopts the WaveRNN structure, which can learn high-dimensional conditional information of speech and generate highly realistic speech waveforms. WaveRNN is a recurrent neural network model that can gradually learn the changing rules of the input waveform and produce high-fidelity output waveforms. WaveRNN has the following advantages: * It can learn complex high-dimensional data distributions; * It has a memory mechanism and can learn long-term dependency relationships; * It is easy to train and fast convergence. This model can generate high-fidelity speech waveforms and provide powerful speech generation capabilities for speech synthesis systems. §.§ Training Loss For the speaker encoder, we adopt the cross-entropy loss<cit.>, which can enable the speaker encoder to learn speech embeddings that distinguish between different speakers. The cross-entropy loss can measure the difference between the predicted distribution and the true distribution, so that the model parameters are updated in the direction of reducing this difference. Therefore, The formula for speaker classification loss L_speaker is L_speaker = - ∑_i=1^N y_ilog p_i where N represents the number of different speakers in each batch of training data. the y_i is the one-hot encoded target speaker, and p_i is the predicted speaker probability distribution of the model. For synthesis loss, we adopt L1 loss which can enable the synthesizer to learn to generate output Mel-spectrum close to the target one. It can directly measure the difference between the predicted value and the target value, so that the model learns to minimize this difference. The L1 loss with periodic mask can enable the synthesizer to focus on the fine structure of the Mel-spectrum within the same pitch period. The combination of these two loss functions can enable the synthesizer to generate high-quality Mel-spectrum sequences. 
The formula for synthesis loss L_synthesis is: L_synthesis = |mel_target - mel_predict| where mel_target is the target Mel-spectrum, and mel_predict is the predicted Mel-spectrum of the model. The formula for L1 loss with periodic mask L_cyc is: L_cyc = |mel_target - mel_predict| * M where M is the periodic mask that can enhance the loss within the same pitch period. For the vocoder loss, firstly we quantize the audio waveform into discrete values, and then train the model using the cross-entropy loss, which can measure the gap between the probability distribution predicted by the model and the true label. Therefore, The formula for vocoder loss L_vocoder is L_vocoder = - ∑_i=1^M g_ilog p_i where M represents the discrete-valued dimensions for audio waveform quantization, the g_i is the gap between the predicted value and the true label, and p_i is the probability distribution predicted by our model. In summary, this model contains three modules: the speech encoder, the synthesizer and the vocoder. The three modules correspond to learning high-level speech semantics, intermediate speech features and low-level speech waveforms respectively. This model combines deep neural networks with waveform generation. It can both simulate the speech waveform and the internal features of the speech waveform, preserving richer speech information and generating highly realistic multi-speaker speech. § EXPERIMENTS §.§ Dataset We use the AISHELL3 <cit.> dataset to train the three modules of our model: the synthesizer, the speaker encoder and the vocoder. AISHELL-3 is a large-scale open-source Chinese speech dataset. It contains over 300,000 speech utterances . The speech samples are recorded at 16KHz with 16bit quantization, and the duration of each utterance is 5 to 15 seconds. For the speaker encoder, we use data augmentation methods to improve its generalization ability. We add noise(from MUSAN dataset <cit.>), reverberation(from RIRs dataset<cit.>) and speed perturbation on the training speech. These data augmentation methods can generate new training samples without changing the speech content, enrich the model's training data and enhance the generalization of the model. §.§ Comparison of different methods To verify the effectiveness of our proposed model, we conduct comparative experiments on different speech generation models and various datasets. The datasets used include AISHELL3 and LibriTTS, and the speech generation models are fastspeech <cit.> and the model proposed in this paper. We construct two datasets for testing, the one is generated by two models on AISHELL3, and the other is generated by two models on LibriTTS<cit.>. Both of them consist of 500 fake audios and 500 real audios. We also select three synthetic speech detection models to calculate the EER <cit.> of the test set, the detection models are RawNet2, Res-TSSDNet<cit.> and ECAPA-TDNN<cit.>. The RawNet2 model uses the pretrained model of ASVSpoof2021.Res-TSSDNet and ECAPA-TDNN are trained on the data from ADD2023 track 1.2. The relevant results are shown in Table <ref>. We can see that our proposed model is superior to the baseline model of fastspeech, and the EER on the LibriTTS test set reaches 58.71%. In addition, on some detection models, the EER of the proposed model is more than twice that of the comparison model. Moreover, in order to verify the influence of the internal structure of the model on the generated audio, we also conduct corresponding ablation experiments. 
Specifically, we replace the WaveRNN vocoder of the proposed model with HiFi-GAN and conduct EER tests on the two datasets. The results are shown in Table <ref>: using WaveRNN as the vocoder performs better than HiFi-GAN on both datasets. On LibriTTS, using HiFi-GAN results in an EER drop of about 7%, and on AISHELL3, of about 15%. In order to improve the authenticity and similarity of the audio generated by the model, we concatenate all the audio of a specific speaker and use it as the voice reference input of the model. We also conduct a comparison experiment to verify the effect of not performing audio concatenation. One set of generated audio uses the concatenated audio of the speaker as the voice reference input, while the other set uses a single audio clip of the speaker; the corresponding EERs are calculated respectively, and the results are shown in Table <ref>. They show that, whether the input audio comes from AISHELL3 or LibriTTS, the results with audio concatenation are better than those without. §.§ Evaluation Track 1.1 requires teams to generate attack samples based on given text and speaker identities. When testing the quality of the synthesized audio, we include real voices of the corresponding speakers in the dataset, which allows for a better model evaluation through EER. The official competition uses all Track 1.2 detection models as adversaries and finally ranks teams by the deception success rate (DSR), which also evaluates the effectiveness of the model. The weighted deception success rates (WDSR) of the teams are shown in Figure <ref>. We won first place among all participating teams, which also supports the rationality and effectiveness of our proposed methods and experiments. § CONCLUSION In this paper, we propose an end-to-end multi-module synthetic speech generation model. In addition, we conduct extensive comparative experiments on different datasets and model structures, which demonstrate that our model design is sound and effective. The model ranked first in the ADD 2023 Challenge. § ACKNOWLEDGEMENT We would like to express our sincere gratitude to all those who helped and supported us during the writing of this paper and the development of the system. First, we would like to thank the organizers for hosting the ADD 2023 Challenge. The Challenge provided us with a platform to test and improve our technology and capabilities in voice synthesis generation. We won the first prize in the challenge with our state-of-the-art system design. Moreover, we would especially like to thank our company, Nanjing Longyuan Information Technology Co., Ltd. The computing resources provided by our company offered us the opportunity to develop the most advanced deep neural networks on large datasets.
http://arxiv.org/abs/2307.02881v2
20230706093645
Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications
[ "Peter Tu", "Zhaoyuan Yang", "Richard Hartley", "Zhiwei Xu", "Jing Zhang", "Dylan Campbell", "Jaskirat Singh", "Tianyu Wang" ]
cs.CV
[ "cs.CV" ]
1 ] Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications [ August 1, 2023 ================================================================================= This paper begins with a description of methods for estimating probability density functions for images that reflects the observation that such data is usually constrained to lie in restricted regions of the high-dimensional image space — not every pattern of pixels is an image. It is common to say that images lie on a lower-dimensional manifold in the high-dimensional space. However, although images may lie on such lower-dimensional manifolds, it is not the case that all points on the manifold have an equal probability of being images. Images are unevenly distributed on the manifold, and our task is to devise ways to model this distribution as a probability distribution. In pursuing this goal, we consider generative models that are popular in AI and computer vision community. For our purposes, generative/probabilistic models should have the properties of 1) sample generation: it should be possible to sample from this distribution according to the modelled density function, and 2) probability computation: given a previously unseen sample from the dataset of interest, one should be able to compute the probability of the sample, at least up to a normalising constant. To this end, we investigate the use of methods such as normalising flow and diffusion models. We then show that such probabilistic descriptions can be used to construct defences against adversarial attacks. In addition to describing the manifold in terms of density, we also consider how semantic interpretations can be used to describe points on the manifold. To this end, we consider an emergent language framework which makes use of variational encoders to produce a disentangled representation of points that reside on a given manifold. Trajectories between points on a manifold can then be described in terms of evolving semantic descriptions. § KEYWORDS: image manifold, normalising flow, diffusion model, maximum likelihood estimation, adversarial attacks and defences, semantic disentanglement § INTRODUCTION Understanding the complex probability distribution of the data is essential for image authenticity and quality analysis, but is challenging due to its high dimensionality and intricate domain variations  <cit.>. Seen images usually have high probabilities on a low-dimensional manifold embedded in the higher-dimensional space of the image encoder. On such a manifold, however, not every point can be decoded into a realistic image because of the unevenly distributed probabilities. Therefore, it is important to compute the probability in the latent space to indicate whether the corresponding image is in a high-density region of the space  <cit.>. This helps to distinguish seen images from unseen images, or synthetic images from real images. Some works train a discriminator with positive (real) and negative (synthetic) examples in the manner of contrastive learning <cit.> or analyse their frequency differences <cit.>. However, they do not address this problem using the probabilistic framework afforded by modern generative models. In this work, we directly calculate the log-probability of an image by utilising generative models that assign high probabilities to seen images and low probabilities to unseen images. We use normalising flow (NF) <cit.> and diffusion models <cit.> as image generators. 
NF models learn an image embedding space that conforms to a predefined distribution, usually a Gaussian. In contrast, diffusion models diffuse images with Gaussian noise in each forward step and learn denoising gradients for the backward steps. A random sample from the Gaussian distribution can then be analytically represented on an image manifold and visualised through an image decoder (for NF models) or denoiser (for diffusion models). These samples can be thought of as being composed of several meaningful semantic attributes. It is often desirable that these attributes be orthogonal to each other in the sample latent space so as to achieve a controllable and interpretable representation. In this work, we evaluate their robustness to adversarial and patch attacks  <cit.> in image space and defend against the same using semantic consistency with a purification loss. We also disentangle their semantics in the latent space by using a variational autoencoder (VAE) <cit.> in the framework of emergent languages (EL)  <cit.>. This allows the latent representation to be more robust, interpretable, compositional, controllable, and transferable. We organise this paper into three sections, each with their own experiments: log-likelihood estimation for a given image under normalising flow and diffusion models (see Section <ref>), adversarial attacks and defences in image space for preserving semantics (see Section <ref>), and semantic disentanglement in emergent languages for a latent representation of object attributes, using a proposed GridVAE model (see Section <ref>). § MAXIMUM LIKELIHOOD ESTIMATION WITH IMAGE GENERATORS We evaluate the log-probability of a given image using 1) a hierarchical normalising flow model, 2) a diffusion model adapted to taking large sampling steps, and 3) a diffusion model that uses a higher-order solution to increase generation robustness. §.§ Hierarchical Normalising Flow Models Normalising flow (NF) refers to a sequence of invertible functions that may be used to transform a high-dimensional image space to a low-dimensional embedding space corresponding to a probability distribution, usually Gaussian. Dimensionality reduction is achieved via an autoencoding framework. For the hierarchical model, the latent vector corresponding to the image x_i at each level i is computed as z_i = g_i (y_i) = g_i ∘ f_i (x_i) ∼𝒩(0, 1) , and the inversion of this process reconstructs the latent z'_i to x'_i as x'_i = f'_i ∘ g'_i (z'_i) , where the decoder f'_i and flow inverse function g'_i are inversions of the encoder f_i and flow function g respectively, and z'_i can be z_i or randomly sampled from 𝒩(0, 1). We illustrate hierarchical autoencoders and flows for rich and high-level spatial information with conditioning variables in either image space or latent space. In Fig. <ref>, we show a 4-level hierarchical normalising flow model, where each set of functions (f_i,g_i,g'_i,f'_i) corresponds to one level and where g'_i and f'_i are conditioned on the higher-level reconstruction, that is x'_1 = f'_1 ∘ g'_1 (z'_1 | f'_2 ∘ g'_2 (z'_2 | f'_3 ∘ g'_3 (z'_3 | f'_4 ∘ g'_4 (z'_4)))) . The model is learned in two phases: joint learning of all autoencoders {f_i,f'_i} and then joint learning of all flows {g_i,g'_i} with the pretrained autoencoders, for all i ∈{1, 2, 3, 4}. 
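To make the conditional cascade in the last equation concrete, the following Python sketch generates an image coarse-to-fine over the four levels. The modules stand for the flow inverses g'_i and decoders f'_i; their conditioning interface (an optional `cond` argument) and the shapes of the latents are assumptions for illustration only.

    import torch

    def hierarchical_generate(g_inv, f_dec, latent_shapes, device="cpu"):
        """Sketch of x'_1 = f'_1∘g'_1(z'_1 | f'_2∘g'_2(z'_2 | ...)).

        g_inv, f_dec:   dicts {level: module} for the flow inverses g'_i and decoders f'_i.
        latent_shapes:  dict {level: shape of z'_i}.
        Levels run coarse-to-fine (4 -> 1); each level is conditioned on the
        reconstruction produced by the level above it.
        """
        cond = None
        recon = {}
        for level in (4, 3, 2, 1):
            z = torch.randn(*latent_shapes[level], device=device)   # z'_i ~ N(0, I)
            y = g_inv[level](z, cond=cond)                          # flow inverse g'_i
            recon[level] = f_dec[level](y, cond=cond)               # decoder f'_i
            cond = recon[level]                  # condition the next (finer) level
        return recon[1]                          # finest-level image x'_1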
The loss function for autoencoder learning, denoted L_ae, is the mean squared error (MSE) between the reconstructed data and the processed data, and for the learning of flows the objective is to minimise the negative log-probability of y_i, denoted L_flow, such that the represented distribution of the latent variable is modelled to be the standard Gaussian distribution, from which a random latent variable can be sampled for data generation. Given N pixels and c channels (c=3 for an RGB image and c=1 for a greyscale image), x_i at level i can be represented as x_i = {x_ij} for all j ∈{1, ..., N}, the autoencoder loss is then given by ℒ_ae (x'_i, x_i) = 1/cN∑^N_j=1x'_ij - x_ij^2 , and the flow loss for the latent at level i is the negative log-probability of y_i, that is L_flow (y_i) = -log p_Y(y_i), using the change of variables as log p_Y(y_i) = log p_Z(z_i) + log|∇_Y g_i(y_i) | = log p_Z(z_i) + log| J_Y( g_i(y_i) ) | , where log p_Z(z_i) = -1/d_ilog1/(√(2 π))^d_iexp( -1/2z_i ^2 ) = 1/2log2 π + 1/2 d_iz_i ^2 , d_i is the dimension of the ith latent and J_X(·) computes the Jacobian matrix over the partial derivative X. Similarly, the log-probability of x_i at level i is log p_X (x_i) = log p_Z(z_i) + log|∇_X ( g_i ∘ f_i (x_i) ) | = log p_Z(z_i) + log| J_Y (g_i(y_i)) | + log| J_X (f_i(x_i)) | . Then, the log-probability of an image at level i with hierarchical autoencoders and flows from multiple downsampling layers, 𝐱_i+1 = d(𝐱_i) at level i, can be calculated with the chain rule as log p(𝐱_i) = ∑^i_j=1log p_X (𝐱_j) + log| J_X(d(𝐱_j-1)) |·1[ j>1 ] , where [·] is a binary indicator. §.§ Diffusion Models Differently from normalising flow models that sample in a low-dimensional embedding space due to the otherwise large computational complexity, diffusion models diffuse every image pixel in the image space independently, enabling pixelwise sampling from the Gaussian distribution. §.§.§ Non-Markovian Gaussian Sampling The forward and backward processes of a denoising diffusion probabilistic model (DDPM) <cit.> are defined as forward process: q (x_t |x_t-1) := N ( x_t; √(α_t) x_t-1, (1-α_t) I ) , backward process: p_θ (x_t-1 |x_t) := N ( x_t-1; μ_θ(x_t, t), Σ_θ(x_t, t) ) , where t is the discretised sampling time step and α_t is the diffusion weight. The forward process can also be represented with a one-step diffusion from the clean input x_0 as q (x_t |x_0) = N( x_t; √(α̅_t)x_0, (1-α̅_t) I) , where α̅_t = ∏^t_i=1α_i is the accumulated weight at time t. The process of obtaining x_t is Markovian due to requiring only x_t-1. However, the backward steps do not need to be Markovian to satisfy the Gaussian form, but can jump multiple steps Δ t at each transition, as shown in Def. <ref>. While the distribution of x_t-Δ t in the backward process is clearly not Gaussian (except when (t-Δ t) is sufficiently large, say at the final forward step T), this assumption is reasonable for small Δ t and facilitates an efficient backward process for which the data probability can be obtained, in contrast to the denoising diffusion implicit model (DDIM) <cit.>. Example images generated using this strategy are shown in Fig. <ref> for all Δ t ∈{1, 2, 10, 100}. 
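For concreteness, the one-step forward diffusion q(x_t | x_0) above can be sampled in a few lines of Python. The linear β schedule below is an assumption (the exact schedule is not specified here); the returned noise is the usual regression target for the denoiser ε_θ.

    import torch

    # Hypothetical linear beta schedule; alpha_bar_t = prod_{i<=t} (1 - beta_i).
    T = 1000
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    def diffuse(x0, t):
        """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
        noise = torch.randn_like(x0)
        a = alpha_bar[t]
        x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise
        return x_t, noise

    # Example: diffuse a batch of 64x64 RGB images to step t = 500.
    x0 = torch.rand(8, 3, 64, 64) * 2 - 1      # images scaled to [-1, 1]
    x_t, eps = diffuse(x0, t=500)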
A non-Markovian Gaussian backward process can be defined as p_θ(x_t-Δ t | x_t) = N( x_t - Δ t|μ̃_θ(x_t, t,Δ t), ϵ̃(t, Δ t) ) = N(x_t - Δ t| √(α̅_t - Δ t/α̅_t)x_t - α̅_t-Δ t - α̅_t/√(α̅_t α̅_t - Δ t( 1 - α̅_t ))ϵ^t_θ(x_t), (α̅_t-Δ t - α̅_t) (1 - α̅_t - Δ t)/α̅_t-Δ t (1 - α̅_t)I) , where ϵ^t_θ(x_t) is the estimated denoising gradient from a DDPM from time t to time 0, towards x_0 from x_t. For small Δ t, this is a good approximation to the diffusion backward process. We provide a justification for defining the backward process in this way in the appendix. §.§.§ Maximum Likelihood Estimation The probability of an image can be calculated by using the forward and backward processes for each step of a pretrained diffusion model. The joint probability p(x_0:T) and probability of the clean input x_0 can be calculated by using the forward and backward conditional probability, q(x_t+Δ t|x_t) and p_θ(x_t |x_t + Δ t) respectively, at each pair of sampling steps (t, t + Δ t) following the Markov chain rule as p(x_0:T) = q(x_0) ∏^T-1_t=0 q(x_t+Δ t|x_t) = p(x_T) ∏^T-1_t=0 p_θ(x_t |x_t + Δ t) , q(x_0) = p(x_T) ∏^T-1_t=0 p_θ(x_t |x_t + Δ t)/∏^T-1_t=0 q(x_t+Δ t|x_t) . Then the negative log-probability of the input image x_0 is -log q(x_0) = -log p(x_T) + ∑^T-1_t=0(log q(x_t+Δ t|x_t)_forward process-log p_θ(x_t |x_t + Δ t)_backward process) . Given an N-dimensional data point x and an element-wise Gaussian distribution N(μ_c, σ^2_c), we have -log p(x) = 1/2( N log 2 π + ∑^N_c=1logσ^2_c ) + 1/2∑^N_c=1(x_c - μ_c)^2/σ^2_c. Computing Eq. (<ref>) can be decomposed into three steps: 1) Calculating log p(x_T). Since x_0 is fully diffused after T forward steps, x_T follows the standard Gaussian distribution N(0, 1), and thus the negative log-likelihood only depends on the Gaussian noise. 2) Calculating log q(x_t+Δ t|x_t). Since x_t+1 = √(α_t+1)x_t + √(1 - α_t+1)ϵ_t → t+1, it is easy to derive that x_t+Δ t = √(α̅_t+Δ t/α̅_t)x_t + √(1 - α̅_t+Δ t/α̅_t)ϵ_t + Δ t , giving the mean and deviation of a Gaussian distribution as μ(x_t, t, Δ t) = √(α̅_t+Δ t/α̅_t)x_t , σ^2(t, Δ t) = 1 - α̅_t+Δ t/α̅_t . 3) Calculating log p_θ(x_t |x_t + Δ t). In Def. <ref>, the estimation of x_t follows a Gaussian distribution with μ(x_t+Δ t, t, Δ t) = (1 - α̅_t) √(α̅_t+Δ t)x_t+Δ t + (α̅_t - α̅_t+Δ t) x_t+Δ t - √(1 - α̅_t+Δ t)ϵ^t+Δ t_θ (x_t+Δ t)/√(α̅_t+Δ t)/√(α̅_t) (1 - α̅_t+Δ t) , σ^2(t, Δ t) = (α̅_t - α̅_t+Δ t) (1 - α̅_t)/α̅_t (1 - α̅_t+Δ t) . §.§ Experiments Experiments on the hierarchical normalising flow models and the diffusion models, corresponding to Section <ref> and Section <ref> respectively, are provided below. §.§.§ Experiments on Hierarchical Normalising Flow Models Probability Density Estimation. Fig. <ref> illustrates the probability density estimation on level 3 for an in-distribution dataset CelebA and an out-of-distribution dataset CIFAR10. The distribution of the latent variable z_i of CelebA is concentrated on a lower mean value than that of CIFAR10 due to the learning of z_i in the standard Gaussian distribution. Similarly, this distribution tendency is not changed in the image space illustrated by log p(x_i). In this case, outlier samples from the in-distribution dataset can be detected with a small probability in the probability estimation. [b]0.32 < g r a p h i c s > log p(z) [b]0.32 < g r a p h i c s > log p(y) [b]0.32 < g r a p h i c s > log p(x) Log-likelihood estimation using hierarchical autoencoders and flows. The encoder and flow are trained on CelebA and evaluated on CelebA and CIFAR10. 
The x-axis is log p(·) and the y-axis is the histogram density. In each subfigure, the first row is for the in-distribution dataset CelebA, the second row is for the out-of-distribution dataset CIFAR10, and both are overlaid in the last row. In (A), log p(z) can detect outlier samples, and adding the log|J(·)| terms from the NF and the autoencoder does not significantly change the distribution tendency, see (B) and (C). For better visualisation, samples with log p(·) less than -10,000 are filtered out.
Random Image Generation. Image reconstructions with encoded latent variables and conditional images, as well as random samples, are provided in Fig. <ref>. For the low-level autoencoder and flow, say at level 1, conditioned on the sequence of decoded x_i for i={2,3,4}, the reconstruction of x_1 is close to the processed images, although some facial details are lost due to the downsampling mechanism, see Fig. <ref>(A). When randomly sampling {z_i} from the normal distribution at each level, the generated human faces are smooth but have blurry details in regions such as the hair and chin, and lack a realistic background.
Image Super-resolution.
[Figure: (A) Reconstruction with {z_i} from encoders {g_i} and conditional images from {f'_i}. (B) Random generation with latent variables {z_i}∼𝒩(0,1) and conditional images from {f'_i}. Caption: Image generation with the end-to-end training of 4-level autoencoders and flows.]
With the jointly trained autoencoders and flows on CelebA, images with low resolutions of 3×8×8 (channel×height×width) and 3×16×16 are decoded to 3×64×64 with smooth human faces, see Fig. <ref>(A) and Fig. <ref>(B) respectively. The low-resolution image x_i is used as a condition image for 1) the NF inverses {g'_i}, to generate an embedding code that is combined with the randomly sampled z_i ∼𝒩(0, 1), and 2) the decoders {f'_i}, where it is concatenated with all upsampling layers in each decoder. This preserves facial details from both high and low levels for realistic image generation. As the resolution of the low-resolution images increases, the embedding code contains richer details.
[Figure: (A) Resolution 3×8×8 to 3×64×64. (B) Resolution 3×16×16 to 3×64×64. Caption: Image super-resolution on the CelebA dataset. The first column shows low-resolution images, the second column shows real images, and the rest are high-resolution images generated with latent variables {Z_i}∼𝒩(0, 1) conditioned on the low-resolution images, with temperature 1.0.]
§.§.§ Experiments on Diffusion Models Log-likelihood Estimation on Point Samples. We evaluate the log-probability of each point of several point-sample sets, including Swiss roll, circle, moon, and S shapes, shown in Fig. <ref>. Given a diffusion model pretrained on Swiss-roll samples with 100 forward steps, each diffused by random Gaussian noise (see Fig. <ref>(A)), the log-probability of the samples in Fig. <ref>(B) follows Eq. (<ref>) with Δ t=1 and indicates higher probability and density for seen or similar samples than for unseen ones. In Figs. <ref>(B)-(C), the Swiss-roll samples achieve both a higher mean value, -0.933, and a higher histogram density, 0.7, than the others. As the difference in shape from the Swiss roll increases, the log-likelihood decreases, as shown in the bar chart in Fig. <ref>(C). This indicates that sampling from a low-density region cannot reverse the diffusion steps to a realistic sample from the training set. DDPM Sampling with Large Steps.
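A single backward jump of size Δ t under the non-Markovian definition above can be written out explicitly. In the Python sketch below, eps_model and alpha_bar stand for the pretrained noise predictor ε_θ and the cumulative noise schedule α̅, both assumed to be available; returning only the mean on the final step is a common convention, not something stated in the text.

    import torch

    def backward_jump(x_t, t, dt, eps_model, alpha_bar):
        """One non-Markovian backward step x_t -> x_{t-dt} (sketch of Def. <ref>).

        eps_model(x, t): pretrained noise predictor eps_theta.
        alpha_bar:       1-D tensor of cumulative products of (1 - beta_i).
        """
        ab_t, ab_s = alpha_bar[t], alpha_bar[t - dt]        # abar_t and abar_{t-dt}
        eps = eps_model(x_t, t)
        mean = (ab_s / ab_t).sqrt() * x_t \
             - (ab_s - ab_t) / torch.sqrt(ab_t * ab_s * (1.0 - ab_t)) * eps
        var = (ab_s - ab_t) * (1.0 - ab_s) / (ab_s * (1.0 - ab_t))
        if t - dt == 0:                    # no noise added on the final step (assumption)
            return mean
        return mean + var.sqrt() * torch.randn_like(x_t)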
[Figure: panels Δt=1, Δt=2, Δt=10, Δt=100. Caption: Image generation from our modified DDPM with step size Δt. Samples follow a Gaussian distribution, as shown in Def. <ref>. Fine details are obtained even for very large steps (Δt = 100).]
While Fig. <ref> uses Δ t = 1, the standard DDPM sampling process, it is feasible to sample with a fairly large step without losing sample quality. This enables sampling from the Gaussian distribution for log-likelihood estimation with less running time. To assess the image quality, we evaluate samples on the CelebA dataset using a pretrained diffusion model with 1,000 forward diffusion steps. In Fig. <ref>, the sampling uses Def. <ref> with increased step sizes Δ t ∈ {2, 10, 100}; the samples have high quality for Δ t={2, 10} and fair quality for Δ t=100.
Higher-order Solution Stabilises Sampling. While sampling with a large step Δ t can introduce a bias relative to sampling with a small Δ t, a higher-order method, in our case the Runge–Kutta method (RK4) <cit.>, effectively alleviates this bias. See the appendix for our implementation of RK4. We evaluate both the point samples and the human face images from CelebA. In Fig. <ref>, compared with the samples obtained using DDPM, RK4 with DDPM inference produces less noise at Δ t={2, 5, 10}. For Δ t=20, RK4 performs worse, as expected, because it applies only 5 sampling steps while the model was trained on T=100 diffusion steps. In Fig. <ref>, we apply DDIM as the inference method for RK4 in order to compare deterministically with DDIM. As Δ t increases from 1 to 100, many of the DDIM samples lose consistency with the corresponding samples at Δ t=1, whereas most of the RK4 samples retain it. This indicates the robustness of applying RK4 with a large sampling step.
[Figure: panels DDIM@1, DDIM@2, DDIM@10, DDIM@100, RK4@1, RK4@2, RK4@10, RK4@100. Caption: Random image generation using DDIM and RK4 (with DDIM as inference) at time step Δt={1, 2, 10, 100}. The RK4 sampling method is more robust than DDIM, especially at Δt=100, where its samples remain consistent with those at Δt=1.]
§ ADVERSARIAL ATTACKS AND DEFENCES §.§ Bounded Patch Attack In contrast to full image-level attacks such as ℓ_2 and ℓ_∞ bounded attacks, patch attacks, which are ℓ_0 bounded attacks, aim to restrict the number of perturbed pixels. Such attacks are more feasible to implement in real-world settings and therefore have broader impact. Below, we conduct an initial investigation into defending against patch attacks by leveraging knowledge of the data manifold. §.§.§ Formulation We briefly describe the adversarial purification framework in <cit.> and focus on extending it to defend against ℓ_0 bounded attacks. We define the real-world high-dimensional data as x∈ℝ^n, which lies on a low-dimensional manifold ℳ diffeomorphic to ℝ^m with m ≪ n. We define an encoder function f:ℝ^n →ℝ^m and a decoder function f^†:ℝ^m →ℝ^n which form an autoencoder. For a point x∈ℳ, f^† and f are approximate inverses. We define a discrete label set ℒ of c elements as ℒ={1,...,c} and a classifier in the latent space as h: ℝ^m →ℒ.
The encoder maps the image x to a lower-dimensional vector z = f(x) ∈ℝ^m and the functions f and h together form a classifier in the image space h(z) = (h∘f)(x) ∈ℒ. We define three sets of parameters: 1) ϕ parametrises the encoder distribution, denoted as q_ϕ(z | x), 2) θ parametrises the decoder distributions, represented as p_θ(x | z), and 3) ψ parametrises the classification head, given by h_ψ(z). These parameters are jointly optimised with respect to the ELBO loss and the cross-entropy loss as shown in Eq. (<ref>), where λ is the trade-off term between the ELBO and the classification loss. By adopting this formulation, we notice a remarkable semantic consistency between the decoder and the classifier. Specifically, when making predictions on adversarial examples, if the predicted label is “bag", we observe that the reconstructed image tends to resemble a “bag" as well. This phenomenon is illustrated in Fig. <ref>(D) and Fig. <ref>. max_θ ,ϕ, ψ𝔼_z∼ q_ϕ(z | x )[log p_θ(x | z)] - D_KL[q_ϕ(z | x ) p(z)]_ELBO (lower bound oflog p_θ(x) ) + λ𝔼_z∼ q_ϕ(z | x )[y^⊺logh_ψ(z)]_Classification loss . To defend the imagewise attack, a purification vector can be obtained through the test-time optimisation over the ELBO loss. For example, given an adversarial example x_ adv, a purified sample can be obtained by x_ pfy = x_ adv + ϵ_ pfy with ϵ_ pfy = _ϵ∈𝒞_ pfy𝔼_z∼ q_ϕ(z | x_ adv + ϵ)[log p_θ(x_ adv + ϵ| z)] - D_ KL[q_ϕ(z | x_ adv + ϵ) p(z)] , where 𝒞_ pfy={ϵ∈ℝ^n |x_ adv + ϵ∈ [0,1]^n and ϵ_p≤ϵ_ th} which is the feasible set for purification and ϵ_ th is the purification budget. When compared to ℓ_∞ attacks, ℓ_0 attacks, such as the adversarial patch attacks, introduce larger perturbations to the perturbed pixels. Therefore, we decide to remove the feasible set constraints 𝒞_ pfy for the patch-attack purification. Without these constraints, the purified examples can take on any values within the image space. In our experiments, however, we observed intriguing phenomena, see the results below. §.§ Experiments We use the gender classification model <cit.> to demonstrate the adversarial purification of ℓ_0 bounded attacks. To ensure that the adversarial examples do not alter the semantic content of the images, we restrict the perturbation region to the forehead of a human face. The patch for perturbation is a rectangular shape measuring 16×32, see Fig. <ref>. For the patch attacks, we conduct 2,048 iterations with step size 1/255 using PGD <cit.> and PGD-NAG (Nesterov Accelerated Gradient) <cit.>. In Table <ref>, the purification is carried out through 256 iterations with the same step size. § SEMANTIC DISENTANGLEMENT ON MANIFOLD §.§ GridVAE for Clustering and Disentanglement §.§.§ Formulation A variational autoencoder (VAE) <cit.> is a neural network that maps inputs to a distribution instead of a fixed vector. Given an input x, the encoder with neural network parameters ϕ maps it to a hidden representation z. The decoder with the latent representation z as its input and the neural network parameters as θ reconstructs the output to be as similar to the input x. We denote the encoder q_ϕ (z|x) and decoder p_θ (x|z). The hidden representation follows a prior distribution p(z). With the goal of making the posterior q_ϕ (z|x) close to the actual distribution p_θ (z|x), we minimise the Kullback-Leibler divergence between these two distributions. 
Specifically, we aim to maximise the log-likelihood of generating real data while minimising the difference between the real and estimated posterior distribution by using the evidence lower bound (ELBO) as the VAE loss function L(θ, ϕ) = -log p_θ(x) + D_KL(q_ϕ(z|x)||p_θ(z|x)) = -𝔼_z∼ q_ϕ(z|x)log p_θ(x|z) + D_KL(q_ϕ(z|x) p_θ(z)) , where the first term is the reconstruction loss and the second term is the regularisation for q_ϕ(z|x) to be close to p_θ(z). The prior distribution of z is often chosen to be a standard unit isotropic Gaussian, which implies that the components of z should be uncorrelated and hence disentangled. If each variable in the latent space is only representative of a single element, we assume that this representation is disentangled and can be well interpreted. Emergent language (EL) <cit.> is hereby introduced as a language that arises spontaneously in a multi-agent system without any pre-defined vocabulary or grammar. EL has been studied in the context of artificial intelligence and cognitive science to understand how language can emerge from interactions between agents. EL has the potential to be compositional such that it allows for referring to novel composite concepts by combining individual representations for their components according to systematic rules. However, for EL to be compositional, the latent space needs to be disentangled <cit.>. Hence, we integrate VAE into the EL framework by replacing the sender LSTM with the encoder of the VAE noting that the default LSTM encoder will entangle the symbols due to its sequential structure where the previous output is given as the input to the next symbol. In contrast, the symbols can be disentangled with a VAE encoder. To achieve disentangled representations in EL, the VAE encoder must be able to cluster similar concepts into discrete symbols that are capable of representing attributes or concepts. The standard VAEs are powerful, but their prior distribution, which is typically the standard Gaussian, is inferior in clustering tasks, particularly the location and the number of cluster centres. In the EL setting, we desire a posterior distribution with multiple clusters, which naturally leads to an MoG prior distribution with K components p(z)= 1/K∑_k=1^K𝒩(z|μ_k,σ_k^2) . We choose the μ_k to be located on a grid in a Cartesian coordinate system so that the posterior distribution clusters can be easily determined based on the sample's distance to a cluster centre. We refer to this new formulation as GridVAE, which is a VAE with a predefined MoG prior on a grid. The KL-divergence term in Eq. (<ref>) can be re-written as D_KL( q_ϕ(z|x) p_θ(z)) = 𝔼_x∼ p(x)𝔼_q_ϕ(z|x) [log p(z) - log q_ϕ(z|x)] . The log probability of the prior can be easily calculated with the MoG distribution, and we only need to estimate the log probability of the posterior using a large batch size during training. By using a GridVAE, we can obtain a posterior distribution with multiple clusters that correspond to the same discrete attribute, while allowing for variations within the same cluster to generate different variations of the attribute. §.§.§ Experiments We evaluate the clustering and disentanglement capabilities of the proposed GridVAE model using a two-digit MNIST dataset <cit.> consisting of digits 0 to 5. Each digit is from the original MNIST dataset, resulting in a total of 36 classes [00, 01, 02, ..., 55]. To extract features for the encoder, we use a 4-layer ResNet <cit.> and its mirror as the decoder. 
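The grid prior used in these experiments is simple enough to write out directly. The Python sketch below assumes the 2-D latent space with six centres per dimension on the integer grid [-2, ..., 3] and the standard deviation of 1/3 described in the following paragraph; the helper names are ours.

    import itertools
    import math
    import torch

    def make_grid_centres(coords=(-2, -1, 0, 1, 2, 3), dim=2):
        """Cluster centres mu_k on an integer grid: (-2,-2), (-2,-1), ..., (3,3)."""
        return torch.tensor(list(itertools.product(coords, repeat=dim)), dtype=torch.float32)

    def log_prior(z, centres, sigma=1.0 / 3.0):
        """log p(z) for a uniform mixture of K isotropic Gaussians N(mu_k, sigma^2 I)."""
        K, d = centres.shape
        sq_dist = ((z.unsqueeze(1) - centres.unsqueeze(0)) ** 2).sum(dim=-1)   # (B, K)
        log_comp = (-0.5 * sq_dist / sigma**2
                    - 0.5 * d * math.log(2 * math.pi) - d * math.log(sigma))
        return torch.logsumexp(log_comp, dim=1) - math.log(K)   # uniform mixture weights

    centres = make_grid_centres()                        # 36 centres for the 6x6 grid
    print(log_prior(torch.randn(16, 2), centres).shape)  # torch.Size([16])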
The VAE latent space is 2-dimensional (2D), and if the VAE learns a disentangled representation, each dimension of the 2D latent space should represent one of the digits. We use a 2D mixture of Gaussian (MoG) as the prior distribution, with 6 components in each dimension centred at integer grid points from [-2, -1, 0, 1, 2, 3], that is the coordinates for the cluster centres are [(-2, -2), (-2, -1), ..., (3, 3)]. The standard deviation of the mixture of Gaussian is 1/3. After training the model, we generate a scatter plot of the test set latent space, as shown in Fig. <ref>. Since the prior is a mixture of Gaussian on the grid points, if the posterior matches the prior, we can simply draw a boundary in the middle of two grid points, illustrated by the red lines in Fig. <ref>. With the trained model, one can sample in the latent space for image generation. In Figs. <ref> (A)-(B), when we decode from the cluster centres (i, j): in (A) we keep j=0 and change i from -2 to 3, while in (B) we keep i=0 and change j from -2 to 3. The latent space is disentangled with respect to the two digits - the first dimension of the latent space controls the first digit, while the second dimension controls the second digit. Each of the cluster centres corresponds to a different number. Figs. <ref>(C)-(D) show images generated within the cluster centred at (1, 1), that is the pairs of number “44". If we slightly modify one of the dimensions, it corresponds to different variations of the number “4" along this dimension, while keeping the other digit unchanged. Overall, these results demonstrate the effectiveness of the proposed GridVAE model in clustering and disentangling the latent space on the two-digit MNIST dataset. §.§ Scaling Up GridVAE In Sec. <ref>, the two-digit MNIST dataset lies in a 2-dimensional latent space. However, many real-world datasets would require a much higher dimensional space. §.§.§ Addressing Higher Dimensional Latent Space Discretising a continuous space, such as in GridVAE, is challenging due to the curse of dimensionality <cit.>. This refers to the exponential growth in the number of clusters as the number of dimensions increases, which leads to a computational challenge when dealing with high-dimensional latent space. For example, when applying GridVAE to reconstruct images of the CelebA <cit.> dataset to learn the 40 attributes, we need a 40-dimensional latent space with two clusters in each dimension to represent the presence or absence of a given attribute. Firstly, parametrising the mixture of Gaussian prior p(z)= ∑_k=1^K𝒩(z|μ_k,σ_k^2) / K over 40 dimensions is prohibitively expensive as K=2^40≈ 1.1×10^12. Secondly, the assumption of equal probability for the components, which was appropriate for the simple 2-digit MNIST dataset, is no longer valid. This is because the attributes in the CelebA dataset are not uniformly distributed, and some combinations may not exist. For instance, the combination of “black hair" + “blonde hair" + “brown hair" + “bald" is impossible due to attribute conflicts. To address this issue, we use the proposed loss function in Eq. (<ref>) incorporating relaxation. To avoid pre-parametrising p(z) over 40 dimensions, we have implemented a dynamic calculation of the KL-divergence between q_ϕ and p_θ, whereby only the cluster that is closest to the latent space representation is considered, as illustrated in Fig. <ref>. 
This means that clusters to which the data point does not belong do not affect its distribution, and the MoG distribution is simplified to a multivariate Gaussian as D_KL(p_1|| p_2) = 1/2[log|Σ_2|/| Σ_1| - n + tr( Σ_2^-1Σ_1 ) + (μ_2 - μ_1)^T Σ_2^-1(μ_2 - μ_1)] , where p_1=q_ϕ(z|x)=𝒩(z|μ(x),Σ(x)) Σ=diag(σ_1^2,…,σ^2_n), p_2 = 𝒩(μ_2,Σ_2), μ_2=(μ_1), and Σ_2=diag(σ_0^2,…,σ^0_n) with the round function (·) for the closest integer. The key step here is that the round function dynamically selects the cluster centre closest to μ_1, and σ_0 is a pre-defined variance for the prior distribution. It should be chosen so that two clusters next to each other have a reasonable degree of overlap, for example, σ_0=1/16 in some of our following experiments. The KL-divergence term becomes D_KL( q_ϕ(z|x) p_θ(z) ) = 1/2[log|Σ_2|/|Σ_1| - n + tr( Σ_2^-1Σ_1 ) + (μ_2 - μ_1)^T Σ_2^-1(μ_2 - μ_1)] = 1/2[log∏_iσ_0^2-log∏_iσ_i^2 - n + ∑_iσ_i^2/σ_0^2 + ∑_i(μ_i-( μ_i))^2/σ_0^2] = 1/2[∑_i=1^n(logσ_0^2 - logσ_i^2 - 1) + ∑_i=1^nσ_i^2 + (μ_i-(μ_i))^2/σ_0^2] . By adopting Eq. (<ref>), we can significantly reduce the computational complexity of the model, even for a high-dimensional latent space, bringing it to a level comparable to that of a standard VAE. It is worth noting that the global disentanglement may no longer be guaranteed. Rather, the model only provides local disentanglement within the proximity of each cluster. Upon training the GridVAE with a 40-dimensional latent space by using the proposed Eq. (<ref>) on the CelebA dataset, we observe some intriguing disentanglement phenomena. Fig. <ref> showcases the disentanglement of two latent space dimensions, where the first dimension governs one attribute and the second dimension determines another one. Combining these two dimensions leads to simultaneous attribute changes in the generated images. [!ht] [b]0.49 < g r a p h i c s > + < g r a p h i c s > = < g r a p h i c s > [b]0.49 < g r a p h i c s > + < g r a p h i c s > = < g r a p h i c s > Two generated examples using linear sampling in the latent space. The top row fixes the dimensions and changes the first one from -0.5 to +1.5. The middle row fixes the dimensions and changes the second one from -0.5 to +1.5. The bottom row changes the first and second dimensions from -0.5 to +1.5. An inherent limitation of this unsupervised approach is that while the latent space appears to be locally disentangled for each image, the same dimension may have different semantic interpretations across different images. To address this issue, we introduce all 40 attributes of the dataset during the training. This should establish an upper bound on the disentanglement. §.§.§ From Unsupervised to Guided and Partially Guided GridVAE To this end, we described an unsupervised approach to learning the latent space representation of images. However, for datasets like CelebA with ground truth attributes, we can incorporate them into the latent space to guide the learning. Specifically, we extract the 40-dimensional attribute vector indicating the presence or absence of each feature for each image in a batch and treat it as the ground truth cluster centre μ_i^gt. Hence, instead of rounding the latent space representation μ_i in Eq. (<ref>), we replace it with μ_i^gt. One limitation of this approach is the requirement of the ground truth attributes for all images, which may not always be available or feasible. 
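For reference, the nearest-cluster KL term of Eq. (<ref>) and its guided variant can be written compactly as the Python sketch below. The function signature is ours; the default σ_0 = 1/16 follows the example value mentioned earlier, and using a per-sample attribute vector as the centre implements the guided replacement of round(μ_i) by μ_i^gt.

    import math
    import torch

    def gridvae_kl(mu, logvar, sigma0=1.0 / 16.0, mu_gt=None):
        """KL( N(mu, diag(var)) || N(c, sigma0^2 I) ) with c the nearest grid centre.

        mu, logvar: encoder outputs of shape (batch, n_dims).
        mu_gt:      optional ground-truth attribute vector used as the centre
                    (the guided variant); otherwise c = round(mu).
        """
        var = logvar.exp()
        centre = mu_gt if mu_gt is not None else torch.round(mu)
        kl = 0.5 * (math.log(sigma0**2) - logvar - 1.0
                    + (var + (mu - centre) ** 2) / sigma0**2)
        return kl.sum(dim=1).mean()       # sum over latent dimensions, mean over batch

    # Unsupervised GridVAE:  loss_kl = gridvae_kl(mu, logvar)
    # Guided GridVAE:        loss_kl = gridvae_kl(mu, logvar, mu_gt=attributes.float())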
Additionally, it is important to note that while we refer to this approach as “guided", the given attribute information only serves in the latent space as the cluster assignment prior, and the VAE reconstruction task remains unsupervised. This differs from classical supervised learning, where the label information is the output. Furthermore, in our approach, no specific coordinate in the latent space is designated for the input. Instead, we provide guidance that the sample belongs to a cluster centred at a certain point in the latent space. This guided learning framework can be extended to a subset of the 40 attributes or a latent space with more dimensions. For clarity, we will refer to the latter as “partially guided" to distinguish it from the commonly used “semi-supervised" by using a subset of the labelled dataset. We conduct the experiments using attribute information as latent space priors and obtain the following findings for the guided approach: (a) GridVAE is able to cluster images accurately based on their attributes and the same dimension has the same semantic meaning across different images. For instance, dimension 31 represents “smile". (b) GridVAE could not generate images for clusters that have little or no representation in the training set. For example, the attempt of generating an image of a bald female by constraining GridVAE to the “female" and “bald" clusters is not achievable for an accurate representation. (c) Some attributes are more universal across different images, such as their ability to add a smile to almost any face. However, other attributes, such as gender, are not always modifiable. This could be caused by attributes that are not independent and can be influenced by others. Universal attributes, such as “smile", seem to primarily locate locally in the image region without interruption from the other attributes, see Fig. <ref>. [!ht] [b]0.49 < g r a p h i c s > [b]0.49 < g r a p h i c s > Generated images from sampling in the latent space. Keeping all other dimensions fixed and changing dimension (A) 31 (smile) from -0.5 to +1.5, or (B) 20 (male) from -0.5 to +1.5. [!ht] [b]0.49 < g r a p h i c s > [b]0.49 < g r a p h i c s > Partially guided GridVAE generation from the latent attributes which are not provided during training. The left and right rows are with the dimensions 20 and 31 respectively. To further illustrate the incompleteness and correlation among the attributes in the CelebA dataset, we use a subset of the given attributes. We choose 38 out of the 40 attributes, excluding attributes 20 (female/male) and 31 (not smiling/smiling). Fig. <ref> shows that the GridVAE cannot learn the omitted attributes. This highlights the interdependence of different attributes in the latent space. §.§ Combining Manifolds of GridVAE Disentangled Attribute and Facial Recognition After achieving a disentangled latent space, one may still wonder about the usefulness of a semantic description of a manifold. One can consider the scenario where another manifold, such as a facial recognition manifold, is learned. By studying these two manifolds jointly, we can gain insights to make the models more explainable and useful. One potential application is to better understand the relationship between facial attributes and facial recognition. By analyzing the disentangled latent space of facial attributes and the manifold learned for facial recognition, we can potentially identify which attributes are the most important for recognising different faces. 
This understanding can then be used to improve the performance of facial recognition models as well as explain the model decisions. For instance, FaceNet <cit.> directly learns a mapping from face images to a compact Euclidean space where distances correspond to a measure of face similarity. To discover the semantic structure of this manifold with x as binary attributes, we can follow these steps: * Build a face recognition manifold using contrastive learning. * Use the CelebA dataset with ground truth attribute labels (40 binary values). * Insert CelebA samples onto the recognition manifold. * Find the nearest neighbour for each CelebA sample using the face recognition manifold coordinates. * For each attribute in x, compute p(x) over the entire CelebA dataset. * For each attribute in x, compute p(x| x of nearest neighbour = 0). * For each attribute in x, compute the KL divergence between p(x) and p(x| x of nearest neighbour = 0). * Identify attributes with the largest KL divergence. Fig. <ref> demonstrates that the KL Divergence between p(x) and p(x| x of nearest neighbour = 0) is significantly larger for certain attributes, such as “male", “wearing lipstick", “young" and “no beard", than the others. This indicates that the neighbourhood structure of the facial recognition manifold is markedly different from the distribution of these attributes in the entire dataset. These findings highlight the importance of the joint study of different manifolds to gain a more profound understanding of the relationship between the attributes and the recognition tasks. By incorporating it into the models, we can potentially improve the performance of facial recognition models and also enhance their interpretability. [t] [b]0.66 < g r a p h i c s > p(x) and p(x| x of nearest neighbour = 0) distributions. [b]0.33 < g r a p h i c s > KL divergence. Semantic structure of the face recognition manifold by jointly studying the attribute manifold and the facial recognition manifold. § CONCLUSION This work studies the image geometric representation from high-dimensional spatial space to low-dimensional latent space on image manifolds. To explore the image probability distribution with the assumption that real images are usually in a high-density region while not all samples from the distribution can be represented as realistic images, we incorporate log-likelihood estimation into the procedures of normalising flow and diffusion models. Patch attacks and defences are then applied in the image space to test the semantic consistency to evaluate the reconstruction robustness. We also consider an EL framework with the proposed GridVAE model to disentangle the elements of the latent variable on an image manifold for controllable and interpretable presentations with orthogonal semantics. Experiments show the effectiveness of probability estimation in distinguishing seen examples from unseen ones, the quality and the efficiency with large sampling steps in image generation, the well-preserved semantic consistency with patch attacks, and meaningful representations of varying specific element(s) of the latent variable to control object attribute(s) in the image space. § CONFLICT OF INTEREST STATEMENT The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. § AUTHOR CONTRIBUTIONS P. Tu and R. Hartley are the principal investigators of this project. Z. Xu, R. Hartley, J. Zhang, and D. 
Campbell contribute to the maximum likelihood estimation section; Z. Yang and P. Tu contribute to the attacks and defences section; Y. Fu contributes to the semantic disentanglement section. All authors contribute to discussions and proofreading in this work. § FUNDING This work is supported by the DARPA geometries of learning (GoL) project under the grant agreement number HR00112290075. § ACKNOWLEDGMENTS We thank Amir Rahimi for his contribution to the code and discussion of the normalising flow models. § SUPPLEMENTAL DATA arxiv Please refer to the appendix. Please refer to “Supplementary Material". § DATA AVAILABILITY STATEMENT Datasets used in this work are available online, and our code with demonstrated samples will be released upon publication. For further inquiries, please contact the corresponding author. arxiv § APPENDIX § JUSTIFICATION FOR DEFINITION 2.1 A non-Markovian Gaussian backward process can be defined as p_θ(x_t-Δ t | x_t) = N( x_t - Δ t|μ̃_θ(x_t, t,Δ t), ϵ̃(t, Δ t) ) = N(x_t - Δ t| √(α̅_t - Δ t/α̅_t)x_t - α̅_t-Δ t - α̅_t/√(α̅_t α̅_t - Δ t( 1 - α̅_t ))ϵ^t_θ(x_t), (α̅_t-Δ t - α̅_t) (1 - α̅_t - Δ t)/α̅_t-Δ t (1 - α̅_t)I) , where ϵ^t_θ(x_t) is the estimated denoising gradient from a DDPM from time t to time 0, towards x_0 from x_t. For small Δ t, this is a good approximation to the diffusion backward process. We start with the observation from <cit.> that the computation of a sample x_t - Δ t at time (t - Δ t), conditioned on x_t and x_0, follows a Gaussian form. This can be seen as follows q(x_t - Δ t | x_t, x_0) = q(x_t-Δ t, x_t, x_0)/q(x_t, x_0) = q(x_t | x_t-Δ t, x_0) q(x_t - Δ t | x_0)/q(x_t | x_0) = N( x_t; √(α̅_t / α̅_t-Δ t)x_t - Δ t, (1 - α̅_t / α̅_t - Δ t) I) N( x_t-Δ t; √(α̅_t - Δ t)x_0, (1 - α̅_t - Δ t) I)/N( x_t; √(α̅)x_0, (1 - α̅_t) I) ∝ exp{ -1/2[ ( x_t - √(α̅_t / α̅_t-Δ t)x_t - Δ t)^2/1 - α̅_t / α̅_t-Δ t + ( x_t-Δ t - √(α̅_t-Δ t)x_0 )^2/1 - α̅_t-Δ t - ( x_t - √(α̅_t)x_0 )^2/1 - α̅_t] } ∝ exp{ -1/2[ ( α̅_t / α̅_t - Δ t/1 - α̅_t / α̅_t - Δ t + 1/1 - α̅_t-Δ t) x^2_t - Δ t - 2 ( √(α̅_t / α̅_t - Δ t)/1 - α̅_t / α̅_t - Δ tx_t + √(α̅_t - Δ t)/1-α̅_t-Δ tx_0 ) x_t - Δ t] } = exp{ -1/2α̅_t - Δ t (1 - α̅_t)/(α̅_t-Δ t - α̅_t)(1 - α̅_t-Δ t)[ x^2_t - Δ t - 2 (1-α̅_t - Δ t) √(α̅_t-Δ tα̅_t)x_t + (α̅_t-Δ t - α̅_t) √(α̅_t-Δ t)x_0/α̅_t-Δ t (1 - α̅_t)x_t - Δ t] } ∝ N( x_t - Δ t; (1 - α̅_t-Δ t) √(α̅_t)x_t + (α̅_t-Δ t - α̅_t) x_0/√(α̅_t-Δ t) (1 - α̅_t), (α̅_t-Δ t - α̅_t) (1 - α̅_t - Δ t)/α̅_t-Δ t (1 - α̅_t)I). While the forward process <cit.> computes x_t = √(α̅_t)x_0 + √(1 - α̅_t)ϵ_t , giving that x_0 = 1/√(α̅_t)( x_t - √(1 - α̅_t)ϵ_t ) , the exact x_0 is infeasible to be computed in the backward process. However, given a diffusion model ϵ^t_θ (x_t) that can estimate the denoising gradient from x_t, we have a function for estimating x_0, given by f_θ^t (x_t) = 1/√(α̅_t) (x_t - √(1 - α̅_t)ϵ^t_θ (x_t)) . We can now define a non-Markovian backward process as follows p_θ(x_t-Δ t | x_t) = q(x_t - Δ t | x_t, f_θ^t (x_t)) . The expanded definition is then obtained by substituting Eq. (<ref>) into Eq. (<ref>). § THE RUNGE–KUTTA METHOD FOR DIFFUSION MODELS We first revisit the Runge–Kutta method (RK4) which solves initial value problems <cit.>. 
Given an initial value problem f(x_t, t) = dx_t / dt, where x_t is associated with time t, the estimate of x_t at time (t + Δ t) with step size Δ t is computed by x_t+Δ t = x_t + Δ t/6 ( k_1 + 2 k_2 + 2 k_3 + k_4 ), where k_1 = f (x_t, t ), k_2 = f ( x_t + Δt/2 k_1 , t + Δt/2 ), k_3 = f ( x_t + Δt/2 k_2 , t + Δt/2 ), k_4 = f ( x_t + Δt k_3 , t + Δt ). For an initial value x_0 at time t=0, one can estimate x_T iteratively by applying Eq. (<ref>) from t=0 to the terminal time t=T. For the backward process of a diffusion model, the reverse step can be written as x_t- Δ t = g ( x_t, ϵ_t - Δ t, t - Δ t ), where g(·) refers to the backward step of DDPM <cit.> with Gaussian noise or of DDIM <cit.> without Gaussian noise, and ϵ_t - Δ t is the reversing gradient from x_t to x_t - Δ t, referred to as the "model prediction". To apply RK4 to the reverse step, we follow the same rule as Eq. (<ref>) but change the moving step to ϵ_t - Δ t = 1/6 ( k_1 + 2 k_2 + 2 k_3 + k_4 ), where k_1 = f (x_t, t ), k_2 = f ( g (x_t, k_1 , ⌊ t - Δt/2 ⌋ ), ⌊ t - Δt/2 ⌋ ), k_3 = f ( g (x_t, k_2 , ⌊ t - Δt/2 ⌋ ), ⌊ t - Δt/2 ⌋ ), k_4 = f ( g (x_t, k_3 , t - Δt ), t - Δt ), and ⌊·⌋ rounds down to the nearest integer due to the discretisation of the sampling time steps. The difference in the moving step between Eq. (<ref>) and Eq. (<ref>) is the multiplication of k_i by Δ t or Δt/2 for i={1,2,3,4}. Empirically, applying these multiplications in diffusion sampling leads to strongly unrealistic image generation, even for small Δ t. We thus hypothesize that the moving steps are already accounted for by g(·) through the joint effect of its sampling coefficients and of ϵ_t, ϵ_t-Δ t / 2, and ϵ_t-Δ t.
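For completeness, the adapted update above can be written as a short Python function. Here reverse_step and eps_model stand for the paper's g(·) and f(·) = ε_θ and are assumed to be provided by a pretrained model; whether g(·) takes the DDPM or the DDIM form depends on the experiment, as both are used.

    import math

    def rk4_reverse_step(x_t, t, dt, eps_model, reverse_step):
        """One RK4-style backward jump x_t -> x_{t-dt}, following the appendix.

        eps_model(x, t):          the model prediction f(x_t, t), i.e. eps_theta.
        reverse_step(x, eps, t):  the DDPM or DDIM backward step g(x, eps, t).
        """
        t_mid = math.floor(t - dt / 2)      # discretised intermediate time step
        k1 = eps_model(x_t, t)
        k2 = eps_model(reverse_step(x_t, k1, t_mid), t_mid)
        k3 = eps_model(reverse_step(x_t, k2, t_mid), t_mid)
        k4 = eps_model(reverse_step(x_t, k3, t - dt), t - dt)
        eps = (k1 + 2 * k2 + 2 * k3 + k4) / 6.0    # combined model prediction
        return reverse_step(x_t, eps, t - dt)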
http://arxiv.org/abs/2307.01849v1
20230704175929
Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning
[ "Xiang Li", "Varun Belagali", "Jinghuan Shang", "Michael S. Ryoo" ]
cs.RO
[ "cs.RO", "cs.CV", "cs.LG" ]
Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning
August 1, 2023
===================================================================================
Sequence modeling approaches have shown promising results in robot imitation learning. Recently, diffusion models have been adopted for behavioral cloning, benefiting from their exceptional capabilities in modeling complex data distributions. In this work, we propose Crossway Diffusion, a method to enhance diffusion-based visuomotor policy learning by using an extra self-supervised learning (SSL) objective. The standard diffusion-based policy generates action sequences from random noise conditioned on visual observations and other low-dimensional states. We further extend this by introducing a new decoder that reconstructs raw image pixels (and other state information) from the intermediate representations of the reverse diffusion process, and we train the model jointly using the SSL loss. Our experiments demonstrate the effectiveness of Crossway Diffusion in various simulated and real-world robot tasks, confirming its advantages over the standard diffusion-based policy. We demonstrate that such self-supervised reconstruction enables better representations for policy learning, especially when the demonstrations have different proficiencies. § INTRODUCTION Behavioral Cloning (BC) <cit.> is a supervised learning formulation for robot action policy learning. Given expert demonstration data consisting of sequences of state-action pairs, we train a model to predict the correct action vector given the input states (e.g., images). This framework has been shown to be very effective, particularly when a sufficient amount of training data is provided <cit.>. Recently, sequence modeling approaches <cit.> have often been used for behavioral cloning because of their ability to model multiple steps of information. In such a formulation, the objective is to model the probability distribution of the multi-step state-action trajectory. This allows BC to go beyond single-step regression and take better advantage of history. Given their success in modeling human language <cit.> and images <cit.>, Transformers <cit.> have been popularly adopted for sequence modeling-based policies <cit.>. Diffusion models <cit.>, an approach to modeling data distributions, have also been applied to sequential modeling recently <cit.>. Diffusion models have exceptional capabilities in modeling multimodal data distributions and generating new samples from those distributions, which makes them suitable for imitating behaviors by generating trajectories. For visuomotor control tasks, <cit.> demonstrated promising performance on real-world tasks with high-dimensional multimodal controls, using visual observations as conditions of the diffusion model. In this work, we propose Crossway Diffusion, a simple yet effective method to improve diffusion-based visuomotor policy learning using an extra self-supervised learning (SSL) objective. Specifically, we introduce an SSL task that reconstructs the raw pixels and other state information from the intermediate representations of the reverse diffusion (denoising) process. This reconstruction task forces the model to focus on both the observation and action features. It also encourages temporal correspondence between the latent representations.
Through our experiments over multiple challenging tasks from different benchmarks, we verify the consistent advantage of Crossway Diffusion in various simulated and real-world robot tasks in comparison to the baseline Diffusion Policy. In particular, we achieve a ∼15% improvement over the baseline on the Transport, mh dataset from Robomimic <cit.>, as well as in our real robot evaluation. Our contributions can be summarized as follows: * We propose Crossway Diffusion, a simple yet effective way to improve diffusion-based visuomotor policy learning using a self-supervised auxiliary loss during training. Compared to the baseline, Crossway Diffusion requires no additional computation during evaluation. * We confirm the effectiveness of the proposed method on multiple challenging visual BC tasks from different benchmarks and collect our own real-world robot manipulation dataset. * We conduct detailed ablations on multiple design choices of Crossway Diffusion as well as one contrastive learning-based auxiliary loss. We observe that two variants of Crossway Diffusion also bring consistent improvements over the baseline, indicating the robustness of our proposed method. § PRELIMINARIES Behavioral Cloning We consider a simple behavioral cloning (BC) setting over a Markov Decision Process (MDP), described by the tuple (𝒮, 𝒜, P), where s∈𝒮 represents the state, a∈𝒜 is the action, and P denotes the transition dynamics given by P(s'|s, a). A trajectory consists of a sequence of state-action pairs {s_0, a_0, s_1, a_1, …, s_T, a_T}. Our goal is to train a robot policy π that best recovers an unknown policy π^* using a demonstration dataset 𝒟 = { (s_i, a_i) } collected by π^*. Specifically, the robot policy π operates on a trajectory basis: π(A_t | S_t ), where S_t={s_t- T_s + 1, s_t - T_s + 2, ..., s_t} is the given short history of states and A_t={a_t, a_t + 1, ..., a_t + T_a-1} is the sequence of future actions to predict. T_s and T_a represent the lengths of these two sequences respectively. Diffusion Models Diffusion models <cit.> are generative models that iteratively generate samples that match the data distribution. In the forward process, the original data is progressively corrupted by a sequence of noise q(x^k|x^k-1), where k is the current diffusion step and there are K steps in total. The diffusion model then uses the learned reverse (backward) process p_θ(x^k-1|x^k) to denoise the corrupted data. By iteratively denoising, a diffusion model generates synthesized data x̂ that approximates the original data distribution q(x^0), starting from a random prior p(x^K): p_θ(x^0) := ∫ p_θ(x^0:K)dx^1:K = ∫ p(x^K) ∏_k=1^Kp_θ(x^k-1|x^k)dx^1:K Typically the random prior p(x^K) is a standard Gaussian distribution, and the denoising process is parameterized by the following Gaussian: p_θ(x^k-1|x^k) = 𝒩(x^k-1|μ_θ(x^k,k),Σ^k), where the mean μ_θ(x^k,k) is estimated by the model, usually implemented as a neural network parameterized by θ. Diffusion models can be conditioned on latent representations <cit.>, which in turn conditions the generated data distribution on the given representation: p_θ(x^k-1|x^k, h) = 𝒩(x^k-1|μ_θ(x^k, k, h),Σ^k). This conditioning technique is widely used in text-conditioned image generation, where text representations serve as the condition. We find that such conditioning is also useful for adapting a diffusion model into a visual robot policy. 
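For reference, the conditioned reverse (denoising) process described above can be written as a short sampling loop. The following PyTorch-style sketch is illustrative only: the model interface, the layout of the noise-schedule arrays, and the tensor shapes are assumptions rather than details taken from the paper.

import torch

def conditional_reverse_diffusion(model, h, K, alphas, alpha_bars, sigmas, shape):
    # Start from the Gaussian prior p(x^K) and iteratively sample
    # p_theta(x^{k-1} | x^k, h) = N(mu_theta(x^k, k, h), Sigma^k).
    # alphas, alpha_bars, sigmas are assumed to be tensors of length K + 1 indexed by step.
    x = torch.randn(shape)
    for k in range(K, 0, -1):
        eps = model(x, k, h)  # predicted noise, conditioned on the representation h
        mean = (x - (1 - alphas[k]) / torch.sqrt(1 - alpha_bars[k]) * eps) / torch.sqrt(alphas[k])
        z = torch.randn_like(x) if k > 1 else torch.zeros_like(x)
        x = mean + sigmas[k] * z  # sample x^{k-1}
    return x  # approximate sample from the conditioned data distribution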
When using the visual state representations h(s) as conditions, the diffusion model essentially models p_θ(a|s) which is the desired form of an action policy. The details of this conditioning are introduced in Section <ref>. Diffusion-based Policy Diffusion models <cit.> have demonstrated the great ability in image generation and other data generation tasks. Recent works in robotics <cit.> formulate the robot policy learning as an action sequence generation problem, taking advantage of sequence modeling methods like Transformer and diffusion model. <cit.> concatenate the low-dimensional system states and the action at each time step as a vector and models multiple state-action pairs as a matrix (image) generation problem. However, such a configuration is not feasible for visuomotor policy learning due to the high dimensionality of the visual observations. Diffusion Policy <cit.> tackles the challenge by generating only action sequences using DDPM <cit.>, while the generation process is under the condition of visual observations. § METHOD Crossway Diffusion extends Diffusion Policy <cit.> by introducing (1) a state decoder for state reconstruction and (2) an auxiliary reconstruction loss to be jointly optimized with the diffusion loss during training. We follow the standard problem formulation that treats behavioral cloning as an action sequence generation problem conditioned on states. Specifically, given a sequence of T_s states S_t={s_t- T_s + 1, s_t - T_s + 2, ..., s_t} which contains both visual states and low-dimensional states, the diffusion model (action encoder and decoder) generates a sequence of T_a actions A_t={a_t, a_t + 1, ..., a_t + T_a-1} (see Figure <ref>). Note that the actual sequence length T_p generated by the diffusion model is usually larger than T_a, due to padding. After the agent finishes executing A_t, the diffusion model will generate the consequent action sequence given the current state sequence S_t + T_a, which formulates a closed-loop control. Architectural Overview The overall architecture of Crossway Diffusion is presented as Figure <ref>. We introduce a new state decoder module D_S for state reconstruction, extending the existing Diffusion Policy model composed of a state encoder  E_S, an action encoder E_A, and an action decoder D_A. The action encoder and decoder make up the diffusion model for generating action sequences by running the diffusion process iteratively. The state encoder provides conditioning from the states, which modulates the generation process. For any state sequence S_t, the state encoder, which is a modified ResNet18 <cit.> introduced by <cit.>, extracts a batch of visual embeddings h_t,img from every single image in S_t. The visual embeddings are then concatenated with other low dimensional states S_t, low-dim to format the observation condition. The process above is described as Equation <ref>: S_t ={ S_t,img, S_t, low-dim} h_t,img =E_S(S_t,img) h_t = h_t, img⊕ S_t, low-dim The action encoder E_A and decoder D_A are made of 1D conditional residual CNN blocks and down/upsampling modules similar to a UNet <cit.> in DDPM <cit.>. The action encoder E_A takes the noisy action sequence A_t^k at diffusion step k and the observation condition h_t and produces the representation X_t^k from the deepest layer and other tensors for skip connection X_t, skip^k from the shallower layers: X_t^k, X_t, skip^k =E_A(A_t^k, h_t, k) where t is the timestamp of a state or action trajectory (See Figure <ref>). 
X_t^k ∈ℝ^T × C, where T is the representation length along the time axis and C is the number of channels. The action decoder takes X_t^k, X_t, skip^k and the condition h_t to estimate the noise applied to A_t^k in the forward diffusion process. The condition h_t is applied between two convolutional layers in the residual block, using Feature-wise Linear Modulation (FiLM) <cit.>. Then we derive a (slightly) denoised action sequence A_t^k-1 from the estimated noise ϵ_θ and A_t^k using Equation <ref>. ϵ_θ = D_A(X_t^k, X_t, skip^k, h_t) A_t^k-1 = 1/√(α_k) ( A_t^k - (1-α_k)/√(1-ᾱ_k) ϵ_θ ) + σ_k z where z is randomly sampled from a normal distribution and has the same dimensions as A_t^k. α_k, ᾱ_k and σ_k are the diffusion-process parameters used in the original DDPM <cit.> paper, except that we use k instead of t as the diffusion step. During inference, the denoising process mentioned above is repeated iteratively K times, eventually generating a noiseless action sequence A_t^0 through the repeated application of the action encoder and decoder. A_t^0 is the action sequence our method generates for robot control, as shown in Figure <ref>. The state decoder D_S reconstructs the input states from a transformed intermediate representation, Ŝ_t = D_S( g(X_t^k ) ), where Ŝ_t denotes the reconstructed states and g(·) is the intersection transformation, which we discuss further in the following subsection and Section <ref>. The representation X_t^k is dubbed the intersection since both the flow of action sequence denoising and the flow of state sequence reconstruction pass through this tensor and then head to their corresponding destinations. For the same reason, we name our method Crossway Diffusion, whose key feature is that the two flows above intersect. For each source of the state, we assign a dedicated decoder for the best reconstruction results. To reconstruct the visual states, the state decoder is made of 2D residual CNN blocks and upsampling layers, similar to a 2D UNet decoder but without skip connections, while the other low-dimensional states are processed by two-layer MLPs operating on the vanilla intersection tensor. Notice that the reconstruction with the state decoder D_S is only used during training, serving as an `interpreter' that generates additional supervisory signals to train better intermediate representations. The reconstruction itself is not used during inference. Intersection Transformation for Image Reconstruction We transform the intersection tensor X_t^k before sending it to the visual state decoder, bridging the dimensional gap between the action sequence embedding and image embeddings. As shown on the right of Figure <ref>, we first divide the intersection tensor X_t^k along the time axis, so that it can be regarded as a list of vectors of length C. In the default setting of Crossway Diffusion, only the first vector is selected. The C elements of the first vector are equally split into 4 folds and then tiled as a C / 4 × 2 × 2 block B. The tiled block B is repeated multiple times in the two spatial dimensions so that its spatial resolution is a quarter of that of the desired reconstructed image along each spatial dimension. Finally, we encode the 2D pixel location (normalized to [-1, 1]) using the same method as NeRF <cit.>, and the positional embedding is concatenated to the repeated B along the channel axis, which completes the intersection transformation. The visual state decoder then reconstructs the image sequence from the concatenated tensor. 
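To make the description above concrete, here is a minimal sketch of one plausible implementation of the intersection transformation g(·) in PyTorch. The number of positional-encoding frequencies, the exact repetition arithmetic, and the assumptions that C is divisible by 4 and that the target resolution is a multiple of 8 are illustrative choices, not details confirmed by the paper.

import torch

def intersection_transform(X, out_hw, n_freqs=4):
    # X: intersection tensor of shape (B, T, C); Design A uses only the first time step.
    B, T, C = X.shape
    v = X[:, 0, :]                                   # (B, C)
    block = v.reshape(B, C // 4, 2, 2)               # tile the C elements as a (C/4, 2, 2) block
    H, W = out_hw
    reps_h, reps_w = (H // 4) // 2, (W // 4) // 2    # repeat up to a quarter of the target resolution
    feat = block.repeat(1, 1, reps_h, reps_w)        # (B, C/4, H/4, W/4)
    # NeRF-style positional encoding of pixel coordinates normalized to [-1, 1]
    ys = torch.linspace(-1, 1, H // 4)
    xs = torch.linspace(-1, 1, W // 4)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([gy, gx], dim=0)            # (2, H/4, W/4)
    enc = [torch.sin(coords * (2 ** i) * torch.pi) for i in range(n_freqs)]
    enc += [torch.cos(coords * (2 ** i) * torch.pi) for i in range(n_freqs)]
    pos = torch.cat(enc, dim=0).unsqueeze(0).expand(B, -1, -1, -1)
    return torch.cat([feat, pos], dim=1)             # input to the visual state decoder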
We also investigate other design choices of the intersection transformation in Section <ref>. Crossway Diffusion Loss In addition to the DDPM loss ℒ_DDPM used in Diffusion Policy <cit.>, the reconstruction task provides an auxiliary self-supervised loss ℒ_Recon., which is the Mean Squared Error (MSE) between the reconstructed states and the original input states. The reconstruction loss ℒ_Recon. is jointly optimized along with ℒ_DDPM by simple addition. The total loss for Crossway Diffusion is given in Equation <ref>, where ϵ is the noise applied to A_t^0 to construct A_t^k, and we find that α=0.1 is a generally good setting without extensive hyperparameter search. ℒ_DDPM = MSE(D_A(E_A(A_t^k, h_t, k), h_t), ϵ) ℒ_Recon. = MSE(S_t, Ŝ_t) ℒ_Crossway = ℒ_DDPM + αℒ_Recon. § EXPERIMENT We evaluate Crossway Diffusion on four challenging simulated tasks and one real-world task from multiple benchmarks. Furthermore, we explore two variants of Crossway Diffusion regarding the intersection transformation and one direct extension of Crossway Diffusion taking advantage of contrastive learning. Through a series of experiments, we confirm that Crossway Diffusion leads to better performance than the vanilla Diffusion Policy baseline <cit.> over all the tested tasks, especially when the demonstrations are varied in proficiency. We also show that our method is robust to unseen obstructions in real-world environment settings. §.§ Task and Dataset We follow <cit.> and choose three challenging tasks, Square, Transport, and Tool Hang, from Robomimic <cit.>, and Push-T from Implicit Behaviour Cloning (IBC) <cit.>. In addition, we build a real-world robot arm manipulation environment and collect our own data for the task Duck Lift. In the `Square' task, the robot needs to fit the square nut onto the square peg. The `Transport' task entails the collaborative effort of two robot arms to transfer a hammer from a closed container situated on one table to a designated bin on another table. One arm is responsible for retrieving and passing the hammer, while the other arm manages the trash disposal and receives the passed hammer. In `Tool Hang', the objective is to assemble a frame that comprises a base piece and a hook piece. The robot needs to insert the hook into the base and then hang a wrench on the hook. Additionally, the `Push-T' task involves pushing a T-shaped block (gray) onto a target location (green) in a 2D space. Regarding the expert proficiency of demonstrations, we investigate two datasets with the Transport task: `ph' (proficient-human demonstrations) and `mh' (multi-human demonstrations), as defined by the original Robomimic <cit.>. `mh' is designed to have lower proficiency on the task on average compared to `ph'. In the real-world task `Duck Lift', the objective is to pick up a rubber duck using a robot arm with a gripper and images from two cameras. The first camera is stationary and offers a third-person perspective of the operating space, while the second camera is mounted on the gripper, providing a first-person view for grasping. The action space encompasses both the 3D position of the robot arm and a binary signal for the gripper's opening and closing. A visual reference for all these tasks is provided in Figure <ref>. Detailed information on all the datasets is summarized in Table <ref>. §.§ Main Results Evaluation Metrics Consistent with prior studies, we adopt the success rate as the performance metric for the following tasks: Square, Transport, Tool Hang, and Duck Lift. 
In the case of the Push-T task, we measure the extent of target location coverage achieved by the T block, which is the ratio of the covered area to the total area. All the models are trained for 500 epochs and evaluated at the end of training. For all diffusion-based methods, we benchmark the exponential moving average (EMA) version of the model for better stability, as suggested by <cit.>. More training details are available in Appendix <ref>. Figure <ref> shows example episodes using Crossway Diffusion for each task. Simulated Experiments For all the simulated tasks, we report the average performance over 1000 randomly initialized episodes × 3 models trained with different random seeds in Table <ref>. That is, each score in Table <ref> is an average of 3000 experiments in total. From the comparison, the proposed Crossway Diffusion consistently outperforms the baseline Diffusion Policy <cit.> in all datasets. Especially, we observed a 15.7% improvement in Transport, mh, emphasizing the effectiveness of our method when the demonstration data is varied in proficiency. Please refer to Figure <ref> for the example episodes generated by our method. Real-world Experiments The results of Duck Lift are reported in Table <ref>. The success rate of picking up the duck is measured over 20 episodes. For each episode, we place the duck at a random initial position but keep the positions consistent across tested methods. The results show that our method achieves a higher success rate than the baseline Diffusion Policy <cit.>. We further validate the robustness of our method under various obstructions as shown in Figure <ref>. None of these obstructions are present in the training dataset. Figure <ref> (a)(b)(c) contain different unseen objects placed on the table as a distraction. In Figure <ref> (d), the duck is only visible to the second camera. Our method successfully executes the task in all the above scenarios indicating robustness to obstructions. Figure <ref> (e) shows a scenario where the duck is not clearly visible to both cameras. In this case, as expected our method fails to locate and lift the duck. Qualitative Results on Image Reconstruction In Figure <ref>, we show multiple pairs of original (left) and reconstructed (right) images randomly selected from the validation set. For simulated environments, Crossway Diffusion provides surprisingly well-reconstructed images given the abundance of data. For real-world environments, the reconstructions preserve most of the robot and duck structures, while failing on the details (mouth and eyes) of the duck. However, we note that bad reconstruction, especially on non-task-related details, does not necessarily mean poor performance, since the reconstruction task helps the learning process. More reconstruction examples are available in Appendix <ref>. §.§ Ablations On Intersection Transformation We investigate two more designs of the intersection transformation displayed in Figure <ref> over all five simulated datasets. Specifically, we are interested in which part of the intersection X_t^k benefits policy learning most while leaving other operations untouched. As shown in Figure <ref>, Design A is the default Crossway Diffusion introduced in Section <ref>. In Design B, the first C / 2 channels for all vectors in intersection X_t^k are selected for reconstruction, while the rest is left dedicated to the denoising process. In contrast, Design C takes advantage of all vectors in X_t^k for the reconstruction. 
Additionally, in both Designs B and C, each vector is independently projected by a linear layer to match the target number of channels C / 4. The subsequent operations, such as the reshape and repeat in Figure <ref>, are kept the same. The results presented in Table <ref> show that all the design variants of the intersection transformation consistently outperform the baseline Diffusion Policy. Such observations validate the effectiveness and robustness of introducing the auxiliary reconstruction objective. Though different designs show advantages on different tasks, we choose Design A as the default due to its computational simplicity. On Auxiliary SSL Task In the default Crossway Diffusion, both images and low-dimensional states are reconstructed from the intersection. We first benchmark a simple variant of Crossway Diffusion called Crossway-Visual on Push-T, which reconstructs only the image states rather than all the states. Furthermore, we verify the effectiveness of the auxiliary reconstruction task by comparing it to another self-supervised learning approach based on contrastive learning, inspired by CURL <cit.>. In particular, we independently perform random crops on all images of an observation sequence S_t twice to get two augmented sequences S_t,a1 and S_t,a2. The model takes S_t,a1 to produce the intersection X_t,a1^k, while S_t,a2 is processed by the exponential moving average (EMA) version of the model to get X_t,a2,ema^k. The intuition is that, due to the semantic similarity between S_t,a1 and S_t,a2, the intersections X_t,a1 and X_t,a2,ema should also be similar in the latent space. We follow CURL <cit.> and maintain a learnable matrix W. For each batch of samples, we calculate the similarity matrix M_sim between all samples in the same batch using matrix multiplication (first line in Equation <ref>). The contrastive loss ℒ_CURL is then formulated as maximizing the similarity between views of the same sample while minimizing the similarity between different samples, which is equivalent to a cross-entropy loss. Finally, the contrastive loss ℒ_CURL is jointly optimized with the diffusion loss ℒ_DDPM, similar to Equation <ref>. The loss computation is presented in Equation <ref>, where sg(·) denotes the stop-gradient operation, b is the batch size, and we set α=0.1, the same as the default setting. We name this configuration Crossway-CL. M_sim = X_t,a1^k · W ·sg(X_t,a2,ema^k)^T ℒ_CURL = CrossEntropyLoss(M_sim, range(b) ) ℒ_Crossway-CL = ℒ_DDPM + αℒ_CURL The results are shown in Figure <ref>. From Figure <ref>, we observe that the variant which only reconstructs visual observations (Crossway-Visual) achieves performance that lies between Crossway Diffusion and the baseline. This finding confirms the significance of reconstructing both images and low-dimensional states. The contrastive learning variant Crossway-CL yields much worse performance compared to the baseline, indicating that not all self-supervised auxiliary losses benefit policy learning. Such observations align with findings in the online reinforcement learning case <cit.>.
Figure: Reward distribution on Push-T.
Table: Ablation studies on Push-T.
Method | Push-T
Crossway-A (Default) | 0.863
Crossway-B | 0.888
Crossway-C | 0.841
Contrastive | 0.362
Diffusion Policy | 0.815
Table: Ablation studies on two simulated environments (Push-T and Tool Hang), comparing Crossway-A and Crossway-C.
Method | Push-T | Tool Hang
Crossway-A | 0.863 | 0.784
Crossway-C | 0.841 | 0.813
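As a concrete reference for the Crossway-CL variant, the CURL-style auxiliary loss above can be sketched as follows, assuming the intersection tensors are flattened to vectors of dimension d and using detach() in place of the stop-gradient sg(·); this is an illustrative snippet, not the authors' implementation.

import torch
import torch.nn.functional as F

def curl_auxiliary_loss(X_a1, X_a2_ema, W):
    # X_a1: flattened intersections from the online model, shape (b, d)
    # X_a2_ema: flattened intersections from the EMA model, shape (b, d)
    # W: learnable bilinear matrix of shape (d, d)
    logits = X_a1 @ W @ X_a2_ema.detach().t()  # similarity matrix M_sim; detach() plays the role of sg(.)
    labels = torch.arange(X_a1.shape[0])       # matching augmented views lie on the diagonal
    return F.cross_entropy(logits, labels)     # L_CURL, added to L_DDPM with weight alpha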
§ RELATED WORK Behavioral Cloning Behavioral Cloning (BC) <cit.> is a straightforward but surprisingly effective way to obtain robot policies due to its simplicity compared to RL. With pre-collected state-action pairs, BC learns a policy by fitting the dataset <cit.>, with additional techniques like reward labeling / IRL <cit.>, distribution matching <cit.>, and incorporating extra information <cit.>. Apart from explicitly generating output actions, BC can be done implicitly, where an energy-based model is learned to model the action distribution <cit.>. Implicit BC has been found to be effective in real-world robot tasks. BC also boosts some online RL algorithms like TD3+BC <cit.>, DeepMimic <cit.>, and more <cit.>. Recent diffusion-model-based BC can be viewed as a more advanced approach to matching the behavior distribution, which potentially helps mitigate the distribution shift problem in BC <cit.>. Sequential Modeling for Offline-RL and Imitation Learning Sequential modeling <cit.> is a recent direction for solving offline-RL and imitation learning problems. The key is to optimize a policy on a trajectory basis from pre-collected experiences, with a reward signal (offline-RL) or without it (imitation learning). To model trajectories composed of state-action-reward tuples, the Transformer <cit.> was first adopted for this problem in light of its success in modeling natural language. In this formulation, state-action-reward tuples are regarded as equal units <cit.> or modeled with Markovian properties <cit.> for better long-term modeling. Other works extend the formulation to online learning <cit.>, hindsight matching <cit.>, and bootstrapping <cit.>. Recently, diffusion-based models <cit.> have also been adopted for this problem <cit.>, showing promising results on robot tasks <cit.> and in combination with RL objectives <cit.>. Orthogonal to robot policy learning, diffusion models can also be applied to data augmentation and synthesis for scaling <cit.>, or to robot planning using text-conditioned video generation models <cit.>. Self-supervised learning Self-supervised learning (SSL) <cit.> is used to learn data representations without task labels. SSL is commonly used to (pre-)train task-agnostic foundation models <cit.>, or as an auxiliary task together with other learning paradigms. Similarly, SSL can be combined with policy learning in multiple ways, which we briefly categorize into two: pre-training with SSL <cit.>, and policy learning jointly with auxiliary SSL tasks <cit.>. Studies <cit.> have shown that different ways of combining policy learning with SSL lead to different outcomes. In this work, we follow the joint learning style that optimizes the diffusion and reconstruction objectives together. § LIMITATION Even though our self-supervised learning objective improves performance, our method still requires the same large number of diffusion iterations during inference as the baseline Diffusion Policy <cit.>. This remains a common challenge for diffusion-based policies that hinders responsiveness in some highly dynamic real-world environments. To this end, we use DDIM <cit.> to accelerate the inference process as suggested in <cit.>. § CONCLUSION In this paper, we investigate how SSL can be used to improve diffusion-based visual behavioral cloning. We propose Crossway Diffusion, which introduces an extra auxiliary reconstruction objective in addition to the existing diffusion objective during training. 
Compare to the baseline, Crossway Diffusion requires no additional computation during evaluation and shows consistent improvements over multiple challenging tasks including one real-world task. Especially, we observe a significant performance boost when the demonstrations are varied in proficiency. We hope our work inspires further exploration of diffusion-based policies and how to take advantage of the rapidly evolving SSL techniques. § APPENDIX §.§ Training details §.§.§ Data preprocessing The configurations are in accordance with <cit.>. To scale action values within the range of [-1, 1], we employ min and max normalization. However, the action dimension related to rotation is not subjected to normalization. In environments with velocity control action space, we utilize the 3D axis-angle representation mentioned in <cit.> for rotation. On the other hand, for environments with positional action space, we adopt the 6D rotation representation proposed in <cit.>. During training, we apply random crop augmentation, whereas, during inference, we use center crop. §.§.§ Model Architecture As introduced in Section <ref>, Crossway Diffusion composes of four modules, state encoder, state decoder, action encoder, and action decoder. We follow the same state encoder, action encoder, and action decoder design reported in Diffusion Policy <cit.>. Each source of the state has its own state decoder for the reconstruction task and these state decoders can be categorized into two types based on the type of the state: visual state decoder and low-dimensional state decoder. The architecture of the visual state decoder is shown in Figure <ref>. The numbers in the blocks indicate the number of output channels except for ConvTranspose. ConvTranspose doubles the spatial resolution of the input tensor while keeping the number of channels identical. The low-dimensional state decoder is a three-layer MLP. The widths of the hidden layers are in the ratios of 4:2:1 compared to the width of the low-dimensional states. §.§.§ Hyperparameters We align most of the hyperparameters with Diffusion Policy <cit.>. The observation horizon, action horizon, and action prediction horizon are set to 2, 8, and 16 respectively. The learning rate and weight decay are set to 1e-4 and 1e-6 respectively. For both training and inference, we employ 100 diffusion iterations. Detailed hyperparameters regarding image reconstruction and the numbers of learnable parameters are reported in Table <ref>. §.§ More Reconstruction examples In Figure <ref>, we provide additional examples of image reconstruction by visual state decoder of Crossway Diffusion. §.§ Code and dataset The code and the real-world dataset Duck Lift will be publicly available soon.
http://arxiv.org/abs/2307.10183v1
20230702115604
Contextual Beamforming: Exploiting Location and AI for Enhanced Wireless Telecommunication Performance
[ "Jaspreet Kaur", "Satyam Bhatti", "Olaoluwa R Popoola", "Muhammad Ali Imran", "Rami Ghannam", "Qammer H Abbasi", "Hasan T Abbas" ]
cs.IT
[ "cs.IT", "cs.SY", "eess.SY", "math.IT" ]
Contextual Beamforming: Exploiting Location and AI for Enhanced Wireless Telecommunication Performance Jaspreet Kaur, Satyam Bhatti*, Olaoluwa R Popoola, Muhammad Ali Imran, Rami Ghannam, Qammer H Abbasi and Hasan T Abbas James Watt School of Engineering, University of Glasgow, United Kingdom Email: {j.kaur.1, s.bhatti.2*}@research.gla.ac.uk {Olaoluwa.Popoola, Muhammad.Imran, Rami.Ghannam, Qammer.Abbasi and Hasan.Abbas}@glasgow.ac.uk August 1, 2023 ======================================================================================================================================================================================================================================================================================================================================================== The pervasive nature of wireless telecommunication has made it the foundation for mainstream technologies like automation, smart vehicles, virtual reality, and unmanned aerial vehicles. As these technologies experience widespread adoption in our daily lives, ensuring the reliable performance of cellular networks in mobile scenarios has become a paramount challenge. Beamforming, an integral component of modern mobile networks, enables spatial selectivity and improves network quality. However, many beamforming techniques are iterative, introducing unwanted latency to the system. In recent times, there has been a growing interest in leveraging mobile users' location information to expedite beamforming processes. This paper explores the concept of contextual beamforming, discussing its advantages, disadvantages and implications. Notably, the study presents an impressive 53% improvement in signal-to-noise ratio (SNR) by implementing the adaptive beamforming (MRT) algorithm compared to scenarios without beamforming. It further elucidates how MRT contributes to contextual beamforming. The importance of localization in implementing contextual beamforming is also examined. Additionally, the paper delves into the use of artificial intelligence schemes, including machine learning and deep learning, in implementing contextual beamforming techniques that leverage user location information. Based on the comprehensive review, the results suggest that the combination of MRT and Zero forcing (ZF) techniques, alongside deep neural networks (DNN) employing Bayesian Optimization (BO), represents the most promising approach for contextual beamforming. Furthermore, the study discusses the future potential of programmable switches, such as Tofino, in enabling location-aware beamforming. Beamforming Classification, Adaptive and Contextual Beamforming, Machine and Deep Learning, Wireless Communication Networks, Localisation § INTRODUCTION Every subsequent generation of cellular communication has brought advancements in data speeds and capabilities, with each generation offering significant improvements over its predecessor <cit.>. The first-generation (1G) introduced the concept of cell phones, while the second-generation (2G) enabled text messaging services. The advent of the third-generation (3G) brought about internet streaming capabilities, and the fourth-generation (4G) revolutionised the mobile landscape with broadband internet coverage. However, as user demands continue to escalate rapidly, 4G networks have reached their capacity limits, necessitating the need for more data to cater to the growing number of smartphones and smart devices. 
The arrival of fifth-generation (5G) cellular technology promises to address these challenges by providing networks capable of carrying significantly higher traffic volumes than currently available networks <cit.>. Expected to evolve at a rate ten times faster than the long-term evolution of 4G (LTE), 5G networks have the potential to facilitate the development of technologies such as augmented reality (AR), autonomous vehicles, and the internet of things (IoT) <cit.>. At the core of 5G technology are five key advancements: full-duplex, massive multi-input multi-output (MIMO), millimetre waves (mmWaves), smart cells, and beamforming (BF). Smartphones and electronic devices typically operate at radio frequencies (RF) below 6 GHz <cit.>, a range that is becoming increasingly congested due to the proliferation of communication technologies and multiple mobile carriers. The limited RF spectrum available in the industrial, scientific and medical (ISM) band poses challenges for accommodating the growing demand for data transmission, resulting in slower services and more frequent lost connections <cit.>. To address this issue, researchers have explored higher frequency bands ranging from 30 to 300 GHz <cit.>. Although millimetre waves (mm-waves) have been utilized in satellite communication for some time, their use in mobile communications is a recent development. While offering a wider frequency spectrum, mm-waves face a major challenge due to their limited ability to penetrate obstacles such as infrastructure. This characteristic leads to signal loss or absorption when mm-waves encounter environmental obstacles <cit.>. Subsequently, smart cell networks provide a solution to overcome this problem. Unlike traditional cellular connections relying on large high-power cell towers that transmit signals over long distances, smart cell networks leverage thousands of small low-power access points (APs) <cit.>. These APs are strategically placed in close proximity and grouped spatially to relay signals around obstructions. By eliminating the reliance on line of sight (LOS), smart cell networks ensure uninterrupted cellular service, even when users move behind obstacles. When a user equipment (UE) travels behind an obstruction, it seamlessly switches to a new AP, maintaining a consistent connection <cit.>. Another significant advancement in 5G technology is the use of Massive MIMO, which involves deploying a higher number of antennas compared to traditional MIMO systems. Massive MIMO leverages beamforming techniques to direct wireless signals towards their intended receivers and enables spatial multiplexing of multiple data streams over the same frequency band. Despite its drawbacks, Massive MIMO can significantly enhance communication performance and multiply the capacity of a mobile ad-hoc network by a factor of 22 or more <cit.>. In a time-division multiplexing system, user equipment (UE) needs to alternate between transmitting and receiving, which can introduce delays and reduce communication efficiency. In traditional cellular base stations (BS), antennas can only broadcast or receive signals at a given time. Multiplexing can improve performance, but transmit and receive signals are typically propagated at different frequencies. Conventional cellular antennas broadcast signals in all directions simultaneously, leading to potential interference <cit.>. 
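To illustrate the basic mechanism behind beamforming before the techniques are surveyed in detail, the following NumPy sketch shows how progressive phase shifts across a uniform linear array steer its main lobe toward a chosen direction. The array size, the half-wavelength element spacing, and the broadside angle convention are illustrative assumptions rather than parameters of any specific system discussed in this review.

import numpy as np

def steering_weights(theta_deg, n_elements=8, spacing=0.5):
    # Progressive phase shifts that point a uniform linear array's main lobe
    # toward the angle theta (measured from broadside), with element spacing
    # given in wavelengths.
    theta = np.deg2rad(theta_deg)
    n = np.arange(n_elements)
    return np.exp(-1j * 2 * np.pi * spacing * n * np.sin(theta)) / np.sqrt(n_elements)

def array_factor(weights, angles_deg, spacing=0.5):
    # Radiated pattern of the weighted array over a sweep of angles.
    n = np.arange(len(weights))
    a = np.exp(1j * 2 * np.pi * spacing * np.outer(np.sin(np.deg2rad(angles_deg)), n))
    return np.abs(a @ weights)

# Example: steer an 8-element array toward 30 degrees and inspect the resulting pattern,
# which peaks at the steered direction and falls off elsewhere.
w = steering_weights(30.0)
pattern = array_factor(w, np.linspace(-90, 90, 181))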
Figure <ref> provides an illustration of the beamforming process (broadcasting signals in a specific direction) in rural, semi-urban, urban, and highway areas. Beamforming, on the other hand, offers several advantages in cellular communication. It enables more reliable and faster data transmission by establishing a more direct connection between transmitters and receivers. Beamforming has become an essential technology in various applications, including the 5G standard for cellular networks and radar-detection systems. However, implementing beamforming requires significant processing resources, which can pose challenges related to cost, hardware, and energy consumption. In the past, radar systems relied on mechanically moving and steering antennas to direct signals. However, advancements in signal processing techniques have made it possible to manipulate radio waves and focus them towards specific locations using electromagnetic beams. This eliminates the need for physical movements and reduces dependence on the physical structure of antennas. The development of antenna systems for 5G networks must meet the requirements of compact size and low power consumption. To enhance spectrum efficiency and throughput, antenna arrays with larger dimensions, such as 64 x 64 MIMO and beyond, are being utilized. However, the accuracy of these antenna arrays significantly affects the performance of beamforming. As wavelengths decrease, component sizes, including RF transceivers with features like ADCs, also decrease. Exploring new materials, such as 40 nM CMOS, is helping to reduce the size and power consumption of essential components in 5G networks. Traditional RF power amplifiers made with materials like GaAs and other III-V semiconductors are not power-efficient and do not integrate well with other capabilities. This is where advancements in 40 nM CMOS technology can play a role in further reducing the size and power consumption of these critical components. Moreover, as the number of beams created by individual gNBs increases, more advanced signal processing techniques, including neural network methods, are required. This pushes power budgets and space restrictions even further. Despite these challenges, beamforming holds a promising future in various application areas. Contextual beamforming is a promising technique for enhancing the performance of 5G communication systems. It enables the use of millimeter wave frequencies and massive MIMO technologies to achieve high data rates and low latency. Contextual beamforming adapts beamforming parameters in real-time based on environmental conditions and user requirements. This is achieved through feedback from the network and user devices, as well as the utilization of machine learning algorithms to optimize the beamforming process. Contextual Beamforming has applications in mobile edge computing (MEC), where low-latency computing and networking services are provided to mobile users. By dynamically adjusting the beamforming parameters based on the location, movement, and traffic conditions of the users, the quality of the wireless links between the user devices and the MEC servers can be improved. In VR/AR, contextual beamforming can improve the quality of audio and video streams used in VR/AR applications by selectively enhancing relevant signals and suppressing irrelevant or distracting ones. To optimize the beamforming process and make it more efficient, AI is used in 5G technology. 
AI algorithms analyze real-time data from the network to determine the optimal beamforming pattern based on various factors, such as the user's location and behaviour, and the surrounding environment. Additionally, AI can help with beamforming in scenarios involving multiple users, where each user requires a different beamforming pattern. By dynamically adjusting the beamforming patterns for each user, optimal signal strength can be ensured. In complex scenarios involving multiple users and dynamic environments, AI is a crucial tool for optimizing the beamforming process in 5G networks. §.§ Contribution to the Literature In our review paper, we analyzed the role of Beamforming in the field of mobile communication. We conducted a review of the applications of beamforming in the algorithms, antenna fabrication, and the discovery of new AI (artificial intelligence) approaches. Our article is an effort to provide a review of contextual beamforming. The following are the major contributions of this article: * We shortlisted research articles related to beamforming techniques that can help in context-aware beamforming. * We review the literature on beamforming types both standard and using ML and AI techniques. * Various ML techniques facilitated the estimation of user location and beam management in the study. * We investigated the techniques used for the optimization of beamforming with the help of ML. * We highlighted the challenges associated with using ML techniques for contextual beamforming. §.§ State of the Art The field of contextual beamforming in 5G technology is in a constant state of evolution, with ongoing research and development efforts aimed at improving its efficiency and effectiveness. In recent years, a surge in the use of conventional, machine learning (ML), and artificial intelligence (AI) techniques for beamforming has been observed. Several recent advances have been made in this area. By considering location-unaware systems with benchmarking techniques <cit.>, a location-aware system can be developed and make location estimation more accurate. Additionally, opportunistic beamforming is used as feedback to smart antennas using channel delay information<cit.>. In <cit.>, the authors propose a recursive matrix shrinkage method to estimate the interference-plus-noise covariance matrix along with the desired signal steering vector mismatch. A two-stage design approach was utilized in <cit.>, with the first stage dealing with beamforming, and the second with adaptive power allocation and modulation. Another recent study by <cit.> proposed a novel and general approach to deriving the statistical distribution of the signal-to-noise ratio (SNR) by exploiting the array structure, beamforming type, and slow fading channel coefficients. This approach was used to design power and modulation adaptation strategies. <cit.> presented the scheme that uses coordinated beam search from a small beam dataset within the error offset, and then the selected beams are used to guide the search for beam prediction. Additionally, <cit.> proposed an end-to-end deep learning technique to design a structured compressed sensing (CS) matrix that is well-suited to the underlying channel distribution. This technique leverages sparsity and the spatial structure that appears in vehicular channels. In contrast, <cit.> noted that current millimetre-wave (mmWave) beam training and channel estimation techniques do not typically make use of prior beam training or channel estimation observations. 
Moreover, <cit.> presented that determining the optimal beamforming vectors in large antenna array mmWave systems necessitates significant training overhead, which can have a significant impact on the efficiency of these mobile systems. However, a significant drawback in these studies was that limited ML approaches were discussed, and the optimization scope and implementation of proposed beamforming algorithms were not considered in real environments. As various ML techniques have been adopted for BF, this paper aims to provide a detailed review of different ML-based BF techniques. These ML techniques include the procedure to preprocess the input data and various ML algorithms in any environment. Our systematic review goes beyond existing literature, showcasing how various ML techniques can be used to screen large numbers of beamforming approaches for potential location estimation applications and to optimize the approaches using high computational power. Accordingly, the following sections will describe the in-depth analysis of currently popular beamforming techniques and how AI can improve their overall performance by mitigating their limitations. §.§ Organisation of the Review The rest of the paper is organized depending on finding the gap from the state-of-the-art in the recent literature review and is described as follows. The adopted methodology is presented in section 2 of the systematic review, followed by the results and analysis of the systematic review are discussed in section 3. In addition, section 4 elaborates on the challenges associated with using the recent AI, ML, and DL techniques for contextual beamforming. Moreover, section 5 incorporates the potential applications of contextual beamforming. Subsequently, section 6 discusses our key contributions towards the development of Localisation and Beamforming. Lastly, section 7 of the systematic review concludes the article. § REVIEW METHODOLOGY This section of the systematic review presents our review methodology depending on the defined research objectives and questions that were used for shortlisting the relevant research articles on ML algorithms for contextual beamforming techniques. §.§ Research Objectives The four key objectives of our systematic review article are: RO1: To review the range of Adaptive beamforming and ML-based beamforming using priori user data. RO2: To identify the ML techniques used specifically for Contextual beamforming. RO3: From a practical perspective, identify the specific ML and optimization techniques used for real-time implementation. RO4: To identify ML algorithms specifically used for the beamforming for low latency, high throughput and SINR. §.§ Research Questions Our systematic review aims to answer the following four research questions: RQ1: What are the various location-assisted BF techniques? RQ2: What are the datasets required for classifying the Contextual Beamforming? RQ3: How can the Contextual Beamforming models be optimized for real-time processing? RQ4: What are the different types of ML and AI techniques used with respect to Contextual Beamforming? §.§ Review Protocol For structuring our systematic review, we instigated a review protocol, and the following are the perquisites of the adopted analogy. In this section, we discuss the search strategy, inclusion criteria, exclusion criteria, and screening mechanisms for selecting relevant research papers. 
§.§.§ Search Strategy The most recent research papers from renowned publishing houses like IET, Science Direct, Nature, AIP, Wiley, IEEE Xplore, IOP Science, ACS Publications, and MDPI were taken into account during our review. We also included unreviewed papers from arXiv in our search. As a result, we evaluated and critically analysed the grey literature (research and publications produced by organisations not usually linked with academic or commercial publishing organisations) using the AACODS (Authority, Accuracy, Coverage, Objectivity, Date, Significance) criteria. We commenced by querying every repository containing relevant study items. To compile our study articles, we defined keywords like "Machine Learning," "Deep Learning," "Beamforming," "location," "context information," "5G," and "Vehicular Communication." Articles were screened based on their titles and abstracts as well as a full-text read of the papers. To link these keywords, we also created search strings using the Boolean operators AND and OR. §.§.§ Inclusion Criteria The following are the parameters used in the inclusion criteria. * We included only English-language articles involving data-driven approaches to beamforming using conventional and ML techniques that were pertinent to study issues such as poor data quantity and data quality. * We included pertinent articles facilitating the discovery of low-latency beamforming algorithms using ML methods before determining their eligibility. * We included comparative studies involving the optimization and robustness of beamforming techniques designed using ML services. * We targeted only articles that discussed ML for beamforming, location information, and publications on ML integration in contextual beamforming. §.§.§ Exclusion Criteria The following is a list of the exclusion criteria for shortlisting the research papers based on our research objectives and targeted research questions. * Research articles released in languages other than English. * Research papers without a complete text version. * Editorials, survey and review articles, abstracts, and short papers concerning secondary studies. * Articles that did not discuss how to combine ML techniques with beamforming. §.§.§ Screening Phase Articles were further screened in two phases. In the first phase, we examined the title and the abstract of each research article to check whether they satisfied our inclusion criteria. In the second phase, we further shortlisted our articles based on their full text. It is worth mentioning that the same piece of writing frequently appeared in various publications; for example, conference papers frequently appear later in journals. In such cases, we considered the original version of each item during the second screening stage. Each item was reviewed by at least two of the contributors of this paper, who classified it as pertinent, not pertinent, or requiring further investigation; an item in the last category was only finalized as relevant or not after the authors had discussed it. Survey and review papers were excluded from our review. Finally, each article was carefully classified and evaluated thematically. § RESULTS AND ANALYSIS OF THE REVIEW This section of the review paper summarizes the research articles that were shortlisted using the defined research objectives and aims to answer the predefined research questions. 
§.§ What are the recent BF techniques [RQ:1] With an extensive utilization of GPS coordinates worldwide, various location-assisted BF techniques are becoming exponentially important. Contextual beamforming and adaptive beamforming are two techniques used in signal processing to improve the quality of transmitted or received signals. Contextual beamforming refers to using prior knowledge about the environment to design a beamforming algorithm that optimizes the signal quality in that specific environment. This prior knowledge can include information about the location and number of signal sources, the acoustic properties of the environment, and other factors that can affect the quality of the received signal. On the other hand, adaptive beamforming refers to using real-time feedback from the received signal to continuously adjust the beamforming algorithm to improve the quality of the signal. This is particularly useful in dynamic environments where the signal sources or environmental conditions may change over time. Contextual beamforming is based on prior knowledge of the environment, while adaptive beamforming uses real-time feedback to continuously adjust the beamforming algorithm. Both techniques have their strengths and weaknesses. §.§.§ Adaptive Beamforming An adaptive beamformer is a tool for performing adaptive spatial signal processing using an array of transmitters or receivers. The signals are integrated in such a way that the signal intensity to and from a specific direction is increased. Signals from and to other directions are combined constructively or destructively, resulting in the degradation of the signal from and to the undesired direction. This method is utilised in both RF and acoustic arrays to achieve directional sensitivity without physically changing the receivers or transmitters <cit.>. Adaptive BF was first developed in the 1960s for military sonar and radar applications. There are various modern applications for BF, with commercial wireless networks such as long-term evolution (LTE) being one of the most popular. Adaptive BF's first applications in the military were primarily focused on radar and electronic countermeasures to counteract the effects of signal jamming. In phased array radars, BF can be seen. These radar applications use either static or dynamic/scanning BF; however, they are not truly adaptive. Adaptive BF is used in commercial wireless standards such as 3GPP LTE and IEEE 802.16 WiMAX to enable important services within each standard <cit.>. The concepts of wave transmission and phase relations are used in an adaptive BF system. A greater or lower amplitude wave is formed, for example, by delaying and balancing the received signal, using the concepts of superimposing waves. The adaptive BF system is adaptive in real-time to maximize or minimize desirable parameters, including the signal-to-interference noise ratio (SINR). There are numerous approaches to BF design, the first of which was achieved by Applebaum in 1965 by increasing the signal-to-noise ratio (SNR) <cit.>. This method adjusts the system parameters to maximize the power of the received signal while reducing noise (jamming or interference). Widrow's least mean squares (LMS) error method and Capon's maximum likelihood method (MLM) introduced in 1969 are two further approaches. The Applebaum and Widrow algorithms are quite similar in that they both converge on the best option. However, these strategies have difficulties in terms of implementation. 
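As a concrete reference for Widrow's LMS approach mentioned above, the following NumPy sketch shows the classic complex LMS weight update for an adaptive array. The step size, the availability of a known reference signal, and the snapshot format are illustrative assumptions.

import numpy as np

def lms_beamformer(X, d, mu=0.01):
    # X: array snapshots, shape (num_snapshots, num_elements); d: desired reference signal.
    # Iteratively adapts the complex weight vector w to minimize |d - w^H x|^2.
    n_snap, n_elem = X.shape
    w = np.zeros(n_elem, dtype=complex)
    for n in range(n_snap):
        x = X[n]
        y = np.vdot(w, x)              # array output w^H x
        e = d[n] - y                   # error against the reference signal
        w = w + mu * np.conj(e) * x    # LMS weight update
    return w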
Reed demonstrated a technique called sample matrix inversion (SMI) in 1974 <cit.>. Unlike Applebaum and Widrow's approach, SMI determines the adaptive antenna weights directly <cit.>. The Weiner solution can be used to create statistically optimal weight vectors for adaptive BF in data-independent BF design methods. On the other hand, the asymptotic 2^nd order statistics of SINR were assumed. Statistics fluctuate over time in cellular networks where the target is mobile and interferes with the cell area. An iterative update of weights is required to follow a mobile user in a time-varying signal propagation environment <cit.>. This enables the spatial filtering beam to adjust to the time-varying DOA of the target mobile user and to provide the desired signal to the user. To address the challenge of statistics (which can vary over time), adaptive algorithms that adapt to changing environments are frequently used to determine weight vectors. The functional block diagram of an adaptive array of n elements includes an antenna array of n elements and a digital signal processor with a feedback and/or control loop algorithm. The signal processing unit receives the data stream gathered by an array and computes the weight vector using a specific control method. On the contrary, the adaptive antenna array is divided into two categories: a) steady-state and b) transient state. These two categories are determined according to the array weights of stationary environment and time-varying environment. If the reference signal for the adaptive method is known from prior information, the system can update the weights adaptively through feedback <cit.>. To change the weights of the time-varying environment at every instance, several adaptive algorithms (mentioned in the further section) can be utilized. Figure <ref> shows the block diagram for adaptive BF which consist of a digital signal processor (DSP), RF chain, splitter, and N-phase shifter followed by antenna assembly along with an adaptive system providing feedback to shifters. §.§.§ Contextual Beamforming The ability to forecast the next location of the receiver which is based on tracking previous movements can be useful for creating intelligent applications like automobiles, robotics, augmented/virtual reality etc. The advancement of location prediction apps and services is enabled by the growth of methodologies for predicting and projecting the receiver's position in the future <cit.>. A wireless system, in general, controls a location-predicting framework by capturing and communicating critical data before application. The sender must be able to determine the receiver's location at any given time to interact effectively with them. Machine learning (ML) methods have already been used to predict the receiver's location. Context is created by recording, processing, and transcribing the receiver's status data at a certain time. Several machine learning algorithms, such as Deep Neural Networks, Convolution Neural Networks, Generative Adversarial Networks, and others, have been recognised as aiding in the technological advancement of location forecasting. Furthermore, depending on the application, machine learning algorithms can be modified and customised to match their objectives <cit.>. The majority of the existing mmWave beam tracking research focuses on communication-only protocols. 
The unusual beam tracking technique requires the transmitter to send information to the receiver, which then determines the angular position and sends it back to the transmitter. It is worth noting that in high-mobility communication circumstances, such as the one depicted in Figure <ref>, it is not enough to merely track the beam. To meet the crucial latency requirement, the transmitter should be capable of predicting the beam <cit.>. The state prediction and tracking designs in Figure <ref> is based on the classic Kalman filtering process. §.§.§ Location-assisted Beamforming The a priori information on the location of the user can enable the system to work more efficiently. The sorting of the prior information can reduce energy footprints. As an example, the branch predictor <cit.> in computer architectures can improve the flow in the instruction pipeline to achieve highly effective performance. In the case of location-aided or location-aware BF, a similar concept has been seen. Figure <ref> shows the block diagram for predictive or location-assisted BF which consist of a digital signal processor (DSP), RF chain, splitter, and N-phase shifter followed by antenna assembly along with a feedback loop providing current target user location to shifters. Line of sight (LOS) communication in mmWave transmission systems provides multi-gigabit data transmission with BF toward the user direction to mitigate the substantial propagation loss. However, abrupt performance degradation caused by human obstruction remains a major issue, thus using possible reflected pathways when blocking occurs should be considered <cit.>. In this line, location is a critical factor for Contextual beamforming in 5G networks because 5G relies on higher frequencies and smaller cells than previous generations of wireless networks. These higher frequencies have shorter wavelengths, which means that the signal is more easily obstructed by obstacles such as buildings, trees, and other objects. As a result, the location of the user and the position of the cell tower are crucial for ensuring reliable and efficient communication. Contextual beamforming in 5G networks involves directing the transmission and reception of signals towards the user's location, which can significantly improve signal quality and reduce interference from other sources. This can be done using beamforming techniques, which use phased array antennas to focus the signal in a specific direction towards the user. By adjusting the direction and shape of the beam, beamforming can improve signal strength and quality, reduce interference, and increase the capacity of the network. In addition to beamforming, 5G networks also rely on other location-based technologies, such as geolocation and network slicing. Geolocation can be used to determine the user's location and provide location-specific services, while network slicing allows for the creation of virtualized networks with different performance characteristics for different locations and applications. The usage of location-aware beamforming (BF) and interference mitigation techniques in ultra-dense 5G networks composed of densely scattered access nodes (AN) has been investigated in the literature. The development of user environment area networks (UEAN) with short distances in a packed environment results in higher levels of signal interference, but network densification enhances the chance of line-of-sight (LoS) and, as a result, leads to more accurate UE placement. 
Accurate UE placement in such dense deployments enables BF and interference mitigation to exploit the spatial dimensions. The accuracy of currently available radio network positioning systems is inferior to that of fibre optic communication systems in radar stations and atomic clock-based satellite navigation systems. Future 5G networks are expected to provide positioning accuracy on the order of one meter. Lu et al. <cit.> proposed approaches such as weighted centroid geometric (WCG) positioning and a joint positioning and tracking framework based on the extended Kalman filter (EKF) to achieve accurate and reliable 3D positioning for industrial IoT systems where anchor locations are not precisely known. They also suggested a position-aided beamforming (PA-BF) approach that outperforms conventional BF in terms of initial access latency and spectral efficiency, especially for UEs moving at speeds greater than 0.6 m/s. Sellami et al. <cit.> proposed a neighbour-aided localization algorithm for outdoor UEs operating in challenging channel conditions. The algorithm selects two neighbours based on reference-signal power measurements, and the BS performs beamforming over an angular interval determined by the calculated distance and angle of arrival (AoA) of the first neighbour to discover two candidates for the UE position. <cit.> provided a location-assisted (LA), direction-of-departure (DOD)-based beamforming technique that is appropriate for wireless communication in high-speed rail (HSR). The algorithm's goal is to modify the phase at the transmitter to increase the output signal-to-noise ratio (SNR) at the receiver. Both ideal DOD beamforming and approximated DOD beamforming subject to location-error-related variation are assessed. The study described in <cit.> suggests LAMA (Location-Assisted Medium Access), a Medium Access Control (MAC) protocol based on locally shared position data for position-aware beaconing. Their contention-free method effectively minimizes interference, especially of the hidden-terminal type, through coordinated spatial reuse and scales well with large numbers of neighbours. In <cit.>, the authors implemented location-aware beamforming and interference mitigation techniques in 5G ultra-dense radio networks to improve the use of space. They also estimated the positioning accuracy limitations of the user equipment using direction-of-arrival measurement processing in three-dimensional space with Cramer-Rao lower bound ellipsoid metrics. Similarly, <cit.> proposed a location-aware beamforming design for the RIS-aided millimetre-wave (mmWave) communication system without the channel estimation process, which took into account the limitations of conventional channel state information (CSI) acquisition techniques for RIS-aided communication systems. They also formulated a worst-case robust beamforming optimization problem to counteract the impact of location inaccuracy on the beamforming design. In <cit.>, a spatial estimation approach based on the theory of observers from the control systems literature was proposed to handle the essential issue of beamforming in UAV-based communications. The authors effectively anticipated the positions of the target UAVs in the presence of uncertainty by using a delay-tolerant observer-based predictor, and the method functioned consistently in the presence of channel blockage and interference. The likelihood of positioning-aided beamforming systems experiencing an outage was investigated in <cit.>.
The authors took into consideration positioning error, link distance, and beamwidth to generate closed-form outage probability constraints. They demonstrated that the beamwidth should be maximized with the transmit power and connection distance to reduce the likelihood of an outage. In <cit.>, a deep learning-based location-aware predictive beamforming technique was proposed to follow the beam for UAV communications in a dynamic environment. They developed a long short-term memory (LSTM)-based recurrent neural network (LRNet) to predict the UAV's expected location, which could be used to calculate a forecast angle between the UAV and the base station for efficient and quick beam alignment. In a multi-cell, multiple input, multiple outputs (MIMO) communication system aided by optical positioning, <cit.> suggested a location-based energy-efficient optimization approach for the beamforming matrix. They increased the system's achievable ergodic rate by estimating the channel coefficient matrix based on the location data. In <cit.>, position-aided beamforming (PABF) architecture was proposed for improved downlink communications in a cloud-oriented mmWave mobile network. The authors demonstrated that the proposed PABF outperformed the traditional codebook-based beamforming in terms of effective transmit ratio and initial access latency, demonstrating its potential to accommodate high-velocity mobile users. Finally, <cit.> proposed an effective beam alignment solution for mmWave band communications by utilizing the mobile user's location data and potential reflectors. The suggested method enabled the base station and mobile user to jointly search a small number of beams within the error bounds of the noisy location information. Additionally, <cit.> proposed a method for beamforming that tracked the spatial correlation of the strong pathways that were currently accessible between the transmitter and the receiver. They demonstrated the robustness of their approach to position information uncertainty and how it could reliably maintain a connection with a user who was travelling along a trajectory. In summary, all the above-mentioned research shows the effectiveness of knowing the user location either static or on the move to leverage the existing cellular communications. The location information can assist in predicting the precoding weights accurately and eventually steering the beam in the direction of the user with low latency and high throughput. §.§ Datasets for Contextual Beamforming Classification [RQ:2] Researchers often need datasets that contain location data, audio signals, and other pertinent elements to identify contextual beamforming approaches. The following are a few examples of datasets that have been applied in earlier research: §.§.§ TUT Acoustic Scenes 2017 dataset: This dataset includes location-specific metadata and audio recordings of diverse acoustic scenes, including street traffic, parks, and retail malls. It has been employed in numerous research to gauge how well contextual beamforming methods function <cit.>.<cit.> introduced the acoustic scene classification task of DCASE 2018 Challenge and the TUT Urban Acoustic Scenes 2018 dataset provided for the task, and evaluates the performance of a baseline system in the task. The TUT Urban Acoustic Scenes 2018 dataset consisted of 10 different acoustic scenes recorded in 6 large European cities, making it more acoustically variable than previous datasets used for this task. 
The dataset included high-quality binaural recordings as well as data recorded with mobile devices. The baseline system, consisting of a convolutional neural network, achieved good performance in the subtasks using the recommended cross-validation setup. Also, <cit.> introduced the TUT Acoustic Scenes 2016 database, which was a collection of binaural recordings from 15 different acoustic environments. A subset of this database, called TUT Sound Events 2016, was annotated to mark sound events. The paper presented the recording and annotation procedure, the database content, and the performance of a supervised acoustic scene classification system and an event detection baseline system. §.§.§ DEMAND dataset: The DEMAND dataset includes audio recordings of metropolitan settings, such as parking lots, train stations, and street traffic. Associated metadata is also included, such as GPS positions and noise levels. Studies assessing the efficacy of contextual beamforming methods have utilised this dataset <cit.>. <cit.> introduced the Diverse Environments Multi-channel Acoustic Noise Database (DEMAND), which was a set of 16-channel noise files recorded in a variety of indoor and outdoor settings. The data was recorded using a planar microphone array consisting of four staggered rows, with the smallest distance between microphones being 5 cm and the largest being 21.8 cm. §.§.§ The CHiME-4 dataset: This collection of audio recordings comprises conversations recorded in places like offices, homes, and cafes. Additionally, it contains metadata like noise levels and position data. Studies evaluating the effectiveness of contextual beamforming methods for enhancing speech recognition accuracy have made use of the CHiME-4 dataset <cit.>. For example, in <cit.>, a deep eigenvector beamformer was proposed as a front-end for robust speech recognition in adverse environments. Data augmentation was performed by modulating the amplitude and time scale of the audio. The proposed system achieved a word error rate of 4.22% on the real development data and 8.98% on the real evaluation data for 6 channels, and 6.45% and 13.69% for 2 channels, respectively. Also, in <cit.>, the CHiME corpus was designed to enable noise-robust speech processing research. The corpus included 40 hours of background recordings from a head and torso simulator in a domestic setting and a comprehensive set of binaural impulse responses. The data had been mixed to produce a controlled and natural range of SNRs for speech separation, enhancement, and recognition algorithms. §.§.§ Dataset for TAU Spatial Sound Events 2019: This collection includes location-related metadata and audio recordings of a variety of spatial sound occurrences, including footsteps, glass breaking, and doors closing. Studies have used it to gauge how well contextual beamforming systems perform at locating and detecting sound events <cit.>. In <cit.>, the STARS22 dataset was a high-resolution dataset of spatial recordings of real scenes with sound event annotations. The dataset was captured with a high-resolution spherical microphone array and delivered in two 4-channel formats, first-order Ambisonics and a tetrahedral microphone array. The TUT Acoustic Scenes 2016 database and its annotated TUT Sound Events 2016 subset, described above, have also been used in this context <cit.>. §.§.§ Vehicular Networks Dataset (VeND): The University of California, Los Angeles (UCLA) created this dataset, which includes observations from a vehicular network testbed. The dataset contains data about the cars and their movements in addition to details about the wireless channel, such as the signal-to-noise ratio (SNR) and the channel impulse response (CIR) <cit.>. <cit.> presented a realistic synthetic dataset covering 24 hours of car traffic in a 400-km^2 region around the city of Cologne, Germany. The dataset captures both the macroscopic and microscopic dynamics of road traffic over a large urban region. Incomplete representations of vehicular mobility may result in over-optimistic network connectivity and protocol performance. §.§.§ 5G-VICTORI: This project, financed by the European Union, aims to create 5G technologies for a range of applications, including vehicular communication. For vehicular communication, the project has created a number of datasets, including assessments of the radio frequency (RF) channel and network performance in practical settings <cit.>. <cit.> discussed how the new 5G network technology would impact the digitalization of various industries, including modern railway transportation. The Future Railway Mobile Communication System (FRMCS) service requirements and system principles were well mapped to 5G concepts, but deployment paradigms needed to be established to prove their effectiveness. The 5G-VICTORI project aimed to deliver a complete 5G solution for railway environments and FRMCS services, and this paper discussed the key performance indicators and technical requirements for an experimental deployment in an operational railway environment in Greece. §.§.§ 5G-EmPOWER: This project, which is also supported by the European Union, aims to develop 5G technology for a range of applications, including vehicular communication. For vehicular communication, the project has created a number of datasets, including assessments of the RF channel and network performance in practical settings <cit.>. 3GPP is embracing the concept of Control-User Plane Separation (a cornerstone concept in SDN) in the 5G core and the Radio Access Network (RAN). An open-source SDN platform for heterogeneous 5G RANs has been introduced, which builds on an open protocol that abstracts the technology-dependent aspects of the radio access elements. The effectiveness of the platform has been assessed through three reference use cases: active network slicing, mobility management, and load-balancing <cit.>. §.§.§ NS-3: The Network Simulator 3 (NS-3) is an open-source network simulator that is useful for simulating and modelling vehicular communication in 5G networks. In addition to mobility models for simulating the movement of vehicles, NS-3 has various built-in modules for modelling the wireless channel <cit.>. <cit.> presented a framework for the ns-3 network simulator for capturing data from inside an experiment, subjecting it to mathematical transformations, and ultimately marshalling it into various output formats.
The application of this functionality is illustrated and analyzed via a study of common use cases. The design presented provides lessons transferrable to other platforms. §.§.§ Connected automobiles and Cities: The National Renewable Energy Laboratory (NREL) created this dataset, which contains information from a field investigation of connected automobiles in a smart city setting. The dataset contains details about, among other things, network performance, traffic flow, and vehicle trajectories<cit.>. In <cit.>, big data from the cellular network of the Vodafone Italy Telco operator can be used to compute mobility patterns for smart cities. Five innovative mobility patterns have been experimentally validated in a real industrial setting and for the Milan metropolitan city. These mobility patterns can be used by policymakers to improve mobility in a city, or by Navigation Systems and Journey Planners to provide final users with accurate travel plans. §.§.§ DeepSense6G: DeepSense 6G is a collection of data that includes different types of sensing and communication information, such as wireless communication, GPS, images, LiDAR, and radar. This data was gathered in real-life wireless environments and represents the world's first large-scale dataset of this kind. The dataset contains over one million samples of this multi-modal sensing-communication data and was collected in over 30 different scenarios to target various applications. The collection of data was done at several indoor and outdoor locations with high diversity and during different times of the day and weather conditions. Additionally, there are tens of thousands of data samples that have been labelled both manually and automatically. Also in <cit.>, the DeepSense 6G dataset is a large-scale dataset based on real-world measurements of co-existing multi-modal sensing and communication data. The DeepSense dataset structure, adopted testbeds, data collection and processing methodology, deployment scenarios, and example applications are detailed in the paper. The paper aims to facilitate the adoption and reproducibility of multi-modal sensing and communication datasets. The researchers (<cit.>) had a 400 GB dataset containing hundreds of thousands of WiFi transmissions collected "in the wild" with different Signal-to-Noise Ratio (SNR) conditions and over different days. They also had a dataset of transmissions collected using their own software-defined radio testbed, and a synthetic dataset of LTE transmissions under controlled SNR conditions. §.§.§ SUMO: SUMO (Simulation of Urban Mobility) is an open-source traffic simulation software that allows modelling and simulating traffic flow in urban areas. It can simulate individual vehicles, pedestrians, public transportation, and various road networks. SUMO has a variety of applications, including traffic planning, intelligent transportation systems, and autonomous driving. A synthetic dataset generator was developed to support research activities in mobile wireless networks. The generator uses traces from the Simulation of Urban MObility (SUMO) simulator and matches them with empirical radio signal quality and diverse traffic models. A dataset was created in an urban scenario in the city of Berlin with more than 6h of duration, containing more than 40000 UEs served by 21 cells <cit.>. The development and evaluation of contextual beamforming techniques for vehicle communication in the 5G network rely heavily on datasets. 
Researchers have utilized various datasets from the literature to evaluate and investigate the effectiveness of these techniques. This systematic review critically analyzes the datasets used by researchers worldwide to incorporate contextual beamforming techniques in vehicular communication. <cit.> presented the 5G3E dataset, which contains thousands of time series related to the observation of multiple resources involved in 5G network operation and was created to support 5G network automation. The variety of collected features ranged from radio front-end metrics to physical-server operating system and network function metrics. The testbed was deployed to support the creation of traffic starting from real traffic traces of a commercial network operator. Another dataset is the 5G trace dataset collected from a major Irish mobile operator, introduced by <cit.>. The dataset was generated from two mobility patterns (static and car) and across two application patterns (video streaming and file download), and is composed of client-side cellular key performance indicators (KPIs) comprising channel-related metrics, context-related metrics, cell-related metrics and throughput information. Additionally, the authors provided a 5G large-scale multi-cell ns-3 simulation framework to supplement their real-time 5G production network dataset. This framework allows other researchers to investigate the interaction between users connected to the same cell through the generation of their own synthetic datasets. <cit.> curated SPEC5G, the first publicly accessible 5G dataset for natural language processing (NLP) research. The dataset contains 134M words in 3,547,587 phrases taken from 13 online websites and 13094 cellular network specifications. The authors used this dataset for security-related text categorization and summarization with large-scale pre-trained language models, and pertinent security-related attributes were also extracted using text classification techniques for protocol testing. Additionally, <cit.> presented a novel mobility dataset generation method for 5G networks based on users' GPS trajectory data. It aggregated each user's GPS trajectories and modelled their location history by a mobility graph representing the cell base stations they passed through. The generated dataset contained the mobility graph records of 128 users; such a user mobility dataset for 5G networks based on GPS geolocation is valuable for predicting user mobility patterns. In another study, <cit.> discussed a methodology for collecting a labelled dataset for a 5G network. A 5G testbed was built to observe 5G network features by replaying collected data, and a specialized network collector system was implemented to gather 5G edge network traffic. The resulting re-collection methodology, using the proposed 5G testbed and network collector, can be used to construct a 5G-based labelled dataset for supervised learning methods. Also, <cit.> used the results of a publicly available measurement campaign of 5G users and analyzed various figures of merit.
The analysis showed that the downlink and uplink rates for static and mobile users can be captured either by a lognormal or a Generalized Pareto distribution, while the time spent in the same cell by a mobile (driving) user is best captured by a Generalized Pareto distribution, and prediction of the number of active users in the cell is possible. Furthermore, <cit.> discussed 5G Tracker, a crowdsourced platform that includes an Android app to record passive and active measurements tailored to 5G networks and research. It has been used for over 8 months to collect more than 4 million data points, consuming over 50 TB of cellular data across multiple 5G carriers in the U.S., and is useful for building a first-of-its-kind interactive 5G coverage-mapping application. 5G performance is affected by several user-side contextual factors such as user mobility level, orientation, weather, location dynamics, and environmental features. Moreover, mobile edge computing in 5G networks, which brings computational and storage technologies closer to end users with strategically deployed and opportunistic processing and storage resources, was discussed by <cit.> as a very attractive computation architecture. That work used data mining and statistical methods (clustering, outlier detection, and prediction) to analyze Baidu cloud-service (API website) data; the analysis results provided insights into the design and development of 5G services and suggested that mobile edge computing in 5G networks can be used to improve the performance of smart city services. Finally, <cit.> discussed a project that uses 5G+ Industrial Internet to collect "three-level" edge-layer data from spare-parts manufacturers, equipment manufacturers, and equipment users. The goal was to establish a unified data standard and help companies build general services around equipment data, with the data assimilated and unified into a single standard; a large-scale, shallow informatization project covering incremental equipment was implemented in the Yangtze River Delta within a short period of time. Also, the accompanying table lists some 5G datasets consisting of channel state information (CSI), phase, received power, etc., that can be used for user localisation. §.§ Optimization techniques for Contextual Beamforming [RQ:3] Contextual beamforming models can be optimised using various strategies to ensure real-time processing. One method to accelerate computations is through hardware acceleration techniques like GPU processing. Additionally, pre-trained models can be used, or the number of parameters in the model design can be reduced. To decrease the model size and boost computational effectiveness, techniques like pruning, quantization, and knowledge distillation can be applied. Improving the feature extraction and input data pre-processing phases can also help with real-time processing.
Creating specialised algorithms and optimisation strategies adapted for certain hardware and deployment conditions can also enhance the performance of contextual beamforming models. Below are some of the possible ways to optimise contextual beamforming models for real-time processing: §.§.§ Model simplification: Simplifying the model architecture, such as reducing the number of layers or the number of neurons in each layer, can improve computational efficiency and reduce processing time. For instance, one study suggests a strategy that uses machine learning and hardware performance-counter data to optimise power and performance for GPU-based systems. The model can accurately detect power-performance bottlenecks and provide optimization techniques for a variety of sophisticated compute and memory-access patterns. The model, which has been validated on NVIDIA Fermi C2075 and M2090 GPUs as well as the Keeneland supercomputer at Georgia Tech, is more reliable and accurate than existing GPU power models <cit.>. §.§.§ Hardware acceleration: Dedicated hardware, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), or application-specific integrated circuits (ASICs), can speed up the processing of contextual beamforming models by performing parallel computations. Beamforming is a signal processing technique that combines signals from a number of receivers to determine the direction of incoming signals. Although it overcomes noise interference, adaptive beamforming (ABF) is computationally expensive, but it can be implemented in parallel on current graphics processing units (GPUs). One reported method parallelizes ABF on an NVIDIA GPU; its throughput is lower than that of the serial implementation but can still be improved <cit.>. §.§.§ Optimization techniques: Various optimization techniques, such as weight pruning, quantization, and knowledge distillation, can be applied to contextual beamforming models to reduce their computational complexity and memory footprint without significant loss in accuracy. The research in <cit.> explores the use of a deep neural network (DNN) model as the teacher to train recurrent neural networks (RNNs), specifically long short-term memory (LSTM) networks, for automatic speech recognition (ASR). The method successfully trains RNNs without the use of additional learning methods, even with a small amount of training data. §.§.§ Preprocessing and postprocessing: Preprocessing the input data to reduce its dimensionality or complexity, and postprocessing the output data to refine the results or reduce noise, can help improve the performance and efficiency of contextual beamforming models. Using deep learning, a low-complexity precoding design approach for multiuser MIMO systems is suggested in <cit.>. The suggested method uses techniques such as input dimensionality reduction, network pruning, and recovery-module compression to achieve performance comparable to the conventional WMMSE algorithm at relatively little computational cost. §.§.§ Real-time learning: Using online or incremental learning algorithms, instead of offline or batch learning, can enable contextual beamforming models to adapt to changing conditions in real time and reduce the need for frequent retraining. In <cit.>, two adaptive learning approaches, ADAM and RAL, are proposed for the real-time detection of network attacks in Internet traffic.
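As a concrete and deliberately generic illustration of the pruning and quantization ideas above, the following PyTorch sketch compresses a small stand-in network that maps channel features to per-antenna weights. The layer sizes, the 50% sparsity level, and the use of dynamic int8 quantization are illustrative assumptions, not a recipe taken from any of the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in network mapping a real-valued channel feature vector
# to per-antenna beamforming weights (sizes are illustrative assumptions).
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 64),
)

# 1) Magnitude pruning: zero out 50% of the smallest weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # make the pruning permanent

# 2) Dynamic quantization: store Linear weights in int8 for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Inference on a dummy channel feature vector.
with torch.no_grad():
    weights = quantized(torch.randn(1, 128))
```

In a real deployment the pruned and quantized model would normally be re-evaluated against the full-precision one to check that the beamforming accuracy loss is acceptable.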
These adaptive approaches (ADAM and RAL) achieve excellent detection accuracy even in the presence of concept drift by dynamically learning from and adapting to non-stationary data streams while lowering the demand for labelled data. §.§.§ Model parallelism: Breaking the model into smaller sub-models and processing them in parallel can improve the overall processing speed of contextual beamforming models. This can be done using techniques such as data parallelism or model parallelism. With a focus on model and data parallelization, <cit.> addresses distributed machine learning architecture and topology. It analyses machine learning algorithms and offers parallelization suggestions, although the specific needs and demands of communication networks, such as resource allocation and trade-offs between privacy and security, are not addressed. §.§.§ Early termination: Stopping the model's processing early when a certain threshold is reached can reduce unnecessary computation, especially in cases where the output has already converged. <cit.> promotes early stopping before convergence to prevent overfitting and suggests the use of cross-validation to detect overfitting during neural network training. The study uses multi-layer perceptrons with RPROP training to assess the effectiveness and efficiency of 14 distinct automatic stopping criteria from three classes on a variety of tasks. The findings indicate that slower stopping criteria slightly improve generalisation, although training time often increases by a factor of four. The choice of optimization techniques will depend on the specific requirements of the application and the constraints of the hardware platform. A combination of these techniques can be used to achieve the best balance between performance and efficiency for the real-time processing of contextual beamforming models. §.§ AI, ML and DL approaches for Contextual Beamforming [RQ:4] AI is a term that encompasses a vast array of techniques and technologies that grant machines the ability to perform tasks that would usually demand human intelligence, such as learning, problem-solving, and decision-making. Within AI, there are two primary categories: narrow or weak AI and general or strong AI. Narrow or weak AI is designed to accomplish specific tasks, while general or strong AI strives to create machines capable of performing any cognitive task a human can do. In contrast, machine learning (ML) is a subfield of AI that specializes in the development of algorithms and statistical models that enable machines to improve their performance on a task over time by learning from data. ML algorithms can be classified into three primary categories: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the algorithm is trained on labelled data, where the correct output is already known, to forecast new outputs for unseen data. In unsupervised learning, the algorithm is trained on unlabelled data to identify patterns or structures in the data. In reinforcement learning, the algorithm learns through trial and error, receiving feedback in the form of rewards or penalties based on its actions. Regarding beamforming, AI can refer to any technique that allows machines to enhance the quality or efficiency of beamforming by learning from data, making predictions or decisions based on that data, and adapting to changing conditions.
Machine learning is a specific subset of AI that utilizes algorithms and statistical models to enable machines to learn from data without explicit programming. The amalgamation of beamforming and artificial intelligence (AI) represents a compelling advancement in signal processing and communication systems. Beamforming, a technique used in radio communications and signal processing, involves the direction of a signal toward a particular location or direction. By integrating signals from multiple antennas, beamforming amplifies signals in the desired direction while suppressing interference from other directions. On the other hand, AI employs computational algorithms and computer programs that can learn from existing data to make decisions or predictions. AI finds application in various domains, including natural language processing (NLP), image recognition, and robotics. By combining beamforming and AI, communication systems can witness remarkable improvements in their performance. AI algorithms can scrutinize signals received by multiple antennas and determine the optimal beamforming configuration for a given situation. This can result in enhanced signal quality and reduced interference. Additionally, AI can dynamically adjust beamforming parameters in response to current environmental and signal characteristics using reinforcement learning or other AI techniques, which is particularly useful in complex and dynamic environments where traditional beamforming techniques may struggle to adapt. AI is also beneficial for optimizing beamforming algorithms themselves by adjusting the parameters employed to combine signals from different antennas. This can enhance the accuracy and efficiency of the beamforming process, leading to more reliable communication. Furthermore, beamforming and AI can significantly improve the performance of communication systems in various applications, from cellular networks to satellite communication systems. Recent research has delved into AI-assisted contextual beamforming, which can be optimized using AI algorithms to filter out unwanted noise from the signal or to automatically identify the location of a sound source. This can be accomplished by training models on datasets of sound signals and corresponding locations and using the models to predict the location of new sound sources or to identify and remove noise from new signals. By pointing the microphone array towards the predicted location, sound can be captured more effectively. In multi-user multiple-input-single-output (MISO) systems, beamforming is a useful way to improve the quality of incoming signals. Traditionally, finding the best beamforming solution has relied on iterative techniques, which have significant processing delays and are unsuitable for real-time applications <cit.>. With recent advancements in deep learning (DL) algorithms, identifying the best beamforming (BF) solution in real-time while taking into account both performance and computational delay has become possible. This is accomplished by offline training of neural networks before online optimization, allowing the trained neural network to identify the optimal BF solution. This approach reduces computational complexity during online optimization, requiring only simple linear and nonlinear operations <cit.>. Figure <ref> illustrates the neural network architecture for BF, which comprises input, neural layers, and output to extract features for further processing. 
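To make the offline-training / fast-online-inference idea concrete, the sketch below trains a small network that maps an estimated MISO channel vector to a unit-power beamforming vector by directly maximizing the resulting SNR. It is only an illustration of the generic pipeline just described: the Rayleigh channel model, layer sizes, and training loop are assumptions, not the design of any specific cited work.

```python
import torch
import torch.nn as nn

N_TX = 16  # number of transmit antennas (assumed)

class BeamformerNet(nn.Module):
    """Maps a complex channel h (stacked real/imag parts) to a unit-norm beamformer w."""
    def __init__(self, n_tx=N_TX):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_tx, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 2 * n_tx),
        )

    def forward(self, h_ri):
        w_ri = self.net(h_ri)
        scale = w_ri.norm(dim=-1, keepdim=True)            # enforce unit transmit power
        return torch.complex(w_ri[..., :N_TX], w_ri[..., N_TX:]) / scale

def snr_loss(w, h, noise_power=1.0):
    """Negative received SNR |h^H w|^2 / sigma^2, averaged over the batch."""
    gain = torch.abs(torch.sum(h.conj() * w, dim=-1)) ** 2
    return -(gain / noise_power).mean()

model = BeamformerNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Offline training on synthetic Rayleigh channels (assumed distribution).
for _ in range(200):
    h = (torch.randn(256, N_TX) + 1j * torch.randn(256, N_TX)) / 2 ** 0.5
    h_ri = torch.cat([h.real, h.imag], dim=-1)
    loss = snr_loss(model(h_ri), h)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Online use: a single forward pass per new channel estimate.
h_new = (torch.randn(1, N_TX) + 1j * torch.randn(1, N_TX)) / 2 ** 0.5
w_new = model(torch.cat([h_new.real, h_new.imag], dim=-1))
```

For a single-user MISO link the closed-form optimum is maximum ratio transmission (w proportional to h), so the network's output can be sanity-checked against that solution; the value of the learned mapping lies in the low-latency inference once training has been done offline.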
In complicated indoor or outdoor contexts with multiple pathways, propagation loss, noise, and Doppler effects create additional issues. Chong Liu's approach involves employing a machine learning regression method based on efficient BF transmission patterns to predict the position of users on the move, following the collection of large volumes of Line-of-Sight (LOS) and Non-Line-of-Sight (NLoS) data <cit.>. In the domain of location estimation, Bhattacharjee et al. presented two distinct approaches for training neural networks, one using channel parameters as features and the other using a channel response vector, and evaluated the results using preliminary computer simulations <cit.>. The same group also conducted experimental work on the localization of drones and other application areas using different approaches <cit.>. Wang et al. proposed a weighted loss function to enhance the performance of localization with sparse sensor layouts, achieving an accuracy boost of over 50% <cit.>. We also presented results for future location estimation of mobile users using a deep neural network in <cit.>. Contextual Beamforming in 5G vehicle communication has been implemented using various ML and AI techniques. The performance and precision of beamforming systems, which are essential for efficient communication in moving situations, are to be improved by these techniques. ML has the potential to significantly advance 5G technology, as evidenced by the growing complexity of constructing cellular networks. Deep learning has demonstrated effectiveness in ML tasks like speech recognition and computer vision, with performance growing as more data is accessible. The proliferation of deep learning applications in wireless communications is constrained by the scarcity of huge datasets. To create channel realisations that accurately depict 5G scenarios with mobile transceivers and objects, this study describes an approach that combines a car traffic simulator with a raytracing simulator. The following section of the review offers a unique dataset along with various ML as well as AI techniques used for examining millimetre wave beam selection methods for car-to-infrastructure communication. The application of datasets produced with the suggested methodology is demonstrated by experiments including deep learning in classification, regression, and reinforcement learning problems <cit.>. §.§.§ Deep Learning Techniques: The extraction of valuable characteristics from input signals and the provision of more precise predictions have been accomplished using deep learning techniques like CNNs and RNNs. For instance, Wang et al. used deep learning to simplify beamforming weight estimation in 5G systems. They developed a channel model and trained convolutional neural networks on generated data. The networks predicted beamforming weights based on channel data, reducing complexity. Results show the potential of deep learning for digital and hybrid beamforming, and performance comparison with conventional techniques was presented <cit.>. Also, <cit.> proposed a method that aims to improve the performance of Random Forest, Multilayer Perceptron, and k-Nearest Neighbors classification models by increasing the amount of data through synthetic data inclusion. Their experimental results showed that the inclusion of synthetic data improved the macro F1 scores of the models. 
The Random Forest, Multilayer Perceptron, and k-Nearest Neighbors achieved macro F1 scores of 0.9341, 0.9241, and 0.9456, respectively, which are higher than those obtained with the original data only. <cit.> proposed a deep learning-based fast-beamforming design method for sum rate maximization under a total power constraint. The method was trained offline using a two-step training strategy. Simulation results demonstrated that the proposed method is fast while obtaining a comparable performance to the state-of-the-art method. They derived a heuristic solution structure of the downlink beamforming through the virtual equivalent uplink channel based on the optimum MMSE receiver. BPNet is designed to perform the joint optimization of power allocation and VUB design and is trained offline using a two-step training strategy. A DL-enabled beamforming neural network (BFNN) is proposed which can optimize the beamformer to attain better spectral efficiency. Simulation findings reveal that the proposed BFNN achieves significant performance gain and high robustness to imperfect CSI. The proposed BFNN greatly decreases the computational complexity compared to conventional BF algorithms. Spectral Efficiency, Performance Gain, Robustness To Imperfect Csi, and Computational Complexity (Measured In Floating Point Operations) are the main outcomes of BFNN <cit.>. <cit.> proposed a beamforming neural network (BNN) for the power minimization problem in multi-antenna communication systems. The BNN was based on convolutional neural networks and the exploitation of expert knowledge. It achieved satisfactory performance with low computational delay. A deep fully convolutional neural network (CNN) was used for beamforming, providing considerable performance gains. The CNN was trained in a supervised manner considering both uplink and downlink transmissions with a loss function based on UE receiver performance. The neural network predicted the channel evolution between uplink and downlink slots and learned to handle inefficiencies and errors in the whole chain, including the actual beamforming phase <cit.>. A deep learning model that learns how to use these signatures to predict the beamforming vectors at the BSs. <cit.> discussed a novel integrated machine learning and coordinated beamforming solution to support highly-mobile mmWave applications. The solution used a deep learning model to learn how to use signatures to predict the beamforming vectors at the base stations. This rendered a comprehensive solution that supports highly mobile mmWave applications with reliable coverage, low latency, and negligible training overhead. <cit.> proposed a deep learning–based energy beamforming scheme for a multi-antennae wireless powered communication network (WPCN). We used offline training for the deep neural network (DNN) to provide a faster solution to the real-time resource allocation optimization problem. Simulation results showed that the proposed DNN scheme provided a fair approximation of the traditional sequential parametric convex approximation (SPCA) method with low computational and time complexity. §.§.§ Supervised ML Techniques: Different sorts of acoustic environments have been classified and predicted using supervised learning methods like SVMs and decision trees. A modified SVM technique is proposed for 3D MIMO beamforming in 5G networks. The Advanced Encryption Standard algorithm is employed for more security, and interference is reduced in two stages. 
The suggested ML-3DIM method outperforms existing methods in terms of throughput, SINR, and SNR by up to 20%, 30%, and 35%, respectively, according to simulation results <cit.>. <cit.> investigated the machine learning-based beamforming design in two-user MISO interference channels. It proposed a machine learning structure that takes transmit power and channel vectors as input and then recommends two users' choices between MRT and ZF as output. The numerical results showed that our proposed machine learning-based beamforming design well finds the best beamforming combination and achieved a sum rate of more than 99.9% of the best beamforming combination. <cit.> introduced an SVM-based approach for linear array processing and beamforming. It showed how the new minimization approach can be applied to the problem of linear beamforming. BER performance of LS and SVM for different noise levels from 0 to 15 dB. A machine learning (ML) beamforming approach based on the k-nearest neighbours ( k -NN) approximation has been considered, which was trained to generate the appropriate beamforming configurations according to the spatial distribution of throughput demand. Performance was evaluated statistically, via a developed system-level simulator that executes Monte Carlo simulations in parallel. The ML-assisted beamforming framework achieved up to 5 Mbits/J and 36 bps/Hz in terms of energy efficiency (EE) and spectral efficiency (SE), respectively, with reduced hardware and algorithmic complexity <cit.>. BeamMaP was a beamforming-based machine learning model for positioning in massive MIMO systems. Simulation results showed that BeamMaP achieved Reduced Root-Mean-Squared Estimation Error (RMSE) performance with an increasing volume of training data. BeamMaP was more efficient and steady in the positioning system compared with kNN and SVM <cit.>. <cit.> discussed a machine learning method for beamforming at the receiver side antennas for deploying Line-of-Sight (LOS) communication in Satellite Communication (Satcom). It described how the antenna array weights are pre-calculated for a number of beam directions and kept as a database. The signal weights that were calculated for each array element by using their progressive measured phase difference were due to the arriving signal, which was given as input to a linear regression machine learning model and the direction of arrival (DOA) of the signal is predicted. A method for determining an appropriate precoder from the knowledge of the user’s location only was proposed. The proposed method involved a neural network with a specific structure based on random Fourier features allowing to learn functions containing high spatial frequencies. The proposed method was able to handle both line-of-sight (LOS) and non-line-of-sight (NLOS) channels <cit.>. §.§.§ Unsupervised ML Techniques: Unsupervised learning methods like clustering and PCA have been used to spot trends and put related data points in one category. For instance, <cit.> proposes a beamforming algorithm for fifth-generation and later communication systems. The approach combines the benefits of conventional optimization-based beamforming techniques with deep learning-based techniques. To create initial beamforming, a novel neural network architecture is proposed, and performance is increased by building a deep unfolding module. The entire algorithm is unsupervised and trained, and simulation results demonstrate enhanced performance and reduced computing complexity when compared to current approaches. 
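As a toy illustration of how unsupervised learning can be used in this setting (and not a reproduction of any cited method), the sketch below clusters synthetic single-path channel vectors with k-means and uses the normalized cluster centroids as a learned beam codebook; the channel model, array size, and codebook size are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_tx, n_samples, codebook_size = 16, 2000, 8

# Synthetic single-path channels h = a(theta) with random angles (assumed model).
thetas = rng.uniform(-60, 60, n_samples)
n = np.arange(n_tx)
H = np.exp(-1j * np.pi * np.outer(np.sin(np.deg2rad(thetas)), n))

# k-means on stacked real/imag parts (a simple workaround for complex-valued data).
X = np.hstack([H.real, H.imag])
km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(X)

# Normalized centroids form the unsupervised beam codebook.
C = km.cluster_centers_[:, :n_tx] + 1j * km.cluster_centers_[:, n_tx:]
codebook = C / np.linalg.norm(C, axis=1, keepdims=True)

# Beam selection: pick the codeword with the largest |f^H h| for a new channel.
h_new = np.exp(-1j * np.pi * np.sin(np.deg2rad(25.0)) * n)
best_beam = int(np.argmax(np.abs(codebook.conj() @ h_new)))
```

The same idea generalizes to richer channel models, where the learned codebook adapts to the deployment-specific angular distribution of users instead of using a uniform DFT codebook.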
<cit.> proposed a novel unsupervised learning approach to design the hybrid beamforming for any subarray structure while supporting quantized phase shifters and noisy CSI. No beamforming codebook was required, and the neural network is trained to take into account the phase-shifter quantization. Simulation results showed that the proposed deep learning solutions can achieve higher sum rates than existing methods. §.§.§ Reinforcement ML Techniques: Contextual Beamforming system performance has been enhanced using reinforcement learning techniques like Q-learning and policy gradient methods. For instance, to make network design and maintenance more straightforward, a brand-new intelligent algorithm for massive MIMO beamforming performance optimisation is proposed in this research. To produce accurate user mobility patterns, pertinent antenna designs, and an estimate of the effectiveness of the generated antenna diagrams, the system uses three neural networks that apply deep adversarial reinforcement learning workflow. This method has the advantage of learning independently and without requiring big training datasets <cit.>. <cit.> investigated the use of deep reinforcement learning to predict coordinated beamforming strategy in an ultra-dense network. It was found that the optimal solution is a balanced combination of selfish and altruistic beamforming. The beamforming vectors were obtained efficiently through the learned balancing coefficients. A reinforcement learning (RL) based algorithm for cognitive beamforming was proposed for multi-target detection in massive multiple input multiple outputs (MMIMO) cognitive radars (MMIMO CR). The proposed RL-based algorithm outperformed the conventional omnidirectional approach with equal power allocation in terms of target detection performance. The performance improvement was even more remarkable under environmentally harsh conditions such as low SNR, heavy-tailed disturbance and rapidly changing scenarios <cit.>. <cit.> proposed a blind beam alignment method based on RF fingerprints of user equipment obtained from base stations. They used deep reinforcement learning on a multiple-base station cellular environment with multiple mobile users. Achieved a data rate of up to four times the data rate of the traditional method without any overheads. <cit.> proposed a novel multiagent reinforcement learning(MARL) formulation for codebook-based beamforming control. It took advantage of the inherently distributed structure in a wirelessly powered network and laid the groundwork for fully locally computed beam control algorithms. A cognitive beamforming algorithm based on Reinforcement Learning (RL) framework is proposed for colocated MIMO radars. The proposed RL-based beamforming algorithm is able to iteratively sense the unknown environment and synthesize a set of transmitted waveforms tailored to the acquired knowledge. The performance of the proposed RL-based beamforming algorithm is assessed in terms of Probability of Detection (P_D) <cit.>. <cit.> proposed a reinforcement learning (RL) approach called combinatorial multi-armed bandit (CMAB) framework to maximize the overall network throughput for multi-vehicular communications. They proposed an adaptive combinatorial Thompson sampling algorithm, namely adaptive CTS, and a sequential Thompson sampling (TS) algorithm for the appropriate selection of simultaneous beams in a high-mobility vehicular environment. 
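The bandit flavour of such reinforcement-learning beam selection can be sketched with a toy epsilon-greedy learner over a fixed beam codebook, shown below. This is not the adaptive CTS algorithm of the cited work; the reward model (a noisy per-beam SNR proxy) and the exploration schedule are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_beams, epsilon, horizon = 16, 0.1, 2000

# Unknown mean reward (e.g., a normalised received SNR) of each beam -- assumed.
true_mean = rng.uniform(0.2, 1.0, n_beams)

q = np.zeros(n_beams)        # running reward estimate per beam
counts = np.zeros(n_beams)   # how often each beam has been tried

for t in range(horizon):
    if rng.random() < epsilon:                 # explore a random beam
        beam = int(rng.integers(n_beams))
    else:                                      # exploit the current best estimate
        beam = int(np.argmax(q))
    reward = true_mean[beam] + 0.05 * rng.standard_normal()   # noisy feedback
    counts[beam] += 1
    q[beam] += (reward - q[beam]) / counts[beam]              # incremental mean update

best_beam = int(np.argmax(q))
```

Thompson sampling, as used in the cited vehicular work, replaces the epsilon-greedy rule with posterior sampling over the per-beam reward estimates, which typically explores more efficiently in high-mobility scenarios.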
Simulation results showed that both of the proposed Thompson sampling strategies approach the optimal achievable rate of the genie-aided solution. §.§.§ Hybrid Techniques: Contextual beamforming system performance has also been improved using hybrid methods that incorporate several ML and AI techniques, such as deep reinforcement learning. Deep reinforcement learning has been suggested to address the hybrid beamforming problem in massive MIMO systems; the suggested techniques reduce computing complexity while achieving spectral efficiency close to the ideal <cit.>. Hybrid beamforming, which combines digital baseband precoders with analogue RF phase shifters, is an effective technique for millimetre wave (mmWave) communications and massive multiple-input multiple-output (MIMO) systems. Machine learning techniques can be used to improve the achievable spectral efficiency of hybrid beamforming systems: the proposed two-step algorithm can attain almost the same efficiency as that achieved by fully digital architectures <cit.>. <cit.> described the design of ML-based hybrid beamforming for multiple users in systems that use millimetre waves (mmWaves) and massive MIMO architectures. The simulation results showed that the ML-based hybrid beamforming architecture can achieve the same spectral efficiency (bits/sec/Hz) as fully digital beamforming designs with negligible error for both single-user and multi-user massive-MIMO scenarios. <cit.> proposed a novel RSSI-based unsupervised deep learning method to design the hybrid beamforming in massive MIMO systems. They proposed a method to design the synchronization signal (SS) in initial access (IA) and a method to design the codebook for the analog precoder. They showed that the proposed method not only greatly increases the spectral efficiency, especially in frequency-division duplex (FDD) communication using partial CSI feedback, but also achieves a near-optimal sum rate and outperforms other state-of-the-art full-CSI solutions. Deep neural networks (DNNs) can be used to approximate the singular value decomposition (SVD) and design hybrid beamformers. DNN-based hybrid beamforming improved rates by up to 50-70% compared to conventional hybrid beamforming algorithms and achieved a 10-30% gain in rates compared with state-of-the-art ML-aided hybrid beamforming algorithms, with low time complexity and memory requirements <cit.>. A federated learning (FL) based framework for hybrid beamforming was also proposed, where the model training was performed at the base station (BS) by collecting only the gradients from the users. A convolutional neural network was designed in which the input was the channel data, yielding the analog beamformers at the output. FL was demonstrated to be more tolerant to imperfections and corruptions in the channel data as well as having less transmission overhead than centralized machine learning (CML) <cit.>. § CHALLENGES ASSOCIATED WITH USING AI, ML AND DL TECHNIQUES FOR CONTEXTUAL BEAMFORMING Contextual beamforming is a technique used in signal processing and communication systems to improve the quality of sound or data transmission by focusing the transmitted or received signals in a specific direction or area of interest. Machine learning (ML) techniques have been increasingly used to optimize the performance of contextual beamforming systems.
However, there are several challenges associated with using ML techniques for contextual beamforming: * Lack of training data: ML techniques require a large amount of data to be trained effectively. However, in contextual beamforming, it may be difficult to collect enough data that accurately represents the various environments and scenarios in which the system will be used. This can result in underfitting or overfitting of the model, leading to poor performance. * Complexity of the models: ML models used for contextual beamforming can be quite complex, with many parameters that need to be tuned. This can make the training process difficult and time-consuming, and can also increase the risk of overfitting. * Robustness to environmental changes: Contextual beamforming systems need to be robust to changes in the environment, such as changes in noise levels or the location of sound sources. ML models may not be able to adapt to these changes quickly enough, resulting in reduced performance. * Limited interpretability: ML models can be difficult to interpret, which can make it hard to understand why the system is behaving in a certain way or to diagnose problems when they occur. * Limited generalizability: ML models trained on one set of data may not generalize well to other datasets or environments. This can limit the applicability of the system in real-world scenarios. To address these challenges, researchers are exploring new techniques such as transfer learning, which involves pretraining models on large datasets and then fine-tuning them on smaller, task-specific datasets. They are also working on developing more interpretable ML models and incorporating robustness and adaptability into the models. These challenges have been discussed in several research papers, including Another study proposed by <cit.> discussed a novel approach to enhance the Feature Engineering and Selection (eFES) Optimization process in ML. eFES was built using a unique scheme to regulate error bounds and parallelize the addition and removal of a feature during training. Results showed the promising state of eFES as compared to the traditional feature selection process. A weak convolutional network can be used to provide rough label maps over the neighbourhood of a pixel. Incorporating this weak learner in a bigger network can improve the accuracy of state-of-the-art architectures. The approach in <cit.> was generic and can be applied to similar networks where contextual cues are available at training time. A Multicriteria technique has been developed that allows for the control of feature effects on the model’s output. Knowledge functions have been integrated to accommodate for more complex effects and local lack of information. A Deep Learning training process that was both interpretable and compliant with modern legislation has been developed by <cit.>. <cit.> proposed a technique to improve the interpretability in transfer learning tasks by defining interpretable features. They examined the interpretability of transfer learning by applying a pre-trained model with defined features to Korean character classification. Feature Network (FN) consisted of Feature Extraction Layer and a single mapping layer that connected the features extracted from the source domain to the target domain. Also, <cit.> proposed an actor-critic model that allowed better generalization across goals and scenes. 
The AI2-THOR framework enabled agents to take actions and interact with objects, allowing for efficient collection of training samples. The model converged faster than state-of-the-art deep reinforcement learning methods, generalized to real robot scenarios with minimal fine-tuning, and is end-to-end trainable. § POTENTIAL APPLICATIONS OF CONTEXTUAL BEAMFORMING Some potential applications of contextual beamforming for 5G technology include: * Improved coverage: Contextual beamforming can help extend the coverage of 5G networks by focusing the transmission beam towards the receiver. This can help overcome obstacles such as buildings and trees that may obstruct the signal. * Higher data rates: By directing the signal towards the receiver, contextual beamforming can help increase the data rates of 5G networks. This can enable faster downloads and uploads, as well as smoother streaming of high-definition content. * Reduced interference: Contextual beamforming can help reduce interference from other devices or networks by steering the transmission beam away from sources of interference. This can improve the reliability and quality of 5G connections. * Energy efficiency: By directing the transmission beam towards the receiver, contextual beamforming can reduce the amount of energy required to transmit the signal. This can help improve the energy efficiency of 5G networks, which is an important consideration for mobile devices that rely on battery power. § OUR CONTRIBUTION TOWARDS LOCALISATION AND BEAMFORMING Optimal BF strikes a balance between delivering maximum power to a single user and decreasing or eliminating signal interference at other users. When the maximum ratio transmission (MRT) BF technique is used in an MU-MIMO system, the transmitter transmits a beam to every user according to its weight vector. The resultant power received by each user for the signal intended for that user is calculated as the product of the channel gain and the weight vector. Because the MIMO system transmits to multiple users at the same frequency, a critical performance metric for the system is the signal-to-interference-plus-noise ratio (SINR) for each user. This concept has been demonstrated in <cit.>, which shows how the SINR can be significantly improved, by 28.83 dBm and/or 53%, by using MRT in comparison with no BF. We have also published some work in the context of localization in <cit.>. These works show how location datasets can be extracted through ray-tracing tools and how the data can be utilized for location prediction using deep neural networks. In our recent paper (<cit.>), the use of the core network to enable beamforming after calculating the direction of arrival (in simple terms, the user's location) at the MAC layer is discussed. The basic concept of this paper is shown in Figure <ref>. § CONCLUSION In this study, we provide an overview of advanced adaptive BF in which artificial intelligence techniques such as deep learning (DL) can be used. More importantly, we have shown that with access to contextual information such as prior user location, a wireless network's performance can be improved through deep learning techniques. With the development of exciting new technologies such as edge computing and federated learning, we believe the next generation of mobile networks will unlock new opportunities.
Communication systems will continue to evolve as closed-loop systems in which data extracted by observing a mobile user will be exploited to improve connectivity and network performance, such as the signal-to-noise ratio (SNR). We have touched upon some of the studies already underway that harness a user's location, and we have outlined a DL-enabled contextual beamforming strategy that can improve the SNR by 53% on average. IEEEtran
http://arxiv.org/abs/2307.03177v2
20230706175702
IPO-LDM: Depth-aided 360-degree Indoor RGB Panorama Outpainting via Latent Diffusion Model
[ "Tianhao Wu", "Chuanxia Zheng", "Tat-Jen Cham" ]
cs.CV
[ "cs.CV" ]
[ IPO-LDM: Depth-aided 360-degree Indoor RGB Panorama Outpainting via Latent Diffusion Model Tianhao Wu^⋆, Chuanxia Zheng^†, and Tat-Jen Cham^⋆ ^⋆Nanyang Technological University, ^†University of Oxford August 1, 2023 ========================================================================================================================= ] Generating complete 360° panoramas from narrow field of view images is ongoing research as omnidirectional RGB data is not readily available. Existing GAN-based approaches face some barriers to achieving higher quality output, and have poor generalization performance over different mask types. In this paper, we present our 360° indoor RGB panorama outpainting model using latent diffusion models (LDM), called . We introduce a new bi-modal latent diffusion structure that utilizes both RGB and depth panoramic data during training, but works surprisingly well to outpaint normal depth-free RGB images during inference. We further propose a novel technique of introducing progressive camera rotations during each diffusion denoising step, which leads to substantial improvement in achieving panorama wraparound consistency. Results show that our not only significantly outperforms state-of-the-art methods on RGB panorama outpainting, but can also produce multiple and diverse well-structured results for different types of masks. The code will be released soon. And our project page is at <https://sm0kywu.github.io/ipoldm/>. § INTRODUCTION Omnidirectional 360° RGB panoramas are helpful for various applications, such as lighting estimation <cit.> and new scene synthesis <cit.> in AR and VR. An obvious limitation, however, is that capturing, collecting and restoring such extensive datasets with 360° images is a high-effort and high-cost undertaking <cit.>, while manually creating a 3D space from scratch can be a demanding task <cit.>. To reduce the cost of collecting large 360° datasets, the latest learning methods <cit.> have been proposed, focusing on generating omnidirectional RGB panoramas from narrow field of view (NFoV) images. These methods are typically built upon Generative Adversarial Networks (GANs) <cit.>, which have achieved remarkable success in creating new content. However, GAN architectures face some notable problems, including 1) mode collapse (seen in Fig. <ref>(c)), 2) unstable training <cit.>, and 3) difficulty in generating multiple structurally reasonable objects <cit.>, which hinder its performance on synthesizing complex scenes (Fig. <ref>). In this paper, we propose an alternative method for 360° indoor RGB panorama outpainting via the latest latent diffusion models (LDMs) <cit.>, called . An important insight here is that a diffusion model directly adds noise to the spatial images or features through a Markov Chain over T steps, which results in a stable training of a generative model with consistent spatial resolution in each step. This characteristic is critical in our 360° panorama scenario, as it preserves the spatial information necessary for generating structurally reasonable objects. Although recent works have already applied diffusion models in image inpainting tasks <cit.>, it remains a challenge to directly apply them in our setting. 
Unlike previous inpainting works <cit.>, generating a 360° panorama from an NFoV image faces greater challenges: 1) the outpainting mask is significantly larger than traditional inpainting and 2) semantically reasonable objects have to be generated within the scene, instead of filling in with generic background textures which will create empty rooms (as shown in Fig. <ref> (c)). To achieve this, we creatively introduce depth information when training the diffusion model to aid the RGB generation. Our key motivation for doing so is that the depth information is crucial for helping the network understand the physical structure of objects and the layout of the scene <cit.>. Conversely, our proposed does not depend on depth input at all during inference (Fig. <ref>(b)), which enables applications such as panoramic outpainting from normal photos taken by casual users. Despite this lack of depth input, when compared to the state-of-the-art BIPS <cit.>, which uses depth for both training and testing, our method was able to achieve significant improvement on RGB outpainting (as seen in Fig. <ref>). While we recognize BIPS as being more focused on outpainting RGB-D panoramas, it is nonetheless rather interesting that even in the extreme case when the full ground-truth depth is provided to BIPS, and no depth is provided to our method, it is still able to achieve generally better RGB outpainting performance (Table <ref>). Another challenge of this task is the unique characteristic of panorama images: 3) the two ends of the image must be aligned to ensure the integrity and wraparound consistency of the entire space, the indoor scene itself does not have a beginning and an end. We present two strategies to enhance this property in the generated results. During the training process, a camera-rotation approach is used to randomly crop and stitch the images for data augmentation (Fig. <ref>). It encourages the networks to capture information from different views in a 360° panorama. More importantly, a two-end alignment mechanism is applied at each step of the denoising process (Fig. <ref>), which explicitly enforces the two ends of an image to be wraparound-consistent. We evaluate the proposed method on the Structured3D dataset <cit.>. Experimental results demonstrate that our not only significantly outperforms previous state-of-the-art 360° RGB panorama outpainting or inpainting methods, but is also able to provide multiple and diverse well-structured results for different types of masks (Fig. <ref>). In summary, our main contributions are as follows: * A new bi-modal latent diffusion structure that utilizes both RGB and depth panoramic data to better learn spatial layouts and patterns during training, but works surprisingly well to outpaint normal depth-free RGB images during inference; * A novel technique of introducing progressive camera rotations during each diffusion denoising step, which leads to substantial improvement in achieving panorama wraparound consistency; * Our not only significantly outperforms state-of-the-art methods on RGB panorama outpainting, but can also produce diverse well-structured results for different mask types. § RELATED WORK §.§ Image Inpainting/Outpainting Early image inpainting approaches <cit.> tended to focus on mining the input image for low-level patterns or clues to fill in missing areas. However, these methods often assume that the missing patches can be replicated from visible parts. 
These models usually do not work well in generating new content, nor capable of performing large-scale completion. Since the introduction of GANs <cit.>, most in/outpainting methods <cit.> started adopting the encoder-decoder + adversarial training approach, resulting in substantial progress. More recently, the denoising diffusion probabilistic model (DDPM) is an emerging alternative for generative modeling <cit.>, outperforming even GAN-based methods <cit.> for image synthesis. When applied to image outpainting, we can consider methods to fall into the two categories below: Mask-conditional. Here the DDPM is trained to outpaint conditioned on the input mask. The image is typically masked before the denoising stage, with the DDPM then trained to generate results that are visually consistent with the original image. A disadvantage is that such approaches can be sensitive to the training mask distribution, responding poorly to out-of-distribution masks. Unconditional. RePaint <cit.> used an unconditionally trained DDPM. Specifically, instead of learning a mask-conditional generative model, the generative process is conditioned by sampling from the given pixels during the reverse diffusion iterations. Therefore, the model is not trained for the inpainting task itself and can better handle out-of-distribution masks. §.§ 360° Panorama Image Outpainting Unlike normal images, 360° images are subjected to equirectangular projection. As a result, all objects and layouts in the images are distorted to varying amounts depending on placement, more so nearer the top and bottom poles. The generated image has to not only maintain the distorted structure but also be visually plausible, with the two ends also needing to be wraparound-consistent. Some works <cit.> are focused on deterministic completion of 360° RGB images, with BIPS <cit.> further extending this to RGB-D panorama synthesis. In order to generate diverse results, SIG-SS <cit.> uses a symmetry-informed CVAE, while OmniDreamer <cit.> uses transformer-based sampling. Both require additional mechanisms to achieve this goal. For DDPM, every reverse diffusion step is inherently stochastic since it incorporates noise from a Gaussian distribution, giving diverse results. §.§ Latent Diffusion LDMs <cit.> can be trained on larger image scales because they perceptually compress images into a smaller latent space with lower diffusion cost. Given an image in RGB space, the encoder ℰ maps x into a latent representation z=ℰ(x)∈ℝ^h×w×c. In contrast to previous works <cit.> that relied on an arbitrary 1D ordering of the learned space z to model its distribution autoregressively, the inherent spatial structure of the image does not change but is downscaled during this process, with the decoder 𝒟 used for returning to higher resolutions. Two different kinds of regularization, KL-reg and VQ-reg, were experimented with in <cit.>. In our work, we chose to use the VQ model, which uses a vector quantization layer <cit.> within the decoder. As stated in <cit.>, it can be interpreted as a VQGAN <cit.> but with the quantization layer absorbed by the decoder. § METHODS Given a 360° image x∈ℝ^H× W× C, degraded by a number of missing pixels to become a masked image x_m, our main goal is to infer semantically meaningful content for the missing regions, while simultaneously generating visually realistic appearances. 
This task is conceptually similar to conventional learning-based image inpainting, but this setting faces greater challenges due to the following three differences: 1) our output is a 360° panorama image with wraparound consistency, rather than a typical NFoV image; 2) the outpainting mask is significantly larger than those used in traditional inpainting; 3) our goal is to generate multiple appropriate objects within a scene, instead of simply replacing objects with generic background. To address these challenges, we propose a novel framework, called . As depicted in Fig. <ref>(a), the training stage starts with two branches for RGB and depth. In each branch, the input is embedded into the latent space prior to the discrete layer in the corresponding pre-trained VQ model, following <cit.>. These representations are then combined to form z_rgbd, which undergoes diffusion to obtain z_T. The resulting z_T is inversely denoised back to the original latent domain through a trained UNet+attention structure. Finally, the pre-trained decoder is employed to rebuild the full RGB-D results. During inference, our system takes a masked RGB image, and conducts panoramic outpainting. Note that our proposed model does not require harder-to-acquire depth maps as input, needing only a noise map (Fig. <ref>(b)). The output is then super-resolved into the final image in a refinement stage (Fig. <ref>(c)). §.§ Latent Diffusion Outpainting As mentioned, current diffusion-based inpainting methods <cit.> are often restricted to small image sizes, typically up to 256×256. Additionally, these approaches do not ensure wraparound consistency during completion, which is crucial for 360° panoramas. Finally, they do not work well for producing multiple objects within large masks. In order to perform our task on 512×1024 panoramas, we extend RePaint <cit.> to latent space outpainting. This is possible because the partially visible regions are not changed during perceptual image compression. Note that the 360° wraparound consistency is still preserved in both the pixel and latent domains, which is important for our setting. To further ensure such a wraparound consistency, a rotational outpainting mechanism is introduced in Sec. <ref>. The diagram of our latent diffusion outpainting method is shown in Fig. <ref>. Let x denote the original visible image, while m⊙x and (1-m)⊙x represent the missing and visible pixels, respectively. The latent input z is then defined as z=ℰ_θ((1-m)⊙x). In the completion task, we expect the model to generate plausibly reasonable content for the missing regions, while preserving the visible information as much as possible. Therefore, we add a step-dependent amount of Gaussian noise to the known regions, while denoising the previous latent vector for one step. To combine them, the mask is first downscaled to the latent vector size m_d, with the noised and denoised results then processed by the mask and inverse mask respectively. For each outpainting step, the process can be described by the following expressions: z^known_t-1∼q(z_t|z_t-1), z^unknown_t-1∼p_θ(z_t-1|z_t), z_t-1=m_d⊙z^known_t-1+(1-m_d)⊙z^unknown_t-1. Here, q is the forward distribution in the diffusion process and p_θ is the inverse distribution. After T iterations, z_0 is restored to image space using the pre-trained VQ decoder. 
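The step above can be sketched roughly as follows (PyTorch-style pseudocode; the schedule tensors `alphas`, `alphas_cumprod`, `sigmas`, the noise predictor `eps_model`, and the convention that `mask_d` equals 1 on visible latents are our assumptions for illustration and need not match the released implementation). The known region is re-noised to level t-1 by applying the forward process to the latent of the visible content, the unknown region takes one ancestral denoising step, and the two are blended with the downscaled mask.

```python
import torch

def outpaint_step(z_t, z0_known, mask_d, t, eps_model, alphas, alphas_cumprod, sigmas):
    """One reverse step of masked latent outpainting (RePaint-style sketch).

    z_t      : current latent sample at step t
    z0_known : encoder latent of the masked input image (visible regions)
    mask_d   : mask at latent resolution (1 = known/visible, 0 = unknown)
    t        : integer timestep index; eps_model is assumed to accept it directly
    """
    # Known regions: re-noise the clean latent of the visible content to level t-1
    # with the forward (diffusion) process.
    noise = torch.randn_like(z0_known)
    a_prev = alphas_cumprod[t - 1]
    z_known = a_prev.sqrt() * z0_known + (1.0 - a_prev).sqrt() * noise

    # Unknown regions: one ancestral denoising step with the trained model.
    eps = eps_model(z_t, t)
    a_t = alphas_cumprod[t]
    z0_pred = (z_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    mean = (alphas[t].sqrt() * (1.0 - a_prev) * z_t
            + a_prev.sqrt() * (1.0 - alphas[t]) * z0_pred) / (1.0 - a_t)
    z_unknown = mean + sigmas[t] * torch.randn_like(z_t)

    # Blend: keep the (re-noised) known content, fill the rest with the model's sample.
    return mask_d * z_known + (1.0 - mask_d) * z_unknown
```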
§.§ Camera-rotation and Two-end Alignment Mechanism for 360° Panorama Since 360° panoramas are meant to be wraparound consistent, we apply a circular shift data augmentation, called camera-rotation, to the panorama image dataset (examples shown in Fig. <ref>), to enhance the model's performance. In particular, we randomly select a rotation angle, and use it to crop and re-stitch the patch to produce a new panorama. While camera-rotation may improve the model's implicit understanding of the expected wraparound consistency by providing a large number of data-augmented examples, it still does not impose strong enough constraints on wraparound alignment of the results. Therefore, in the inference processing, we propose a novel two-end alignment mechanism that can be naturally combined with our latent diffusion outpainting process. The denoising process of DDPM consists of a number of iterations, rather than a single step. During each iteration, we apply the camera-rotation operation to rotate both the latent vectors and masks by 90°, before performing an outpainting step. This procedure more effectively connects the two ends of the panorama from the previous step, which encourages the model to take into account the condition that the two ends are actually connected and generate aligned ends. Instead of changing the size of the images, generating overlapping content, or introducing extra loss functions, we provide `hints' to the model by rotating the panorama horizontally, thus enhancing the effect of alignment at both ends (examples shown in Fig. <ref>). §.§ Bi-modal Latent Diffusion Model In order to introduce depth information to aid RGB generation, perhaps a simple idea would be to use depth information as an explicit condition during training and inference. The depth information may be compressed into latent space and then introduced into the denoising process of the RGB images via cross-attention. However, through experiments, we have found that such an approach often leads to blurry results (Fig. <ref>). Meanwhile, using two parallel LDMs to reconstruct depth and RGB images separately, together with a joint loss, may also appear to be an intuitive solution. However, this idea is difficult to implement due to the computational resource requirements of multiple LDMs. Therefore we designed a bi-modal latent diffusion structure to introduce depth information while generating high-quality RGB output, but which is needed only during training. Specifically, we trained two separate VQ models for RGB and depth images, and then concatenate z_rgb∈ℝ^h×w×3 with z_depth∈ℝ^h×w×1 at the latent level to get z_rgbd∈ℝ^h×w×4. The training of VQ models are exactly the same as in LDM with downsampling factor f=4. Then we follow the standard process to train an unconditional DDPM with z_rgbd via a variant of the original LDM loss: L_RGB-D LDM := 𝔼_z_rgbd, ϵ∼𝒩(0,1),t[‖ϵ-ϵ_θ(z_t,t)‖_2^2], z_rgbd= Cat(ℰ_1(x_rgb);ℰ_2(x_depth)) Reconstructed RGB-D images can be obtained by decoupling z_rgbd and decoding. It is important to note that during training, we use the full RGB-D image as input, without masks. Conversely during the inference stage, the model can perform outpainting of the masked RGB image directly without any depth input, with the fourth channel of z_rgbd replaced by random noise. §.§ RefineNet Although mapping images to a smaller latent space via an autoencoder prior to diffusion can save training space and thus allow larger size inputs, the panorama size of 512×1024 is still a heavy burden for LDM <cit.>. 
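The two mechanisms just described amount to circular shifts of the equirectangular panorama and of the latents. A minimal sketch is given below (PyTorch-style; the helper `outpaint_step_fn` and all names are ours for illustration, not the paper's released code).

```python
import torch

def camera_rotation(pano, angle_frac=None):
    """Randomly roll an equirectangular panorama (B, C, H, W) along the horizontal
    axis; wraparound makes this a valid 'camera-rotation' augmentation."""
    _, _, _, W = pano.shape
    if angle_frac is None:
        angle_frac = torch.rand(1).item()      # fraction of a full 360-degree turn
    shift = int(angle_frac * W)
    return torch.roll(pano, shifts=shift, dims=-1)

def aligned_denoising_loop(z_T, mask_d, outpaint_step_fn, num_steps):
    """Two-end alignment: rotate latents and mask by 90 degrees before every
    outpainting step so the seam keeps moving and both ends stay consistent.
    (Any cached known latents inside outpaint_step_fn must be rolled consistently.)"""
    z_t, m = z_T, mask_d
    quarter = z_T.shape[-1] // 4               # 90 degrees in latent width
    for t in range(num_steps, 0, -1):
        z_t = torch.roll(z_t, shifts=quarter, dims=-1)
        m = torch.roll(m, shifts=quarter, dims=-1)
        z_t = outpaint_step_fn(z_t, m, t)
    # Undo whatever net rotation has accumulated after all steps.
    total = (quarter * num_steps) % z_T.shape[-1]
    return torch.roll(z_t, shifts=-total, dims=-1)
```

The resolution constraint noted just before this sketch (a full 512×1024 panorama being a heavy burden for the LDM) is what the refinement stage below addresses.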
Therefore, we adopt a two-stage approach to complete the outpainting task. Initially, the original input is downscaled to 256×512 as the input to the LDM. Correspondingly, the image size of the LDM output is also 256×512. Therefore, an additional module is needed to upscale the output image size to 512×1024. Traditional interpolation methods often lead to blurry results. Also, since panorama images are distorted and the objects and layouts do not follow the regular image patterns, we trained a super-resolution GAN model specifically for panoramas, in order to produce visually plausible results at a higher resolution. § EXPERIMENTS §.§ Experimental Details Dataset. We estimated our model on the Structured3D dataset <cit.>, which provides 360° indoor RGB-D data with a 512×1024 resolution. We split the dataset into 16930 train, 2116 validation, and 2117 test instances. Metrics. Due to large masks, we should not require the completed image to be exactly the same as the original image, since there are many plausible solutions (new furniture and ornaments, and their placement). Therefore, we mainly report the following dataset-level metrics: 1) Fréchet Inception Distance (FID) <cit.>, 2) Spatial FID (sFID) <cit.>, 3) density and coverage <cit.>. FID compares the distance between distributions of generated and original images in a deep feature domain, while sFID is a variant of FID that uses spatial features rather than the standard pooled features. Additionally, density reflects how accurate the generated data is to the real data stream, while coverage reflects how well the generated data generalizes the real data stream. Mask Types. Most works focused on generating omnidirectional images from NFoV images (Fig. <ref>(a)). However, partial observability may also occur due to sensor damage in 360° cameras. Such masks can be roughly simulated by randomly sampling a number of NFoV camera views within the panorama (Fig. <ref>(b)). We also experimented with other types of masks, such as randomly generated regular masks (Fig. <ref>(c)). Finally, the regions with floors and ceilings in panoramic images are often less interesting than the central regions. Hence we also generated layout masks which muffle all areas except floors and ceilings, to more incisively test the model's generative power (Fig. <ref>(d)). Baseline Models. We mainly compared with the following state-of-the-art methods: including inpainting models LaMa <cit.>_WACV'2022 and TFill <cit.>_CVPR'2022, panorama outpainting models BIPS <cit.>_ECCV'2022 and OmniDreamer <cit.>_CVPR'2022. All models are retrained on the Structured3D dataset using their publicly available codes. Implementation Details. To verify the auxiliary effect of depth training input on RGB image generation, we trained two different versions of IPO-LDM, 1) (RGB) and 2) (RGB-D). The RGB-D version follows Fig. <ref>, while the RGB version excludes depth altogether. §.§ Main Results Table <ref> shows the quantitative comparison of RGB panorama outpainting on the Structured3D dataset with camera masks. For a fair comparison, all models are evaluated without depth maps during testing. As can be seen, the proposed model significantly outperforms all state-of-the-art models. Specifically, the FID score is substantially better (relative 67.0% improvement). The effectiveness is also clearly visualized in Fig. <ref>. For BIPS <cit.> and OmniDreamer <cit.>, the generated areas hold obvious gaps to the original visible regions. 
As for LaMa <cit.> and TFill <cit.>, they output blurry results for large invisible areas. Compared to them, our produces more natural results in terms of transition, as well as more realistic floor texture and a more logical and clearer sofa in the central area. Comparing the RGB and RGB-D versions of our , in Fig. <ref>(c), there are some step-like artifacts on the ground in the RGB version. In contrast, the same region of RGB-D result (Fig. <ref>(d)) appears more structurally appropriate. The transitions between the items, walls, ceiling, and floor are also more natural. Such improvement proves the advantages of jointly learning to synthesize depth data along with RGB images, even when depth is not used during test time. §.§ Ablation Experiments We ran a number of ablations to analyse the effectiveness of each core component in our . Results are show in Tables <ref>, <ref>, and <ref>, and Figs. <ref> and <ref>. Depth Maps. We first evaluated the importance of depth maps in panoramic outpainting, reported in Table <ref>. We also compared with the state-of-the-art BIPS <cit.>, as it was also trained with RGB-D images. Besides, we have also considered the depth-conditioned LDM, mentioned in Section <ref>. However, it always produced blurry results (Fig. <ref>), so we leave out its quantitative results. As can be seen, BIPS's performance appears to deteriorate significantly when the input depth visual area is reduced. Conversely, our is not sensitive to these depth maps, indicating that the generic model has successfully handled the modality. Interestingly, we noticed that having fully visible depth at test time did not improve the performance of our , and in fact the result deteriorated slightly. A reasonable explanation for this situation is that during the training process, the signal-to-noise ratios (SNR) of RGB and depth pixels are roughly the same within each iteration, since no masks were used. However, during outpainting, the SNR balance will be disrupted when RGB input is masked and depth input is fully visible. Therefore, the results are degraded, but only slightly because has effectively learnt the distribution of spatial visual patterns across all modalities, without being overly reliant on depth. This also explains why our model is more robust to depth inputs with different degrees of visibility. Mask Types. As described in previous sections, our model is trained unconditionally, with masks only used during the inference phase. Therefore, our model is supposed to be able to handle a greater range of mask types, with performance less affected by the specific mask shape, and mainly by the area of the mask. To prove this, we tested and compared the performance of our model and the baseline models under four different mask types, as described previously. As the performance of BIPS varies considerably depending on whether depth is visible or not, we listed both its performance when depth is partially visible (with depth), and when it is not visible at all (w/o depth). The results are shown in Table <ref>, which shows that both RGB and RGB-D IPO-LDM significantly outperform the baseline models for all types of masks, while RGB-D IPO-LDM achieved the best results. This indicates that LDM is able to perform outpainting of panoramas best, and also proves that the introduction of depth information significantly improves the quality of the generated RGB panorama. Conversely, the performance of baseline models can vary considerably between mask types. 
For BIPS, although the difficulty between outpainting with camera masks and NFoV masks is not significant, the performance on camera masks is significantly better. This is likely due to BIPS using camera masks in the training process. In contrast, IPO-LDM has a more robust performance, producing high-quality and diverse output images for all mask distributions. Two-end Alignment. Currently, there is no corresponding quantitative metric to evaluate the performance of aligning the two ends of an image. To make a more reasonable comparison, we make one side of the input image be fully visible, and the other side fully masked. Then compare the output with/without our rotational outpainting using the same model. To compare as many results, we only show the end regions that are stitched together to highlight the contrast. The same tests were also performed on BIPS <cit.> and OmniDreamer <cit.>. The comparison results are shown in Fig. <ref>. They show that the consistency of the two ends of the results is improved after the use of rotational outpainting, especially the texture of the walls and the alignment of the layout. Still, differences can be found with rotated outpainting. We believe it is mainly due to the fact that rotational denoising is based on the latent level, which may introduce extra errors during decoding. RefineNet. As described in Section <ref>, we trained a super-resolution model to increase the resolution of the input from 256×512 to 512×1024 as the second stage of our framework. Quantitative results on camera and NFoV masks are shown in Table <ref>. The results show an overall increase in performance. § DISCUSSION Experimental results show that our model significantly outperforms baseline models on the 360° indoor RGB panorama outpainting task. Nevertheless, we consider several aspects of the model that can be further improved. Depth Outpainting. We believe that IPO-LDM not only performs well for RGB outpainting but can also be used for depth image outpainting. The architecture we designed should in theory also be able to naturally synthesize RGB-D panoramas. In our experiments, however, depth generation did not perform well and contained non-negligible noise. From our analysis, we believe the reason for this issue is that the VQ model for depth modeling is not robust enough. Since the depth datasets used to train the LDM and VQ models are the same, latent depth can provide accurate information to assist RGB generation during training. However, during inference, the VQ model may not be able to accurately compress test depth input into the latent space. This may also explain why our model is inferior to BIPS in terms of coverage and density under the fully visible depth condition, as the depth information is not fully exploited. We hope that this problem can be solved in our subsequent work, which will lead to the simultaneous generation of consistent RGB-D panoramas. Manipulable Panorama Outpainting. Even though our model is capable of generating diverse plausible results, it will be more meaningful to have a manipulable generative process. We will investigate such capabilities in the future. By using some simple prompts as conditions, the user can manipulate the generation of results, which enhances the usability of the model. § CONCLUSION In this paper, we show that our proposed method, the two-stage RGB-D IPO-LDM, achieves state-of-the-art performance for indoor RGB panorama outpainting. 
The introduction of depth information via our bi-modal LDM structure significantly improves the performance of the model. Such improvement illustrates the effectiveness of using depth during training as an aid to guide RGB panorama generation. In addition, we show that the alignment mechanism we employ at each step of the denoising process of the diffusion model enhances the wraparound consistency of the results. With the use of these novel mechanisms, our two-stage structure is capable of generating high-quality RGB panoramas at 512×1024 resolution. ieee_fullname
http://arxiv.org/abs/2307.01980v3
20230705014324
The existence of distinguishable bases in three-dimensional subspaces of qutrit-qudit systems under one-way local operations and classical communication
[ "Zhiwei Song", "Lin Chen", "Dragomir Z. Djokovic" ]
quant-ph
[ "quant-ph" ]
zhiweisong@buaa.edu.cn, School of Mathematics and Systems Science, Beihang University, Beijing 100191, China
linchen@buaa.edu.cn (corresponding author), School of Mathematics and Systems Science, Beihang University, Beijing 100191, China, and International Research Institute for Multidisciplinary Science, Beihang University, Beijing 100191, China
dragomir@rogers.com, Department of Pure Mathematics and Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
We show that every three-dimensional subspace of qutrit-qudit complex or real systems has a distinguishable basis under one-way local operations and classical communication (LOCC). In particular, this solves an open problem proposed in [J. Phys. A, 40, 7937, 2007]. We construct a three-dimensional space whose locally distinguishable basis is unique and apply the uniqueness property to the task of state transformation. We also construct a three-dimensional locally distinguishable multipartite space assisted with entanglement. On the other hand, we show that four-dimensional bipartite subspaces that are indistinguishable under one-way LOCC exist. Further, we show that the environment-assisted classical capacity of every channel with a three-dimensional environment is at least log_2 3, and that the environment-assisting classical capacity of any qutrit channel is log_2 3. We also show that every two-qutrit state can be converted into a generalized classical state near the quantum-classical boundary by an entanglement-breaking channel.
03.65.Ud, 03.67.Mn, 03.67.-a
The existence of distinguishable bases in three-dimensional subspaces of qutrit-qudit systems under one-way local operations and classical communication
 Zhiwei Song, Lin Chen, and Dragomir Ž. Djoković
 August 1, 2023
========================================================================================================================================================
Introduction.- Quantum nonlocality has been widely regarded as one of the fundamental properties of quantum mechanics. Nonlocality can be manifested by Bell inequalities <cit.>, quantum entanglement <cit.>, the indistinguishability of multipartite states under local operations and classical communication (LOCC) <cit.>, the construction of uniform states in heterogeneous systems <cit.>, and strongly nonlocal unextendible product bases <cit.>. These physical phenomena and resources have been key ingredients in applications such as quantum computing, data hiding and secret sharing. Further, the asymptotic LOCC discrimination of bipartite states is related to the Chernoff distance <cit.>.
It is known that every two-dimensional subspace of arbitrary multipartite system has a locally distinguishable orthonormal basis <cit.>. The same conclusion holds for every qubit-qudit subspace <cit.>. In contrast, finding such a basis in higher-dimensional spaces turns out to be a hard problem. Watrous proved the existence of bipartite subspaces having no distinguishable basis under LOCC <cit.>. The currently known minimal dimension of a bipartite indistinguishable subspace under LOCC is seven <cit.>. The main problem so far is whether every three-dimensional subspace of qutrit-qudit systems has a basis which is distinguishable under one-way LOCC <cit.>. Note that one-way LOCC requires a fixed ordering of the actions by the parties. More specifically, suppose Alice and Bob share a combined quantum system, we say that an orthonormal basis is distinguishable under one-way LOCC if measurements are made first by Alice and the results are sent to Bob through classical communications, who can select a measurement to distinguish the basis. A subspace is called indistinguishable under one-way LOCC if such a basis does not exist. The problem mentioned above has attracted attention in the past years <cit.>. In this paper, by using some well-known facts from differential topology, we give a positive answer to the problem stated above, but a negative answer if the dimension of the subspace becomes four. We show that the locally distinguishability of bipartite system can be extended to multipartite system assisted with entanglement. We also investigate the uniqueness property of the locally distinguishable basis and apply the property to the task of state transformation. Locally distinguishable and indistinguishable subspaces play an important role in the study of classical corrected capacity of quantum channels, which is defined as the best classical capacity one can achieve when the receiver of the noisy channel can be assisted with a friendly environment through LOCC <cit.>. In a short word, measurements on the environment and system can help to recover the input information from the channel. The classical corrected capacity of any quantum channel is at least one bit of information according to the result of <cit.>. It was later shown that the classical corrected capacity of any rank-two quantum channel is log_2 d, where d is the dimension of the input space <cit.>. On the other hand, there exist quantum channels with classical corrected capacity less than log_2 d<cit.>. Our results imply that by first measuring on the environment, the environment-assisted classical capacity of every channel with a three-dimensional environment is at least log_2 3. In addition, by first measuring on the system, the environment-assisting classical capacity of every qutrit channel is log_2 3. Next, we apply our result to investigate the nonlocality without entanglement in terms of the so-called generalized classical state in the quantum-classical boundary <cit.>. We show that every two-qutrit state can be converted into a generalized classical state near the quantum-classical boundary by a local entanglement-breaking channel from the side of system Alice (or Bob). Further, if the other side, namely the system Bob (or Alice) is in the maximally mixed state then the generalized classical state becomes a classical state. So the quantumness of both systems can be removed by a local operation by Alice (or Bob) only. The full classicality also implies the tasks of deterministic local broadcasting <cit.>. 
Description and proof of the K-M conjecture.- Let =_A_B be the bipartite Hilbert space with _A=m,_B=n. Let denote the space of Hermitian operators on . We partition each M∈ into m^2 square blocks M_ij of order n. For each M∈, we set M_A:=∑_j=1^n ⟨j|_B M |j⟩_B and M_B:=∑_j=1^m ⟨j|_A M |j⟩_A where {|j⟩_A} and {|j⟩_B} are the computational bases of _A and _B respectively. We introduce two vector subspaces of , namely _0:={M∈:M_B=0} and _00={M∈_0: M_A=0}. Note that M_B=∑_i=1^m M_ii. We refer to the M_ii as the diagonal blocks of M. We say that M∈ is a dd-matrix if all diagonal blocks M_ii are diagonal matrices. Denote by _00 the subspace of _00 consisting of all dd-matrices. We refer to a quantum state as a dd-state if the density matrix of the state is a dd-matrix. The inequality M≥ 0 means that M is positive semidefinite. By (n) we denote the group of unitary matrices of order n, and we refer to the direct product (m)×(n) as the local unitary group. The matrices in (n) with determinant 1 form the special unitary group (n). The subgroup (n) consists of all monomial matrices in (n). (A matrix is monomial if each row and each column has exactly one nonzero entry and all nonzero entries have modulus 1.) The action of (m) ×(n) on is defined by (U,V)· M := (U V)M(U V)^†. Since the center of (m) ×(n) acts trivially on , the orbits of (m) ×(n) in are the same as those of (m)×(n), and we refer to them as LU-orbits. Two matrices M_1,M_2∈ are LU-equivalent if they belong to the same LU-orbit, i.e., (U,V)· M_1=M_2 for some (U,V)∈(m)×(n). Assume now that m=n=3. Thus for M∈ we have M=[M_ij], i,j=1,2,3. Let _1:={M∈:M_B=I_3}. The following conjecture was proposed by King and Matysiak <cit.>. Let us state their mathematical formulation of the conjecture to which we shall refer as the K-M conjecture. K-M conjecture: If M∈_1 and M≥0, then M is LU-equivalent to a dd-matrix. Now we present the main result of this paper. The detailed proof will be given in Appendices A and B. The K-M conjecture is true. Equivalently, any three-dimensional subspace of ^3⊗^n has an orthonormal basis which is distinguishable under one-way LOCC. For the equivalence of the mathematical formulation above and the original one in terms of local distinguishability see <cit.>. Here we point out that any positive semidefinite matrix M∈_1 can be associated with a three-dimensional subspace of ^3⊗^n with a chosen orthonormal basis and a specified measurement basis of ^3. Further, changing the orthonormal basis of the subspace induces a map M→ (I_3, V)· M for some V∈(3) and changing the measurement basis induces a map M→ (U,I_3)· M for some U∈(3). The method used in the proof of Theorem <ref> can be also applied to the real case. Thus we obtain the following result, a detailed proof of which will be given in Appendix C (Theorem <ref>). Any three-dimensional subspace of ^3⊗^n has an orthonormal basis which is distinguishable under one-way LOCC. Construction and application of unique locally distinguishable bases in multipartite systems.- We construct a three-dimensional subspace of ^3⊗^n whose one-way locally distinguishable basis is unique (ignoring the phase factors). As a contrast, any subspace of ^2⊗^n has infinitely many one-way locally distinguishable bases <cit.>. 
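To connect the matrix formulation with the operational protocol, the following NumPy sketch (our own illustration, not code from this paper) checks whether an orthonormal basis of a three-dimensional subspace of C^3 ⊗ C^n is distinguishable when Alice measures in a candidate orthonormal basis given by the columns of U: the basis is one-way distinguishable for this U exactly when, for every outcome, the conditional B-side states are pairwise orthogonal, which is the dd condition on the associated matrix M.

```python
import numpy as np

def one_way_distinguishable(psis, U, tol=1e-9):
    """One-way LOCC distinguishability test for a basis {|Psi_i>} of C^3 (x) C^n.

    psis : array of shape (3, 3, n); psis[i, a] is the B-side vector <a|_A |Psi_i>
           in the computational basis of A.
    U    : 3x3 unitary whose columns are Alice's measurement basis.
    """
    k, dA, n = psis.shape
    for j in range(dA):
        # Conditional (unnormalised) B-states for Alice outcome |a_j> = U[:, j].
        cond = np.einsum('a,ian->in', U[:, j].conj(), psis)
        gram = cond.conj() @ cond.T          # overlaps <rho_ij | rho_lj>
        off = gram - np.diag(np.diag(gram))
        if np.abs(off).max() > tol:
            return False                      # Bob cannot distinguish after outcome j
    return True                               # every outcome leaves orthogonal B-states
```

Theorem 1 guarantees that some U passes this test for every three-dimensional subspace; searching over a parametrization of U(3) (or O(3) in the real case) and running such a check is one way to verify small examples numerically, including the explicit construction via the matrix G that follows.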
To be specific, let G=( [ 2/3 0 0 0 0 0 0 0 0; 0 1/√(3) 0 0 0 i/3 √(3) 2/3 √(3) 0 0; 0 0 √(2)/3 0 0 0 0 0 0; 0 0 0 1/√(3) 0 0 0 0 0; 0 0 0 0 √(2)/3 0 0 0 0; 0 0 0 0 0 √(11)/3√(3) 3+2 i/3√(33) 0 0; 0 0 0 0 0 0 1/√(33) 0 0; 0 0 0 0 0 0 0 2/3 0; 0 0 0 0 0 0 0 0 1/√(3); ]), and |Ψ_1⟩=|1⟩⊗|r_1⟩+|2⟩⊗|r_4⟩+|3⟩⊗|r_7⟩, |Ψ_2⟩=|1⟩⊗|r_2⟩+|2⟩⊗|r_5⟩+|3⟩⊗|r_8⟩, |Ψ_3⟩=|1⟩⊗|r_3⟩+|2⟩⊗|r_6⟩+|3⟩⊗|r_9⟩, where |r_i⟩ is the i-th column vector of G. A calculation shows G^† G=1/9H_0+1/3I_9 where H_0 is given in (<ref>). Hence G^† G is a dd-matrix and (G^† G)_B=I_3. This implies that {|Ψ_1⟩,|Ψ_2⟩,|Ψ_3⟩} is a distinguishable basis under one-way LOCC. Using Lemma <ref> in Appendix B, one can verify that (U, V)· (G^† G) is a dd-matrix only if U,V∈(3). We obtain that the subspace spanned by |Ψ_i⟩'s contains no other locally distinguishable basis. Using any multipartite state |⟩, one can further construct the multipartite orthonormal basis |_1⟩⊗|⟩,|_2⟩⊗|⟩,|_3⟩⊗|⟩, which is a unique one-way locally distinguishable basis in the multipartite space. It is straightforward to see that the three states |Ψ_1⟩, |Ψ_2⟩, and |Ψ_3⟩ are equivalent under local unitary equivalence, and each of them has entanglement approximately 1.53 ebits of the von Neumann entropy of _A_1. Further, by calculation one can see that every state in the span of the three states has entanglement more than 1.52 ebits. Hence, the entanglement of formation (EOF) of every mixed state whose range is contained in the span is also more than 1.52 ebits <cit.>. We don't know whether this EOF is exactly equal to that of the pure state |Ψ_1⟩. Next, we apply the uniqueness property of some one-way locally distinguishable bases to the task of state transformation under LU-equivalence <cit.>. We say that two n-partite states and $̱ are LU-equivalent when there is a LU gateU=⊗^n_j=1U_jsuch that=UU̱^. The LU-equivalent states have common properties useful for quantum-information processing, because they can be locally prepared from each other. Due to the great number of parameters, it is usually not easy to determine whether two states are LU-equivalent. Due to the uniqueness property proven byU,V∈(3), one can see that the non-normalized bipartite stateG^ Gis not LU-equivalent to any two-qutrit dd-stateρ, which is LU-equivalent to another dd-state viaU, V ∉(3). Such a stateρcan be chosen asρ=[M_ij]with all blocks being the diagonal matrices, e.g., the generalized classical states <cit.>. By a similar reason, determining the LU-equivalence of two multipartite states is likely when one of them has a bipartite reduced density operator with the uniqueness property. Discrimination of multipartite spaces.- The above results can be extended to multipartite systems assisted with entanglement. Taking the tripartite system as an example, suppose Alice, Bob and Charlie share a combined system of^3⊗^m⊗^n. By Theorem <ref>, any three-dimensional subspacehas an orthonormal basis|η_1⟩,|η_2⟩,|η_3⟩written as|η_i⟩=∑_j=1^3 |a_j⟩_A⊗|ρ_i,j⟩_BC, where|a_j⟩is an orthonormal basis ofA-system and, for each fixedj, the three bipartite states|ρ_i,j⟩_BCinBC-system are orthogonal. Alice measures her system by the basis{|a_1⟩,|a_2⟩,|a_3⟩}and tells the result to Charlie through classical communications. Bob teleports his particle to Charlie by usinglog_2mentanglement and classical communications so that Charlie owns|ρ_i,j⟩_BC. Because the|ρ_i,j⟩_BC's are orthogonal to each other, they are distinguishable by Charlie. The above process can be extended to any number of systems as follows. 
Suppose A_1,⋯,A_n share an n-partite system of ^3⊗^d_2⊗⋯⊗^d_n with n≥ 2. Then any three-dimensional subspace is locally distinguishable assisted with log_2 (d_2...d_n-1) ebits as the entanglement of a d_2...d_n-1-level maximally entangled state, as well as one-way classical communications from A_1,⋯, A_n-1 to A_n. Existence of bipartite four-dimensional indistinguishable subspaces under one-way LOCC.- We shall prove the following theorem. There exist four-dimensional indistinguishable bipartite subspaces under one-way LOCC. Let _A=3,_B=4. Each M∈ is partitioned into 9 square blocks of order 4. Since the dimensions of the manifolds (3)×(4)×_00 and _00 are 119 and 120 respectively, the map f:(3)×(4)×_00→_00 defined by f(X,Y,Z)=(X, Y)· Z is not onto. Let us choose a matrix K_0∈_00 which is not in the image of f, i.e., K_0 is not LU-equivalent to any dd-matrix. Next, we choose a small ϵ>0 such that K:=1/3I_12 + ϵ K_0≥ 0. Moreover, K is not LU-equivalent to any dd-matrix. Further, K=P^† P for a matrix P who has 12 columns. We now define four bipartite states |Φ_k⟩= |1⟩|p_k⟩+ |2⟩|p_k+4⟩+|3⟩|p_k+8⟩, for k=1,2,3,4, where |p_k⟩ is the k-th column vector of P. One can verify that K_B=I_4 and thus these four states are orthonormal. Since K is not LU-equivalent to any dd-matrix, we deduce that the subspace spanned by |Φ_i⟩'s is indistinguishable under one-way LOCC. Application to classical capacity of quantum channels.- Any quantum channelΦcan be viewed as arising from a unitary interactionUbetween the systemand the environment. The unitary operatorUmaps the orthogonal input states to orthogonal ones in⊗. Specifically, we can write Φ(ψ)=Tr_[U(ψ⊗ϵ)U^†], where|ϵ⟩is the initial state of the environment and the partial trace is taken over the environment. However, the output of the system may not be orthogonal after tracing out the environment, and thus cannot be distinguished perfectly. It is possible to more reliably distinguish output states of a noisy quantum channel by using measurements on the environment or on the system. This idea of enhancing the channel corrected capacity has been considered in a number of settings <cit.>. The two notions environment-assisted and environment-assisting were introduced in <cit.>. Specifically, assume that the input state|ψ⟩in (<ref>) varies over the system space, the stateU(|ψ⟩⊗|ϵ⟩)varies over a subspaceof⊗, wheredenotes the environment space. Supposehas a basisU(|ψ_i⟩⊗|ϵ⟩)that can be distinguished using one-way LOCC, then we can encode classical information in the system states|ψ_i⟩and completely recover the information by measuring the environment (resp. system) and followed by a selected measurement on the system (resp. environment). In this setting, we say that the classical environment-assisted (resp. environment-assisting) capacity of the channel islog_2 d, wheredis the dimension of. For a channel with a three-dimensional environment, the dimension ofis three. For a qutrit channel, the dimension ofis three. The results stated in the next corollary follow from Theorem <ref>. The environment-assisted classical capacity of every channel with a three-dimensional environment is at least log_2 3. The environment-assisting classical capacity of any qutrit channel is log_2 3. Application to quantum-classical boundary.- The study of nonlocality without entanglement in terms of correlations such as discord has attracted attention in the past decades <cit.>. 
The nonlocality can be manifested by the multipartite states lying near the quantum-classical boundary, namely the so-called generalized classical and classical states <cit.>. For convenience we shortly review them as follows. Let{|ϕ(i⃗)⟩}={|ϕ^(1)_i_1ϕ^(2)_i_2…ϕ^(N)_i_N⟩}be a basis of product states. It is known that a multipartite separable state$̊ is a convex sum of product states. We refer to $̊ as a generalized classical (resp. classical) state for the k^th system if$̊ is diagonal in a product state basis and the |ϕ^(k)_i_k⟩ are linearly independent (resp. orthonormal). Further, the state ρ is fully generalized classical (resp. fully classical) if it is diagonal in every system with a linearly independent (resp. orthonormal) basis. The generalized classical and classical states can be both efficiently detected by using existing semidefinite programming (SDP) <cit.>. Given any two-qutrit state acting on a system _AB, by using Theorem <ref> we deduce that there exists an orthonormal basis {|a_j⟩} of system A such that α=∑^3_i,j=1|a_i⟩⟨a_j|⊗ M_ij where M_11, M_22,M_33 are simultaneously congruent to diagonal matrices. Hence there exists an entanglement breaking channel Ł:=̱∑^3_j=1(a_j⊗ I_3)(a_j⊗ I_3), such that $̱ is a classical state w.r.t. the systemAand a generalized classical state w.r.t. the systemB. In particular if_Bis the maximally mixed state then$̱ becomes a fully classical state. In this sense, the quantumness of both of two distant systems can be totally removed by a local operation of only one system. Further, the full classicality is related to the tasks of deterministic local broadcasting and deterministic non-disruptive local state identification <cit.>. In addition, the length of a generalized classical state equals its rank, and it represents the minimum cost of generating the state <cit.>. A similar argument may be extended to the multipartite system. Summary and outlook.- We have shown that a distinguishable basis under one-way LOCC exists in any three-dimensional subspace of qutrit-qudit real or complex systems. We have applied the results to quantum information issues such as the locally distinguishability of multipartite spaces, the corrected capacity of quantum channels, and the quantum-classical boundary. There are several questions arising from this letter. The first question is how to construct analytical expressions for such a basis. Another question is whether any bipartite three-dimensional subspace is distinguishable under one-way LOCC. We conjecture that the answer is yes. This would imply that the environment-assisted classical capacity of every qudit quantum channel is at least log_2 3. The corresponding mathematical conjecture asserts the following: if M≥ 0 is a matrix of order 3m(m≥ 4), partitioned into m^2 blocks M_ij of order 3, with M_B=I_3, then M is LU-equivalent to a dd-matrix. Finally, we have shown the existence of bipartite four-dimensional indistinguishable subspaces under one-way LOCC. However, the question: does such an indistinguishable subspace exist under a general LOCC protocol, remains open? We thank Li Yu and Nengkun Yu for useful comments. ZWS and LC were supported by the NNSF of China (Grant No. 11871089). unsrt § APPENDIX A: THE PROOF OF K-M CONJECTURE We first prove the following Lemma. Each of the following three assertions is equivalent to the K-M conjecture: (i) each matrix in _0 is LU-equivalent to a dd-matrix; (ii) each matrix in _1 is LU-equivalent to a dd-matrix; (iii) each matrix in _00 is LU-equivalent to a dd-matrix. 
K-M conjecture⟹ (i): Let M_0∈_0 be arbitrary. Choose small >0 such that M_1:=I_9/3+ M_0>0. By the K-M conjectureM_1 is LU-equivalent to a dd-matrix. Hence, M_0=^-1(M_1-I_9/3) is also LU-equivalent to a dd-matrix. (i) ⟹ (ii): This is true because _1=_0+I_9/3. The implications (ii) ⟹ K-M conjecture and (i) ⟹ (iii) are trivial. (iii) ⟹ (ii): Let M∈_0 be arbitrary. Then N:=M-M_A I_3/3 ∈_00. By (iii), N is LU-equivalent to a dd-matrix. Hence, the same is true for M=N+M_A I_3/3. Proof of K-M conjecture In view of Lemma <ref>, it suffices to prove that each matrix in _00 is LU-equivalent to a dd-matrix. Recalling the definition of _00, we have to prove that the map f: (3)×(3)×_00→_00 defined by f(U,V,H)=(U,V)· H is onto. Note that (3) ×(3) ×_00 and _00 are smooth manifolds of dimensions 8+8+52=68 and 64, respectively. Let _00 and _00 denote the real projective spaces associated with _00 and _00, respectively. For nonzero H ∈_00, we denote by [H] the 1-dimensional subspace of _00 viewed as a point of _00. Since _00 and _00 are real vector spaces and f is linear in H, it induces a smooth map ϕ: (3) ×(3) ×_00→_00, ϕ(U,V,[H]):=[f(U,V,H)]=[(U,V)· H]. To prove the theorem, it suffices to show that ϕ is onto. The manifolds (3) ×(3) ×_00 and _00 are compact with no boundary and have dimensions 67 and 63, respectively. We denote by Γ the subgroup (3)×(3) of (3)×(3). For convenience we shall write ∈ as the ordered pair (_1,_2) where _1,_2∈(3). One can verify that the subspace _00⊆_00 is Γ-invariant. Consequently acts on the manifold (3) ×(3) ×_00 as follows: ∙(U,V,H):= (U_1^†,V_2^†,(_1,_2)· H). This action of induces an action on the manifold (3) ×(3) ×_00 which we will denote by the same symbol. Thus we have ∙(U,V,[H]):= (U_1^†,V_2^†,[(_1,_2)· H]). It is straightforward to verify that this action of is free, which means that if ∈ fixes a point (U,V,[H]) then =(I_3,I_3). The corresponding quotient space (also known as the orbit space) :=( (3) ×(3) ×_00 )/ is also a smooth compact manifold, see e.g. <cit.> or <cit.>. We shall denote by (U,V,[H])^# the image in of a point (U,V,[H])∈(3)×(3)×_00. Further, the dimension of is 63. Since the map ϕ is smooth and constant on each Γ-orbit, it induces a smooth map ϕ^#: →_00, ϕ^#((U,V,[H])^#):=[(U,V)· H]. Let P=(U_0,V_0,[H_0]) ∈(3) ×(3) ×_00, where U_0=V_0=1/8 6 4 2√(3) -1 6 -3√(3) -3√(3) 2√(3) 5 ,  H_0= 1 0 0 0 0 0 0 0 0 0 0 0 0 0 i 2 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 -i 0 0 0 1 1 0 0 0 2 0 0 0 1 -1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 , and let Q:=ϕ(P)=[(U_0,V_0)· H_0]. There is a nice parametrization <cit.> of (3) in terms of 8 angles: θ_i (i=1,2,3) and ϕ_i (i=1,2,…,5). Considering the map f defined in (<ref>), we used 16 angles (eight for each copy of (3)) as coordinates and computed the rank of the Jacobian matrix of f at the point (U_0,V_0,H_0). This rank is 64, and so this point is a regular point of f. Consequently, P is a regular point of ϕ, and P^# is a regular point of ϕ^#. We claim that the equation ϕ^#((U,V,[H])^#)=Q has only one solution, namely P^#. Equivalently, we claim that all solutions of the equation ϕ(U,V,[H])=Q are exactly all points of the -orbit of P. The equation ϕ(U,V,[H])=Q can be written as [(U,V)· H]=[(U_0,V_0)· H_0]. By multiplying H by a suitable nonzero real number, we may assume that the equality (U,V)· H=(U_0,V_0)· H_0 holds. Finally we can rewrite this equation as (U_0^† U, V_0^† V)· H=H_0. By using Lemma <ref> in Appendix B, we deduce that _1:=U_0^† U and _2:=V_0^† V are monomial matrices. 
It follows that (U,V,[H])=(U_0_1,V_0_2,[(_1^†,_2^†)· H_0])∈∙ P. Thus our claim is proved. This means that there is only one point on the manifold which satisfies the equation ϕ^#(∙ (U,V,[H]))=[(U_0,V_0)· H_0]. Hence the point [(U_0,V_0)· H_0] is a regular value of ϕ^#. Consequently the map ϕ^# is onto, see <cit.> or <cit.>. This implies that ϕ is also onto. § APPENDIX B: THE PROOF OF A NECESSARY LEMMA SUPPORTING THEOREM <REF> Our objective here is to prove the following lemma which was used in the proof of Theorem <ref>. Let us recall that P:=(U_0,V_0,[H_0])∈(3)×(3)×_00 where the matrices U_0,V_0,H_0 are specified in (<ref>), and that Q:=ϕ(P)=[(U_0,V_0)· H_0]. Under these assumptions, the equation ϕ(U,V,[H])=Q implies that (U,V)∈. A set of Hermitian matrices can be simultaneously diagonalized by a unitary matrix if and only if they commute with each other. Let us find all U∈(3) such that the three diagonal blocks of (U,I_3)· H_0 commute. We shall refer to such U as good matrices. Since the sum of these three blocks is zero, it suffices to make the first two blocks commute. We can write any U∈(3) as follows <cit.>, U:= u_11 u_12 u_13 u_21 u_22 u_23 u_31 u_32 u_33, where u_11=cosθ_1cosθ_2e^iϕ_1, u_12=sinθ_1e^iϕ_3, u_13=cosθ_1sinθ_2e^iϕ_4, u_21=sinθ_2sinθ_3e^-iϕ_4-iϕ_5-sinθ_1cosθ_2cosθ_3e^iϕ_1+iϕ_2-iϕ_3, u_22=cosθ_1cosθ_3e^iϕ_2, u_23=-cosθ_2sinθ_3e^-iϕ_1-iϕ_5 -sinθ_1sinθ_2cosθ_3e^iϕ_2-iϕ_3+iϕ_4, u_31=-sinθ_1cosθ_2sinθ_3e^iϕ_1-iϕ_3+iϕ_5 -sinθ_2cosθ_3e^-iϕ_2-iϕ_4, u_32=cosθ_1sinθ_3e^iϕ_5, u_33=cosθ_2cosθ_3e^-iϕ_1-iϕ_2 -sinθ_1sinθ_2sinθ_3e^-iϕ_3+iϕ_4+iϕ_5, where 0≤θ_1,θ_2,θ_3≤π/2, 0≤ϕ_1,ϕ_2,ϕ_3,ϕ_4,ϕ_5≤ 2π. It is easy to verify that U is a monomial matrix if {θ_1,θ_2,θ_3}⊆{0,π/2}. Denote by R=[r_ij] the product D_1D_2 and by S=[s_ij] the commutator D_1D_2-D_2D_1 of the first and the second diagonal blocks, D_1 and D_2 respectively, of the matrix (U,I_3)· H_0. Note that U is good if and only if S=0 or, equivalently, R is Hermitian. Since D_1 and D_2 are Hermitian matrices, S is skew-Hermitian. One can easily verify that s_33=0. Since S has trace 0, we have s_22=-s_11. From now on in this proof we assume that U is good and we will prove that U∈(3). For convenience we set ψ:=ϕ_1+ϕ_2-ϕ_3+ϕ_4+ϕ_5. Our first claim is that at least one of the three angles θ_i is equal to 0 or π/2. A calculation shows that s_11=3i/2cos^2 θ_1 sinθ_1 sin 2θ_2 sin 2θ_3 sinψ. Hence the claim holds unless sinψ=0. We may now assume that θ_1,θ_2 ∈ (0,π/2) and sinψ=0. Another calculation shows that sin(ϕ_1-ϕ_4) Re(r_12-r_21) + cos(ϕ_1-ϕ_4) Im(r_12+r_21) = -1/4cosθ_1 sin 2θ_1 sin 2θ_3 cosψ -sinθ_1 sin 2θ_3 (1-3cos^2 θ_1 sin^2 θ_2) sinψ. Since R is a Hermitian matrix the LHS vanishes, and since θ_1 ∈ (0,π/2) and sinψ=0 it follows that sin 2θ_3 =0. Hence our claim is true. Our second claim is that if two of the angles θ_i belong to {0,π/2} then so does the third, and so U∈(3). There are 12 cases to consider. We shall prove that the claim holds in the case θ_1=θ_2=0. The proofs in the other cases are similar and are omitted. By setting θ_1=θ_2=0 in S, a calculation shows that s_13=-sin 2θ_3 e^-i(ϕ_1+ϕ_2+ϕ_5). Since U is good, S=0 and we must have sin 2θ_3=0, i.e., θ_3 is also 0 or π/2. Hence U∈(3). Our third claim is that if at least one θ_i is 0 or π/2 then U∈(3). There are six cases to consider: θ_i=0 or θ_i=π/2, (i=1,2,3). We shall give the proofs for the two cases with i=1. We omit the proofs in the other four cases as they are similar. Suppose first that θ_1=0. 
By setting θ_1=0 in S, a computation shows that s_12=-sin 2θ_2 cos 2θ_3 e^i(ϕ_4-ϕ_1). On the other hand, by setting θ_1=0 and θ_3=π/4 in S we find that s_23=1/2sinθ_2 (i-2cos^2 θ_2) e^-i(ϕ_2+ϕ_4+ϕ_5). Since S=0, the above expressions for s_12 and s_23 imply that sin 2θ_2=0. Our second claim now shows that U∈(3). Next suppose that θ_1=π/2. By setting θ_1=π/2 in S, a computation shows that s_12=sin 2θ_2 cos 2θ_3 e^i(ϕ_4-ϕ_1) +cos^2θ_2 sin 2θ_3 e^-i(2ϕ_1+ϕ_2-ϕ_3+ϕ_5) -sin^2θ_2 sin 2θ_3 e^i(ϕ_2-ϕ_3+2ϕ_4+ϕ_5). After multiplying by e^i(ϕ_1-ϕ_4), we obtain that sin 2θ_2 cos 2θ_3 +sin 2θ_3 (e^-iψcos^2 θ_2 -e^iψsin^2 θ_2) =0. By taking the imaginary parts in this equation, we obtain that sin 2θ_3 sinψ =0. If sin 2θ_3 =0 then our second claim implies that U∈(3). We may assume that sinψ=0, and so cosψ=± 1. By taking the real parts in the above equation, we obtain that sin 2θ_2 cos 2θ_3 + cos 2θ_2 sin 2θ_3 cosψ=0. Suppose cosψ=1, then one of sin(θ_2+θ_3) and cos(θ_2+θ_3) equals to 0. Suppose cosψ=-1, then one of sin(θ_2-θ_3) and cos(θ_2-θ_3) equals to 0. In both cases it is easy to deduce from (<ref>) that U∈(3). It remains to prove that V ∈(3). Since U is good and monomial, the three diagonal blocks of (U,I_3)· H_0 are just a permutation of the three diagonal blocks of H_0. From (<ref>), we see that each of the three diagonal blocks of (U,I_3)· H_0 has 3 distinct eigenvalues. Consequently the three diagonal blocks of (U,V)· H_0 are diagonal matrices only if V∈(3). This completes the proof. § APPENDIX C: THE REAL K-M CONJECTURE AND ITS PROOF Here we consider the real analog of the K-M conjecture. Thus _A and _B will be real Hilbert spaces of dimension 3, and their tensor product =_A_B is taken over the reals, . In this section, denotes the space of real symmetric matrices of order 9. The subspaces _1, _0 and _00 of are defined in the same way as in the complex case. The local unitary group (3)×(3) is now replaced by the local orthogonal (LO) group (3)×(3). Its action on is given by the formula (A,B)· M:=(A B)M(A B)^T. Since the matrices (± I_3,± I_3) act trivially on , each (3)×(3)-orbit in is also an (3)×(3)-orbit. We refer to these orbits as the LO-orbits. Two matrices in are LO-equivalent if they belong to the same LO-orbit. Given a bipartite matrix M, we denote its partial transpose w.r.t. system B by M^_B, i.e., if M=∑^m_1_i=1∑^n_1_j=1|i⟩⟨j| M_i,j then M^_B=∑^m_1_i=1∑^n_1_j=1|i⟩⟨j| M_i,j^T. We introduce an additional subspace, _2. It is the subspace of _00 consisting of all matrices H such that H^_B=H. Equivalently, _2 consists of all matrices H∈_00 having all blocks H_ij symmetric. One can easily verify that all four subspaces: _1, _0, _00 and _2 are LO-invariant. The following conjecture is the real analog of the original K-M conjecture. We shall refer to it as the real K-M conjecture. In the real setting described above, if M∈_1 and M≥0, then M is LO-equivalent to a dd-matrix. As in the complex case, there are several equivalent formulations of this conjecture. We state four of them in the next lemma. Each of the following assertions is equivalent to the real K-M conjecture: (i) each matrix in _0 is LO-equivalent to a dd-matrix; (ii) each matrix in _1 is LO-equivalent to a dd-matrix; (iii) each matrix in _00 is LO-equivalent to a dd-matrix; (iv) each matrix in _2 is LO-equivalent to a dd-matrix. In the cases (i), (ii) and (iii) the equivalence can be proved in the same way as in the complex case. Obviously (iii) implies (iv). To prove the converse, assume that (iv) holds. 
Let N∈_00 and set M=N+N^_B. Since M^_B=M, (iv) implies that there exists (A,B)∈(3)×(3) such that (A,B)· M=(A,B)· N+(A,B)· N^_B is a dd-matrix. Since (A,B)· N^_B=( (A,B)· N )^_B, the matrices (A,B)· N and (A,B)· N^_B share the same diagonal blocks. As their sum is a dd-matrix, the same is true for (A,B)· N. Thus (iv) implies (iii). Denote by _2 the subspace of _2 consisting of all dd-matrices in _2. The real K-M conjecture is true. We have to show that the map f: (3) ×(3) ×_2 →_2 defined by f(X,Y,Z)=(X,Y)· Z is onto. Since _2 and _2 are real vector spaces and the map f is linear in Z, f induces a map ϕ: (3) ×(3) ×_2 →_2, where _2 and _2 are the real projective spaces associated with _2 and _2, respectively. It suffices to show that the map ϕ is onto. Note that (3) ×(3) ×_2 and _2 are compact smooth manifolds with empty boundaries. Moreover they have the same dimension, namely 24, and the map ϕ is smooth. For nonzero Z ∈_2 we denote by [Z] the 1-dimensional subspace of _2 viewed as a point of _2. Denote by S_4 the subgroup of (3) consisting of all monomial matrices in (3). It is isomorphic to the symmetric group of order 24. Let us also introduce the subgroup Γ:=S_4 × S_4 of (3) ×(3). One can easily verify that the subspace _2 ⊆_2 is Γ-invariant. Consequently, we can define an action of Γ on the manifold (3) ×(3) ×_2 as follows: (γ_1,γ_2)∙ (X,Y,[Z]):= (Xγ_1^T,Yγ_2^T,[(γ_1,γ_2)· Z]). It is easy to verify that this action is free. Hence each orbit of Γ has exactly |Γ|=576  (=24^2) points. We deduce that the quotient space :=( (3) ×(3) ×_2 )/Γ is also a smooth compact manifold of dimension 24, see e.g. <cit.>. Moreover the map ϕ induces a smooth map ϕ^#: →_2. Let P=(X_0,Y_0,[Z_0]) ∈(3)×(3)×_2 and let Q:=f(X_0,Y_0,Z_0)∈_2 where X_0=Y_0= 1 0 0 0 0 -1 0 1 0, Z_0= 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 -1 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 -1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 -2 0 0 1 0 0 0 0 0 0 2. We used six Euler angles (three for each copy of (3)) as coordinates and computed the Jacobian determinant of f at the point (X_0,Y_0,Z_0). It is not 0, and so this point is a regular point of the map f. Consequently, the point P is a regular point of ϕ. Further, the image of P in , i.e. the Γ-orbit of P is a regular point of ϕ^#. We shall now prove that ∙ P is the unique point of which satisfies the equation ϕ^#(∙ (X,Y,[Z]))=[Q]. This will prove that the point [Q] is a regular value of ϕ^# and so the map ϕ^# must be onto, see <cit.> or <cit.>. Hence ϕ will be onto too. It is immediate from the definition of the -action that the set f^-1(Q) is -invariant. In particular we have ∙ (X_0,Y_0,Z_0)⊆ f^-1(Q). Let (X_1,Y_1,Z_1)∈ f^-1(Q) be arbitrary. Our first claim is that X_1∈ S_4. We have (X_1,Y_1)· Z_1 = (X_0,Y_0)· Z_0 which we can rewrite as (I_3,Y_0^T Y_1)· Z_1=(X_1^T X_0,I_3)· Z_0. Let X_2:=X_1^T X_0 and M:=(X_2,I_3)· Z_0. We partition M into nine 3 by 3 blocks M_ij. Since X_0∈ S_4, it suffices to prove that X_2∈ S_4. The equation displayed above implies that M can be transformed into a dd-matrix by the action of (3) on the B-system only. Therefore the three diagonal blocks M_ii of M must commute with each other. In particular the commutator S:=M_11M_22-M_22M_11=[s_ij] must vanish. Since the blocks M_ii are symmetric matrices, S is skew-symmetric. Let us write the matrix X_2∈(3) as a function of the three Euler angles X_2= coscos -sincoss̱i̱ṉ -cossin -sincosc̱o̱s̱ sinsinsincos +coscoss̱i̱ṉ -sinsin +coscosc̱o̱s̱ -cossinsins̱i̱ṉ sinc̱o̱s̱ cos, where ,∈[0,2π] and β∈[0,π]. 
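Before carrying out this computation symbolically, it can be reproduced numerically. The following Python sketch is only an illustration: it builds a Z-X-Z Euler parametrisation of a rotation (the assignment of the angles α, β, γ to the individual factors is one reading of the displayed matrix, whose symbols are partially illegible in this copy), hard-codes X_0=Y_0 and Z_0 as read off the flattened 9×9 display above, and returns the commutator S of the first two diagonal blocks of M=(X_2,I_3)·Z_0; all helper names are ours.

import numpy as np

def euler_so3(alpha, beta, gamma):
    # Z-X-Z Euler parametrisation of SO(3); assumed convention for X_2.
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz_a = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Rx_b = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])
    Rz_g = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz_a @ Rx_b @ Rz_g

# X_0 = Y_0 and Z_0 as read off the flattened displays (rows of nine entries).
X0 = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Z0 = np.array([
    [0, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0, 1],
    [0, 0,-1, 1, 0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0,-1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0,-2, 0],
    [0, 1, 0, 0, 0, 0, 0, 0, 2]], dtype=float)

def commutator_first_two_blocks(X2, Z=Z0):
    # S = M_11 M_22 - M_22 M_11 for M = (X2 (x) I_3) Z (X2 (x) I_3)^T.
    G = np.kron(X2, np.eye(3))
    M = G @ Z @ G.T
    M11, M22 = M[0:3, 0:3], M[3:6, 3:6]
    return M11 @ M22 - M22 @ M11

# Sanity checks: Z0 is symmetric, all nine 3x3 blocks are symmetric
# (i.e. Z0 is invariant under partial transposition on system B), and the
# three diagonal blocks sum to zero.
assert np.allclose(Z0, Z0.T)
assert all(np.allclose(Z0[3*i:3*i+3, 3*j:3*j+3], Z0[3*i:3*i+3, 3*j:3*j+3].T)
           for i in range(3) for j in range(3))
assert np.allclose(sum(Z0[3*i:3*i+3, 3*i:3*i+3] for i in range(3)), 0)

# For the monomial rotation X0 the commutator vanishes ...
print(np.linalg.norm(commutator_first_two_blocks(X0)))
# ... while a generic Euler triple is expected to give a non-zero S.
print(np.linalg.norm(commutator_first_two_blocks(euler_so3(0.3, 0.7, 1.1))))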
Then we compute the matrix M and the commutator S. One can easily verify that the solutions (,,̱) of this system in which ∈̱{0,π/2,π} give all 24 matrices X_2 in S_4, and nothing else. Thus we may assume that sinc̱o̱s̱0. The three equations s_12=0, s_13=0, s_23=0 can be written in the following form: 2cos 2cosc̱o̱s̱ (3-2cos^2) +sin 2sin (2cos^2c̱o̱s̱^2 -5cos^2+̱2cos^2 +1)=0, 2cos 2cos 2s̱i̱ṉ2 +sin 2cos(̱6cos^2 c̱o̱s̱^2 -5cos^2 -̱2cos^2 +3)=0, 2cos 2coss̱i̱ṉ (1+cos^2) +sin 2cos (cos^2c̱o̱s̱^2 +3cos^2+̱cos^2 -2)=0. We have omitted the factor sin$̱ from (<ref>) and (<ref>). Ifsin=0then it is easy to see that our claim follows from the above equations. Hence we assume from now on thatsin0. Since each of the three equations above is linear and homogeneous insin 2andcos 2, it follows that the matrix cosc̱o̱s̱ (3-2cos^2) sin(2cos^2c̱o̱s̱^2-5cos^2+̱2cos^2+1) cos 2s̱i̱ṉ2 cos(̱6cos^2c̱o̱s̱^2-5cos^2-̱2cos^2+3) coss̱i̱ṉ(1+cos^2) cos(cos^2c̱o̱s̱^2+3cos^2+̱cos^2-2) must have rank 1. By equating to 0 the minor not containing the middle row, and recalling thatcos0, we obtain the formula cos^2=̱1+2sin^2 2/4+sin^2+2sin^2 2. Finally, by equating to 0 the minor not containing the first row (and using this formula) we obtain the equation 2+3cos^2+10cos^4-14cos^6=0. As the LHS is equal to1+sin^2+sin^2 2+14sin^2cos^4, this equation has no solutions and our claim is proved, i.e.X_1∈ S_4. The proof thatY_1∈ S_4is similar and is omitted. Note that in any solution(X_1,Y_1,Z_1)of (<ref>) the matricesX_1andY_1determineZ_1uniquely. Consequentlyf^-1(Q)is a single-orbit. This implies that there is only one point of the manifoldwhich satisfies the equationϕ^#(Γ∙ (X,Y,[Z]))=[(X_0,Y_0)· Z_0]. Hence the point[(X_0,Y_0)· Z_0]is a regular value ofϕ^#and thus the mapϕ^#must be onto. Consequentlyϕis also onto. Using the bipartite basis in (<ref>), one can further construct a three-dimensional subspace of then-partite space^3⊗^d_2⊗...⊗^d_n, whose one-way locally distinguishable basis is unique (ignoring the phase factors). The subspace is spanned by |Ψ_1⟩=|1⟩⊗ (⊗^n_j=2|r_j1⟩)+|2⟩⊗ (⊗^n_j=2|r_j4⟩)+|3⟩⊗ (⊗^n_j=2|r_j7⟩), |Ψ_2⟩=|1⟩⊗ (⊗^n_j=2|r_j2⟩)+|2⟩⊗ (⊗^n_j=2|r_j5⟩)+|3⟩⊗ (⊗^n_j=2|r_j8⟩), |Ψ_3⟩=|1⟩⊗ (⊗^n_j=2|r_j3⟩)+|2⟩⊗ (⊗^n_j=2|r_j6⟩)+|3⟩⊗ (⊗^n_j=2|r_j9⟩), wherer_2k=r_kfor everykin (<ref>). Further, every state|r_jk⟩∈^d_jforj>2is normalized. ]
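As a quick check of the final algebraic step in the Appendix C proof, the rewriting quoted there for the left-hand side of the sextic equation can be verified symbolically. Writing all trigonometric arguments as a single angle α is an assumption on our part, since the argument symbols are not fully legible in this copy of the proof:

import sympy as sp

a = sp.symbols('alpha', real=True)
c, s = sp.cos(a), sp.sin(a)

# Sextic from the last step of the proof (single angle alpha assumed).
lhs = 2 + 3*c**2 + 10*c**4 - 14*c**6
# Rewriting quoted in the text.
rhs = 1 + s**2 + sp.sin(2*a)**2 + 14*s**2*c**4

print(sp.simplify(lhs - rhs))   # prints 0, so the identity holds

Since every term on the right-hand side is non-negative and the constant 1 is present, the left-hand side is at least 1, so the equation indeed has no real solutions, as claimed.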
http://arxiv.org/abs/2307.01365v1
20230703214136
First results of the Laser-Interferometric Detector for Axions (LIDA)
[ "Joscha Heinze", "Alex Gill", "Artemiy Dmitriev", "Jiri Smetana", "Tiangliang Yan", "Vincent Boyer", "Denis Martynov", "Matthew Evans" ]
astro-ph.CO
[ "astro-ph.CO" ]
APS/123-QED j.heinze@bham.ac.uk University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. LIGO, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. University of Birmingham, School of Physics and Astronomy, Birmingham B15 2TT, United Kingdom. We present the operating principle and the first observing run of a novel kind of direct detector for axions and axion-like particles in the galactic halo. Our experiment is sensitive to the polarisation rotation of linearly polarised laser light induced by an axion field, and the first detector of its kind collecting science data. We discuss our current peak sensitivity of 1.44e-10GeV^-1 (95% confidence level) to the axion-photon coupling strength in the axion mass range of 1.97-2.01 neV which is, for instance, motivated by supersymmetric grand-unified theories. We also report on effects that arise in our high-finesse in-vacuum cavity at unprecedented optical continuous-wave intensity. Our detector already belongs to the most sensitive direct searches within its measurement band, and our first results pave the way towards surpassing the current sensitivity limits in the mass range from e-8eV down to e-16eV via quantum-enhanced laser interferometry. First results of the Laser-Interferometric Detector for Axions (LIDA) Denis Martynov August 1, 2023 ===================================================================== The existence of axions and axion-like particles (ALPs) is well-motivated in a variety of theoretical models. The axion was first introduced in 1977 as a promising candidate to resolve the strong charge-parity problem in quantum chromodynamics <cit.>. Here, it appears as a field-like Nambu-Goldstone boson in a spontaneously broken Peccei-Quinn symmetry and relaxes to a value which allows the electric dipole moment of the neutron to vanish. After this first proposal, axions as well as ALPs proved to arise generically from many extensions of the Standard Model, e.g. from string theory and supergravity <cit.>. Finally, they have also become a leading candidate for dark matter <cit.>. This is due to the aforementioned theoretical support, evidence from astronomical observations like gravitational lensing <cit.>, and since other dark matter candidates like weakly interacting massive particles have not been detected in a variety of attempts <cit.>. In light of the growing significance, various experimental approaches have been proposed, or already employed, to directly measure a signature of axions and ALPs, e.g. axion haloscopes (MADMAX <cit.> and DMRadio <cit.>), axion helioscopes (CAST <cit.> and IAXO <cit.>), “light shining though a wall” experiments (ALPS <cit.> and CROWS <cit.>) and magnetometers (ABRACADABRA <cit.>). However, no signature has been found yet which makes it essential to further diversify the search. In this Letter, we present LIDA, a laser-interferometric detector for axions based on Ref. <cit.> and related to the studies in Refs. <cit.>. 
LIDA uses the coupling of axions to photons, though not their conversion as in several other experiments, and represents a fairly new kind of detector which has not yet contributed to the axion science data. We will first reiterate the operating principle and design, and then discuss its performance in the first observing run. If dark matter is made of axions with mass m_a, it behaves like a coherent, classical field <cit.> a(t) = a_0 sin[Ω_a t + δ(t)] with angular frequency Ω_a = 2π f_a = m_ac^2/ħ, field amplitude a_0^2 = 2ρ_DMħ^2/m_a^2, the local density of dark matter ρ_ DM, and the phase of the field δ (t). The interaction Lagrangian for the axion-photon coupling reads <cit.> ℒ_aγ=-g_aγ/4aF^μνF_μν , where a is the axion field, F is the electro-magnetic field-strength tensor and g_aγ is the coupling coefficient. This coupling leads to a phase difference <cit.> Δϕ(t,τ)=g_aγ[a(t)-a(t-τ)] which accumulates between left- and right-handed circularly polarised light over a time period of τ. Equivalently, the polarisation axis of linearly polarised light is periodically rotated; this rotation is measurable with our detector. Hence, LIDA utilises a laser beam at optical angular frequency ω_pmp which is linearly polarised along the vertical axis (S-polarisation) as a pump field. As shown in Figure <ref>, this pump field is kept on resonance with a high-finesse cavity to amplify its optical power. If an axion field periodically rotates the polarisation axis of the circulating intra-cavity pump field, it excites two coherent light fields (sidebands) in the orthogonal P-polarisation at frequencies ω_pmp±Ω_a (signal field). These sidebands build up inside the cavity according to <cit.> E_sig,cav(±Ω_a)=-E_pmp,cavexp(iβ∓Ω_aτ/2+δ)/1-√(1-2T_sig-l_rt)exp[i(β∓Ω_aτ)] × g_aγτ/4sinc(Ω_aτ/4)cos(2β∓Ω_aτ/4)√(2τ_aρ_DM) where we assume the “rotating frame” by setting ω_pmp=0. E_pmp,cav is the circulating pump field, β is an extra cavity roundtrip phase which the signal field accumulates relative to the pump field, τ is the cavity roundtrip time, T_sig is the power transmissivity of the cavity input and output couplers for the signal field polarisation, l_rt is the cavity roundtrip power loss and τ_a is the coherence time of the axion field. β results from the cumulative effect of the four cavity mirrors and their coatings and leads to a non-degeneracy of the cavity's S- and P-eigenmodes (detuning). Hence, each sideband in the signal field is only resonantly enhanced if ±Ω_a is sufficiently close to the detuning frequency. In transmission of the cavity, we separate the signal field from the pump field via a polarising beamsplitter. In addition, a half-wave plate shifts a small constant fraction of the pump field into the signal polarisation to serve as a local oscillator E_LO=iξ√(T_pmp)E_pmp,cav, where ξ is twice the rotation angle of the half-wave plate and T_pmp is the power transmissivity of the cavity output coupler for the pump field polarisation. Finally, a photodetector measures the signal as the beat note between the local oscillator and the sidebands, yielding the following amplitude spectral density <cit.>: P_out(Ω_a)=l_outE_LO√(T_sig)[E^*_sig,cav(-Ω_a)-E_sig,cav(Ω_a)] with the optical loss in the readout beam path l_out. This signal yields a signal-to-noise ratio SNR of SNR^2=|P_out(Ω_a)/P_N(Ω_a)|^2√(T_meas/τ_a) , where P_N is the amplitude spectral density of the total noise and T_meas is the total measurement time. 
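For illustration, the build-up and readout expressions above can be evaluated directly. The sketch below is not a calibrated model: the grouping of the phase terms in the exponent follows one reading of the displayed equation, unit conversions are ignored, and all magnitudes and variable names are illustrative assumptions (the round-trip time corresponds to an assumed 10 m perimeter).

import numpy as np

def sinc(x):
    return np.sinc(x / np.pi)          # sin(x)/x; note np.sinc is sin(pi x)/(pi x)

def sideband(sign, Om, E_pmp, beta, tau, T_sig, l_rt, g, tau_a, rho, delta=0.0):
    # E_sig,cav(+/- Omega_a) following the cavity build-up formula above.
    phase = beta - sign * Om * tau / 2 + delta
    denom = 1 - np.sqrt(1 - 2 * T_sig - l_rt) * np.exp(1j * (beta - sign * Om * tau))
    amp = (g * tau / 4) * sinc(Om * tau / 4) \
          * np.cos(2 * beta - sign * Om * tau / 4) * np.sqrt(2 * tau_a * rho)
    return -E_pmp * np.exp(1j * phase) / denom * amp

def readout(Om, E_pmp, xi, T_pmp, T_sig, l_out, **cav):
    # Beat note between the local oscillator and the two signal sidebands.
    E_LO = 1j * xi * np.sqrt(T_pmp) * E_pmp
    Ep = sideband(+1, Om, E_pmp, T_sig=T_sig, **cav)
    Em = sideband(-1, Om, E_pmp, T_sig=T_sig, **cav)
    return l_out * E_LO * np.sqrt(T_sig) * (np.conj(Em) - Ep)

# Illustrative numbers only: detuning beta chosen so that the upper sideband
# resonates near 480 kHz for a 33.4 ns round-trip time.
cav = dict(beta=2 * np.pi * 480e3 * 33.4e-9, tau=33.4e-9,
           l_rt=51e-6, g=1e-10, tau_a=1.0, rho=1.0, delta=0.0)
f = np.linspace(470e3, 500e3, 7)
print(np.abs([readout(2 * np.pi * fi, E_pmp=1.0, xi=1e-3, T_pmp=17e-6,
                      T_sig=1.3e-3, l_out=0.05, **cav) for fi in f]))

The printed magnitudes peak close to the assumed detuning frequency, which reflects the single-sideband enhancement discussed later in the text.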
We now discuss the specifics of our setup as shown in Figure <ref> and the parameters achieved for the first observing run. The main laser source is a 300mW non-planar ring laser (NPRO) which continuously emits linearly polarised light in the fundamental transverse mode at a wavelength of 1064nm. An electro-optic modulator (EOM) modulates the phase of the light field at a frequency of 5MHz. This enables the stabilisation of the laser frequency to the resonances of the in-vacuum cavity via the Pound-Drever-Hall scheme <cit.> using the signal from the photodetector PD_PDH in reflection of the cavity. The optical power injected into the cavity can be enhanced to about 18W by a neoLASE solid-state laser amplifier, and a quarter- and a half-wave plate tune the pump polarisation. For the first observing run, we injected 12W into the cavity in the S-polarisation. The rectangular in-vacuum cavity measures about 4.9m×10cm. The input and output couplers are nominally identical, with power transmissivities measured at an angle of incidence of 45° of T_sig=0.13% and T_pmp=17ppm in the P- and S-polarisation, respectively. We inferred the respective pole frequencies from the cavity's transfer function for power modulations between the input and transmission to be f_p,P=6.76kHz and f_p,S=202Hz. This yields finesses of ℱ_P=2220 and ℱ_S=74220 as well as an intra-cavity roundtrip loss of l_rt=51ppm. The other two cavity mirrors are highly reflective; the one on the readout side has a radius of curvature of 10.2m, setting the beam waist of the cavity eigenmodes to about 1.1mm and 1.5mm on the horizontal and vertical axis, respectively. Using an ellipsometer, we measured small phase shifts of 20mrad between the P- and S-polarisation upon reflection off the cavity mirrors around an angle of incidence of 45°, and the current detuning between the cavity's P- and S-eigenmodes is around 480kHz, which may slowly drift by a few kilohertz over time. This detuning corresponds to a sensitivity peak at an axion mass of about 2neV, which is within the range motivated, e.g., by grand unified theories <cit.>. The detuning may be controlled by an auxiliary cavity in the future <cit.>. For the first observing run, the pump field diagnostics consisted only of an attenuation stage and a photodetector to track the transmitted (and thus circulating) power. For future observing runs, they may also serve as a sensor for a power stabilisation of the pump field. The signal field was split by a 50:50 beamsplitter and measured by two photodetectors PD_out. The two PD signals were high-passed and summed, the sum was demodulated at 475.4kHz, and, after amplification by a factor of 50, the demodulated and amplified output signal was logged with a sampling rate of 65.5kHz. The current optical loss in the readout path amounts to l_out=5%, mainly due to the two beamsplitters. From the optical power in transmission of the cavity, we inferred an average and maximum circulating intra-cavity pump power of 118kW and 124kW, respectively. The latter corresponds to an optical intensity of 4.7MW/cm^2 at the waist position. To our knowledge, this level of intensity has not been reached before in any optical continuous-wave experiment. Our current signal readout path allows for a measurement frequency band of 475.4-505.1kHz. Within this band, we were limited by electronic dark noise, quantum shot noise and technical laser noise (see Fig. <ref>).
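These cavity parameters can be cross-checked with the standard relations finesse = FSR/(2 f_pole) and finesse ≈ 2π/(total round-trip loss). The round-trip length of 10 m assumed below (twice the sum of the 4.9 m and 0.1 m sides) is our estimate, since the exact geometry is not restated here:

from math import pi

c = 299792458.0
L_rt = 2 * (4.9 + 0.1)        # assumed round-trip length in m
FSR = c / L_rt                # free spectral range, roughly 30 MHz
l_rt = 51e-6                  # quoted round-trip loss

for pol, f_pole, T in [("P", 6.76e3, 1.3e-3), ("S", 202.0, 17e-6)]:
    finesse_from_pole = FSR / (2 * f_pole)        # resonance FWHM = 2 * pole
    finesse_from_loss = 2 * pi / (2 * T + l_rt)   # two couplers plus loss
    print(pol, round(finesse_from_pole), round(finesse_from_loss))

Both estimates agree with the quoted finesses of 2220 and 74220 to within a few per cent, which is consistent with the way the round-trip loss was inferred.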
Shot noise is caused by vacuum fluctuations in the signal polarisation that co-propagate with the input pump field, are transmitted through the cavity and reach the readout. For future observing runs, we will add a squeezed light source to the input optics to mitigate the readout shot noise similarly to the gravitational-wave detectors Advanced LIGO <cit.>, Advanced Virgo <cit.> and GEO600 <cit.>. The technical laser noise can couple to the signal readout if the input polarisation is not perfectly tuned. In this case, a small fraction of the field that is injected into the cavity is in the signal polarisation and its technical noise is transmitted through the cavity at the detuning frequency. Hence, we had to carefully adjust the tuning of the input waveplates. Coherence measurements with the input intensity noise suggest that laser frequency noise dominates this technical noise coupling channel. For future observing runs, an additional cavity in the input beam path should be able to suppress technical laser intensity and frequency noise and significantly reduce its coupling to the readout. Assuming a signal-to-noise ratio of 2 in Eq. <ref>, corresponding to a confidence level of 95%, Figure <ref> shows that we reached a maximum sensitivity of g_aγ=1.44e-10GeV^-1 at 1.975neV, or 477.5kHz, in a measurement time of T_meas=85h, which is only a factor of 2.2 above the constraints set by the CAST helioscope <cit.>. The average sensitivity in the range of 1.97-2.01 neV was 2.1e-10GeV^-1. Our sensitivity is derived by averaging the amplitude spectral density of the readout signal over the total measurement time, subtracting the noise floor and calibrating the result with our theoretical model from Eq. <ref> using experimentally determined parameters. We have not measured a significant evidence for axions or ALPs. We will now discuss a few challenging and not yet completely explained aspects of LIDA which may also become relevant to similar detectors in Tokyo <cit.> and at the MIT <cit.>, and to high-intensity and high-finesse experiments, in general. First, if the intra-cavity pump power is sufficiently high, our cavity can assume at least two stable states when the laser frequency is stabilised to the cavity's TEM_0,0 eigenmode (locked) as shown in Figure <ref>. Each state is characterised by its circulating power, readout noise pattern and transmitted light field. We obtain the state with the highest circulating power when we lock the detector manually to start an observing run. This state corresponds to the lowest (“intial”) readout noise as well as to the purest transmitted field. However, when the detector relocks automatically after an external disturbance, it typically decays into a state with less circulating power, higher (“post-relock”) readout noise with additional noise peaks and a transmitted field in which the TEM_0,0 mode is superimposed with varying higher-order Hermite-Gaussian modes. We have not yet identified the exact mechanism behind this effect, but it is likely to have a thermal origin and limits the effective measurement time. Second, if we minimise the power on PD_out via the half- and quarter-wave plate and the PBS, the residual light resembles the Hermite-Gaussian HG_1,0 mode, associated with horizontal misalignment, in the signal polarisation. The power ratio between the TEM_0,0 pump field that is transmitted through the cavity and this residual light is only about 600, while the mode-filtering effect of the cavity should result in a ratio of >e8. 
Hence, we rather expect residual reflections from anti-reflective coatings to be the cause; however, so far, we could not identify the actual origin. To keep this residual light sufficiently below the saturation limit of PD_out, we used an array of two readout photodetectors. Third, the pump field that is transmitted through the cavity shows a significant amount of light in the signal polarisation, i.e. it is elliptically polarised, if linearly polarised light in the S-polaristion is injected. In transmission of the polarising beamsplitter in the readout, we consistently measure contrasts of only 6570%. A theoretical model of the cavity shows that this observation can be explained by a slight non-planarity of the cavity geometry which would cause a coupling of the external S- and P-polarisation. The measured contrast only requires a misalignment at the cavity mirrors of about 1mrad which is within a reasonable range. Moreover, we measured that the viewports of our vacuum system convert linearly into elliptically polarised light dependent on the point of tranmission; in general, this effect seems to grow with increasing distance from the viewport centre. Most likely, the reduced contrast in transmission of the PBS arises due to a combination of both effects, and we compensate for it with an additional quarter-wave plate in transmission of the cavity. This waveplate changes the phase relation between the signal and pump field but, since the current cavity detuning of 480kHz is relatively large, only one of the signal sidebands is effectively enhanced and measured. Hence, we can measure the signal in an arbitrary quadrature. In the future, we will try to reduce the cavity non-planarity and might switch to an in-vacuum readout. In conclusion, we presented the results of the first 85h-long observing run of a laser-interferometric detector for axions and axion-like particles (ALPs) called LIDA. Our current peak sensitivity to the axion-photon coupling coefficient g_aγ is inside an axion mass range of 1.97-2.01 neV where we reached up to 1.44e-10GeV^-1 at a 95% confidence level. This is only a factor of 2.2 higher than the CAST limit and among the most sensitive direct axion searches. Besides the electronic dark noise, we were limited by quantum shot noise and technical laser noise which will be further reduced by the implementation of a squeezed light source and an input mode cleaner cavity, respectively. From these techniques and an increase in the measurement time to the order of months, we expect to improve our sensitivity by at least one order of magnitude at axion masses of e-9eV. This would allow us to reach into a yet unexplored region of the mass-coupling parameter space. Moreover, we expect to reach a sensitivity about two orders of magnitude higher if we reduce the frequency separation of the cavity's S- and P-eigenmodes and measure axion masses down to e-12eV or lower, where the axion field exhibits a larger coherence time. These results are a highly promising milestone for advancing direct axion and ALP searches by expanding them to the field of quantum-enhanced laser interferometry. They are furthermore a strong argument to ultimately set LIDA up as a kilometre-scale detector, as done in the gravitational-wave research, which would further boost the sensitivity by several orders of magnitude <cit.>. 
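The mass-frequency correspondence m_a c^2 = ħΩ_a = 2πħ f_a quoted earlier can be evaluated explicitly; the short sketch below reproduces the numbers used in this Letter (the value of ħ in eV s is the standard one, and the function names are ours):

from math import pi

hbar_eVs = 6.582119569e-16                      # hbar in eV s

def mass_eV(f_hz):                              # m_a c^2 = 2*pi*hbar*f_a
    return 2 * pi * f_hz * hbar_eVs

def freq_Hz(m_eV):
    return m_eV / (2 * pi * hbar_eVs)

print(mass_eV(477.5e3))                         # ~1.975e-9 eV, the quoted peak
print(freq_Hz(1.97e-9), freq_Hz(2.01e-9))       # ~476 kHz to ~486 kHz
print(mass_eV(475.4e3), mass_eV(505.1e3))       # readout band expressed as masses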
We acknowledge members of the UK Quantum Interferometry collaboration for useful discussions, the support of the Institute for Gravitational Wave Astronomy at the University of Birmingham and STFC Quantum Technology for Fundamental Physics scheme (Grant No. ST/T006609/1 and ST/W006375/1). D.M. is supported by the 2021 Philip Leverhulme Prize.
http://arxiv.org/abs/2307.02669v1
20230705215104
Pattern formation and bifurcation analysis of delay induced fractional-order epidemic spreading on networks
[ "Jiaying Zhou", "Yong Ye", "Alex Arenas", "Sergio Gómez", "Yi Zhao" ]
physics.soc-ph
[ "physics.soc-ph", "cond-mat.stat-mech", "cs.SI", "math.DS" ]
mymainaddress,mysecondaryaddress]Jiaying Zhou jyzhou0513@gmail.com mymainaddress,mysecondaryaddress]Yong Ye yong_ye1994@163.com mysecondaryaddress]Alex Arenascor1 alexandre.arenas@urv.cat mysecondaryaddress]Sergio Gómez sergio.gomez@urv.cat mymainaddress]Yi Zhaocor1 [cor1]Corresponding author zhao.yi@hit.edu.cn [mymainaddress]School of Science, Harbin Institute of Technology, Shenzhen, 518055 P. R. China [mysecondaryaddress]Departament d'Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain The spontaneous emergence of ordered structures, known as Turing patterns, in complex networks is a phenomenon that holds potential applications across diverse scientific fields, including biology, chemistry, and physics. Here, we present a novel delayed fractional-order susceptible-infected-recovered-susceptible (SIRS) reaction-diffusion model functioning on a network, which is typically used to simulate disease transmission but can also model rumor propagation in social contexts. Our theoretical analysis establishes the Turing instability resulting from delay, and we support our conclusions through numerical experiments. We identify the unique impacts of delay, average network degree, and diffusion rate on pattern formation. The primary outcomes of our study are: (i) Delays cause system instability, mainly evidenced by periodic temporal fluctuations; (ii) The average network degree produces periodic oscillatory states in uneven spatial distributions; (iii) The combined influence of diffusion rate and delay results in irregular oscillations in both time and space. However, we also find that fractional-order can suppress the formation of spatiotemporal patterns. These findings are crucial for comprehending the impact of network structure on the dynamics of fractional-order systems. Time-fractional orderDelaySpatiotemporal patternAverage degree [2010] 92D3092C42 § INTRODUCTION The study of reaction-diffusion system patterns has been a central focus in research for a long time. The inception of these studies dates back to 1952 when Turing demonstrated that the activator-to-inhibitor diffusion coefficient ratio could cause the destabilization of a steady state, leading to the emergence of periodic spatial patterns <cit.>. This phenomenon is now known as the Turing pattern. Turing patterns have been observed in various scenarios, such as autocatalytic chemical reactions with inhibition <cit.>, epidemic spreading <cit.>, and even ecology <cit.>. Othmer and Scriven, as early as 1971, highlighted that Turing instability might occur in networked systems and play a significant role in the initial stages of biological morphogenesis, as it spreads through the network connections between cells <cit.>. They proposed a general mathematical framework to analyze network instability and further investigated it <cit.>, leading to a series of related works <cit.>. For instance, in 2010, Nakao and Mikhailov studied Turing patterns in large random networks and observed multiple steady-state coexistences and hysteresis effects <cit.>. Especially during the spread of epidemics, the diffusion of pathogens (similar substances) from high-density spatial regions to low-density spatial regions has led to the development of recognizable spatial explicit models. The research results of pattern dynamics can reveal the distribution structure of populations after spatial diffusion. This enables people to effectively utilize and control population resources. 
Additionally, these findings provide the scientific basis for preventing and controlling infectious diseases <cit.>. Delays are a widespread phenomenon in natural environments. They can be observed in the gestation period of animals, or in disease transmission models, where delays arise from latent periods or healing cycles, leading to periodic disease outbreaks <cit.>. Subsequently, several studies proposed the use of fractional derivative equations to establish mathematical models for predicting COVID-19 <cit.>. For example, in 2020, Zhang et al. demonstrated that impacts of death and human activities on nonlocal memory could be captured through a time-fractional derivative equation, contributing to our understanding of COVID-19's death and remission rates <cit.>. In the same year, Xu and colleagues proposed an improved fractional order SEIQRP model. When tested with epidemic data from the United States, this model successfully predicted short-term epidemic trends. Their results showed that the model effectively characterized the process of disease transmission, providing a theoretical basis for understanding the epidemic <cit.>. Given the universality of delays, in 2019, Chang and his colleagues examined delay-induced Turing patterns using the modified Leslie-Gower model. They analyzed pattern formation in various networks <cit.>. The following year, they studied Turing patterns on multiplex networks with both self-diffusion and cross-diffusion. Their research resulted in the discovery of heterogeneous patterns exhibiting rich characteristics <cit.>. Since the life cycle incorporates memory, fractional calculus equations have been employed to study system dynamics, as integral-order equations cannot account for this inherent memory <cit.>. Therefore, in 2022, Zheng et al. explored Turing patterns of a fractional-order system on a random network based on the SIR model, discovering that delay and diffusion coefficients influence pattern generation <cit.>. However, they used a small random network built with a certain probability, which could not effectively reveal the impact of the network's average degree on pattern formation. Motivated by Nakao and Mikhailov's work <cit.>, we aim to conduct research based on Erdős–Rényi (ER) random networks to reflect the average network degree's influence on pattern formation, which has been well-established in integer-order systems. To the best of our knowledge, there are limited frameworks that study the effects of delay, diffusion coefficient, time-fractional order, and network average degree on Turing patterns in a delay time-fractional order system. Consequently, this paper plans to introduce factors like delay, diffusion coefficient, and network average degree based on a simple SIRS model, and further investigate whether the time-fractional order affects the uniform stationary state of space, considering diffusion terms in the three-component system as in <cit.>. The delay-induced time-fractional SIRS equations are formulated as follows: { D^q S_i(t)=Λ-β S_i(t-τ) I_i(t-τ)-μ S_i(t)+ν R_i(t)+d_1 ∑_j=1^N A_i j(S_j-S_i), D^q I_i(t)=β S_i(t-τ) I_i(t-τ)-(γ+μ+α) I_i(t)+d_2 ∑_j=1^N A_i j(I_j-I_i), D^q R_i(t)=γ I_i(t)-(μ+ν) R_i(t), S_i(0)=u_i(t), I_i(0)=v_i(t), R_i(0)=w_i(t), . where D^q is the Caputo derivative and q ∈ (0, 1] is the order of the differential operator. S_i, I_i and R_i represent the density of S (susceptible), I (infected) and R (recovered) in node i. Disease transmission (reaction term) occurs inside the node. 
Concurrently, the diffusive flux of the susceptible S or infected I to node i is the diffusion term, which is expressed as ∑_j=1^N A_i j(S_j-S_i) or ∑_j=1^N A_i j(I_j-I_i), where i,j∈{1,2,…,N}. Here, A_ij is one if nodes i and j are connected, zero otherwise, i.e., A is the adjacency matrix of the diffusion network. We suppose this network is undirected, thus A is symmetric. Note that, for simplicity, we have considered that there is no diffusion for recovered individuals. Other parameters carry the following biological significance: Λ indicates the birth rate of S, β is the transmission rate between susceptible and infected populations, μ represents the natural mortality rate of populations S, I, and R, ν is the ratio at which the recovered population returns to the susceptible compartments without acquiring immunity, γ denotes the recovery rate of infected individuals, α is the disease-related death rate, τ corresponds to the disease's latent period, and d_1 and d_2 represent the self-diffusion coefficients of susceptible and infected, respectively. The remainder of this paper is organized as follows. In Sec. <ref>, we present the basic definition and stability lemma for fractional differential equations. In Sec. <ref>, we theoretically prove the stability of the model without delay and subsequently analyze the Turing instability condition induced by delay. In Sec. <ref>, we conduct relevant numerical experiments to validate the theoretical findings from previous sections and examine the effects of network average degree, delay, diffusion coefficient, and fractional order on the spatiotemporal pattern. Finally, we discuss the results of our analysis and provide an outlook for future work in Sec. <ref>. § PRELIMINARIES The Caputo fractional derivative is widely used in engineering applications due to its convenience. Therefore, we provide the definition of the Caputo fractional derivative and some essential lemmas for analyzing the stability of fractional-order systems as follows: <cit.> The Caputo fractional-order derivative is defined as _t_0^CD_t^q f(t)=1/Γ(n-q)∫_t_0^tf^(n)(τ)/(t-τ)^q+1-ndτ, where q∈ (n-1,n) and Γ(·) is Gamma function. In particular, we have _t_0^CD_t^qf(t)=1/Γ(1-q)∫_t_0^tf'(τ)/(t-τ)^qdτ. when q∈ (0,1). For convenience, we denote _t_0^CD_t^q f(t) as D^q f(t). <cit.> Consider the fractional-order system D_t^q x(t)=f(t, x(t)) with initial condition x(t_0)=x_t_0, where q ∈(0,1]. The equilibrium points are locally asymptotically stable if all eigenvalues λ_i of the Jacobian matrix ∂ f(t, x)/∂ x calculated at them satisfy |(λ_i)|> qπ/2. <cit.> Consider the following n-dimensional linear fractional-order system with time delay {[ D^q_1𝐰_1(t)=ϖ_11𝐰_1(t-τ_11)+ϖ_12𝐰_2(t-τ_12)+⋯+ϖ_1 n𝐰_n(t-τ_1 n),; D^q_2𝐰_2(t)=ϖ_21𝐰_1(t-τ_21)+ϖ_22𝐰_2(t-τ_22)+⋯+ϖ_2 n𝐰_n(t-τ_2 n),; ⋮; D^q_n𝐰_n(t)=ϖ_n 1𝐰_1(t-τ_n 1)+ϖ_n 2𝐰_2(t-τ_n 2)+⋯+ϖ_n n𝐰_n(t-τ_n n), ]. where q_i ∈(0,1), i=1,2, …, n. In system  (<ref>), define the time delay matrix τ=(τ_i j) ∈(ℝ^+)^n × n, the coefficient matrix ϖ=(ϖ_i j)∈ℝ^n × n, and then state variables 𝐰_i(t), 𝐰_i(t-τ_i j) ∈ℝ. Define Δ(λ) =[[ λ^q_1-ϖ_11 e^-λτ_11 -ϖ_12 e^-λτ_12 ⋯ -ϖ_1 n e^-λτ_1 n; -ϖ_21 e^-λτ_21 λ^q_2-ϖ_22 e^-λτ_22 ⋯ -ϖ_2 n e^-λτ_2 n; ⋮ ⋮ ⋱ ⋮; -ϖ_n 1 e^-λτ_n 1 -ϖ_n 2 e^-λτ_n 2 ⋯ λ^q_n-ϖ_n n e^-λτ_n n ]] . Then the zero solution of system (<ref>) is Lyapunov globally asymptotically stable if all the roots of the characteristic equation det(Δ(λ))=0 have negative real parts. § RESULTS In this section, we primarily focus on the Turing instability of system (<ref>). 
Utilizing the Turing stability theory for delayed reaction-diffusion models in continuous media, it is crucial to ensure that the endemic equilibrium of system (<ref>) is locally stable in the absence of diffusion and delay. To achieve this, we first need to investigate the stability of endemic equilibrium in the corresponding ordinary differential model. §.§ Stability analysis of the dynamic without diffusion and delay The equilibrium of system (<ref>) can be derived as follows { Λ-β S_*I_*-μ S_*+ν R_*=0, β S_* I_*-(γ+μ+α) I_*=0, γ I_*-(μ+ν) R_*=0. . So we have the endemic equilibrium E^*=(S_*, I_*, R_*), where S_*=γ+μ+α/β, I_*=(μ+ν)[βΛ-μ(γ+μ+α)]/β(γ+μ+α)μ+β(μ+α)ν, R_*=βγΛ-μγ(γ+μ+α)/β(γ+μ+α)μ+β(μ+α)ν. In addition, system (<ref>) has the disease-free equilibrium E^0=(Λ/μ, 0, 0). We mainly study the situation that diseases appear in the initial state, so in this article we do not consider the disease-free equilibrium E^0. When β<β_c, the disease-free equilibrium E^ 0 of system (<ref>) is locally asymptotically stable for all τ⩾ 0. The characteristic matrix of system (<ref>) at the disease-free equilibrium E^0 is Δ(λ)=([ λ^q+μ βΛ e^-λτ/μ -ν; 0 λ^q+(γ+μ+α)- βΛ e^-λτ/μ 0; 0 -γ λ^q+(μ+ν) ]). The characteristic equation at E^0 is det(Δ(λ)) =(λ^q+μ)(λ^q+γ+μ+α- βΛ e^-λτ/μ)( λ^q+μ+ν)=0. When τ=0, let s=λ^q, Eq. (<ref>) can be rewritten as (s+μ)(s+γ+μ+α-βΛ/μ)(s+μ+ν)=0. Hence, the eigenvalues are s_1=-μ, s_2=Λ/μ(β-β_c) and s_3=-μ-ν, where β_c=μ(γ+μ+α)/Λ. Obviously, |(s_1,2,3)|>qπ/2 if β<β_c. It follows from Lemma <ref> that the disease-free equilibrium E^0 is locally asymptotically stable if β<β_c. When τ≠ 0, the eigenvalues in the first and last terms of Eq. (<ref>) are obviously negative. Therefore, we only need to analyze the second term of Eq. (<ref>), λ^q+γ+μ+α- βΛ e^-λτ/μ=0 . Substituting λ=i ω, (w>0), into Eq. (<ref>) we have (i ω)^q+γ+μ+α- βΛ e^-i ωτ/μ=0, which is equivalent to ω^q(cos(q π/2) +i sin( q π/2) ) +γ+μ+α- βΛ/μ(cos (ωτ)-i sin (ωτ))=0. Separating the real and imaginary parts of Eq. (<ref>), one obtains { ω^qcos(q π/2)+γ+μ+α= βΛ/μcos (ωτ), ω^q sin(q π/2)=- βΛ/μsin(ωτ). . By adding the squares of the left and right sides of the two equations in Eq. (<ref>), we get ω^2 q +2(γ+μ+α) cos(q π/2) ω^q +(γ+μ+α+βΛ/μ)(γ+μ+α-βΛ/μ)=0 . If β<β_c, Eq. (<ref>) has no positive roots. Thus, Eq. (<ref>) has no pure imaginary roots. Hence, one obtains |(ω_1,2,3^q)|>q π/2. According to Lemma <ref>, E^0 is locally asymptotically stable. As the diffusion term analysis in the subsequent section involves the stability of the endemic equilibrium E^* when τ≠0, here we will limit our discussion to the stability of the endemic equilibrium E^* when τ=0. The characteristic matrix of system (<ref>) at the endemic equilibrium E^* if τ=0 is Δ(λ)=([ λ^q+μ+β I_* β S_* -ν; -βI_* λ^q+(γ+μ+α)-βS_* 0; 0 -γ λ^q+(μ+ν) ]). The characteristic equation at E^* is det(Δ(λ))= (λ^q+μ+β I_*)(λ^q+γ+μ+α- β S_*)( λ^q+μ+ν) +β I_*[ β S_*(λ^q+μ+ν)-γν]. Setting s=λ^q, Eq. (<ref>) can be rewritten as s^3+As^2+Bs+C=0, where A=2μ+ν+β I_*, B=μ(μ+ν)+(2μ+α+ν+γ)β I_*, C=[(μ+α)(μ+ν)+γμ]β I_*. According to Routh-Hurwitz criterion, when A>0, B>0, C>0, AB>C, E^* is locally asymptotically stable. Obviously, A>0, B>0, C>0, AB>C, if I^*>0. That is to say, E^* is locally asymptotically stable if β>β_c. §.§ Turing instability induced by delay In Sec. <ref>, we have given the conditions for the stability of endemic disease equilibrium, and on this basis, this section plans to study the impact of delay on the stability of system (<ref>). 
Therefore, the linearization form of the fractional-order system (<ref>) with a delay at the endemic disease equilibrium E^* is rewritten as follows { D^q S_i(t)=-μ S_i(t)+ν R_i(t)-βI_*S_i(t-τ)-βS_*I_i(t-τ)+d_1 ∑_j=1^n L_i j S_j, D^q I_i(t)=-(γ+μ+α) I_i(t)+βI_*S_i(t-τ)+βS_*I_i(t-τ)+d_2 ∑_j=1^n L_i j I_j, D^q R_i(t)=γ I_i(t)-(μ+ν) R_i(t), . where L_ij are the components of the graph Laplacian L corresponding to the diffusion graph with components A_ij, i.e., L=K-A, where K is the diagonal matrix with the degrees of the nodes. By applying the Laplace transform to both sides of system (<ref>), we obtain the following: {λ^q X_i-λ^q-1 u_i(0)= -μ X_i+ν Z_i-β I_*e^-λτ(X_i+∫_-τ^0 e^-λ t u_i(t) d t) -β S_* e^-λτ(Y_i+∫_-τ^0 e^-λ t v_i(t) d t)+d_1 ∑_j=1^n L_i j X_j, λ^q Y_i-λ^q-1 v_i(0)= -(γ+μ+α) Y_i+βI_* e^-λτ(X_i+∫_-τ^0 e^-λ t u_i(t) d t) +βS_* e^-λτ(Y_i+∫_-τ^0 e^-λ t v_i(t) d t)+d_2 ∑_j=1^n L_i j Y_j, λ^q Z_i-λ^q-1 w_i(0)= γ Y_i -(μ+ν)Z_i, . where X_i, Y_i, Z_i is the Laplace transform of S_i, I_i, R_i, respectively. System (<ref>) can be reformulated in the following matrix form: (A-D L_1) X=b, where A=([ λ^q+μ+β I_*e^-λτ β S_* e^-λτ -ν; -βI_* e^-λτ λ^q+(γ+μ+α)-βS_* e^-λτ 0; 0 -γ λ^q+(μ+ν) ]) ⊗ E , D=([ d_1 0 0; 0 d_2 0; 0 0 0 ]) ⊗ E, X=(X_1, X_2, …, X_n, Y_1, Y_2, …, Y_n, Z_1, Z_2, …, Z_n)^T, b=(b_11, b_12, … b_1 n, b_21, b_22, … b_2 n, b_31, b_32, … b_3 n)^T, ([ b_1 i; b_2 i; b_3 i ])=([ λ^q-1 u_i(0)-β I_*e^-λτ∫_-τ^0 e^-λ t u_i(t) d t-β S_*e^-λτ∫_-τ^0 e^-λ t v_i(t) d t; λ^q-1 v_i(0)+β I_*e^-λτ∫_-τ^0 e^-λ t u_i(t) d t+β I_*e^-λτ∫_-τ^0 e^-λ t v_i(t) d t; λ^q-1 w_i(0) ]), L_1=([ L 0 0; 0 L 0; 0 0 L ]), matrix E is an n× n identity matrix and ⊗ is kronecker product. A-D L_1 represents the characteristic matrix of system (<ref>). Since the Laplacian matrix is a real symmetric matrix, it can be diagonalized. An orthonormal basis ϕ_i makes the following equation hold: L_1 ϕ=Λϕ , where Λ_i is the eigenvalue of L, ϕ=(ϕ_1, …, ϕ_n, ϕ_1, …, ϕ_n, ϕ_1, …, ϕ_n)^T is an invertible matrix, ϕ_i is the eigenvector of Λ_i, and Λ =([ Λ^(1) 0 0; 0 Λ^(1) 0; 0 0 Λ^(1) ]), Λ^(1) =([ Λ_1 0 … 0; 0 Λ_2 … 0; 0 … … 0; 0 … 0 Λ_n ]) . Supposing X=ϕ Y, system (<ref>) can be rewritten as A ϕ Y-D L_1 ϕ Y=b ⇒ A ϕ Y-D Λϕ Y=b ⇒(A-D Λ) X=b . Thus, system (<ref>) can be reduced to ([ λ^q+μ+β I_*e^-λτ-d_1 Λ_i β S_* e^-λτ -ν; -βI_* e^-λτ λ^q+(γ+μ+α)-βS_* e^-λτ-d_2 Λ_i 0; 0 -γ λ^q+(μ+ν) ])([ X_i; Y_i; Z_i ])=([ b_1 i; b_2 i; b_3 i ]) . It is widely recognized that initial values do not affect the stability of linear fractional differential systems. Assuming all initial values are zero, the stability of system (<ref>) can be determined by: ([ λ^q+μ+β I_*e^-λτ-d_1 Λ_i β S_* e^-λτ -ν; -βI_* e^-λτ λ^q+(γ+μ+α)-βS_* e^-λτ-d_2 Λ_i 0; 0 -γ λ^q+(μ+ν) ])([ X_i; Y_i; Z_i ])=([ 0; 0; 0 ]) . Therefore, the stability of system (<ref>) depends on the following characteristic equation |[ λ^q+μ+β I_*e^-λτ-d_1 Λ_i β S_* e^-λτ -ν; -βI_* e^-λτ λ^q+(γ+μ+α)-βS_* e^-λτ-d_2 Λ_i 0; 0 -γ λ^q+(μ+ν) ]|=0, namely, P_1(λ)+P_2(λ) e^-λτ=0, where P_1(λ)= λ^3 q+(γ+3 μ+α+ν) λ^2 q+[(γ+μ+α) μ +(μ+ν)(γ+2 μ+α)] λ^q -(d_1+d_2) Λ_i λ^2 q-[d_1(γ+μ+α)+μ d_2+(μ+ν)(d_1+d_2)] Λ_i λ^q +d_1 d_2 Λ_i^2 λ^q+(μ+ν)(γ+μ+α) μ -(μ+ν)[d_1(γ+μ+α)+μ d_2] Λ_i+d_1 d_2(μ+ν) Λ_i^2, P_2(λ)= β(I_*-S_*) λ^2 q +[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*] λ^q +(d_1 β S_*-d_2 β I_*) Λ_i λ^q +(μ+ν)(d_1 β S_*-d_2 β I_*) Λ_i +[(γ+μ+α) β I_*-μβ S_*](μ+ν)-β I_* γν. 
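Before treating the purely imaginary roots analytically, the characteristic function det Δ_i(λ) can also be examined numerically. The sketch below evaluates the displayed 3×3 matrix on the imaginary axis for a given delay τ and Laplacian eigenvalue Λ_i; the parameter values are those of the numerical section, the sign convention for Λ_i follows the displayed matrix, and the scan is only a rough illustration rather than the procedure used to obtain τ_c:

import numpy as np

p = dict(Lam=5.0, mu=0.035, nu=0.05, gam=0.2, alp=0.01, bet=0.006,
         d1=0.01, d2=0.08, q=0.95)

def endemic_equilibrium(p):
    Lam, mu, nu, gam, alp, bet = (p['Lam'], p['mu'], p['nu'],
                                  p['gam'], p['alp'], p['bet'])
    S = (gam + mu + alp) / bet
    I = (mu + nu) * (bet * Lam - mu * (gam + mu + alp)) \
        / (bet * (gam + mu + alp) * mu + bet * (mu + alp) * nu)
    return S, I, gam * I / (mu + nu)   # R follows from the third equation

def char_det(lam, tau, Lambda_i, p):
    # Determinant of the 3x3 characteristic matrix Delta_i(lambda) above.
    S, I, _ = endemic_equilibrium(p)
    q, bet, mu, nu, gam, alp, d1, d2 = (p['q'], p['bet'], p['mu'], p['nu'],
                                        p['gam'], p['alp'], p['d1'], p['d2'])
    e = np.exp(-lam * tau)
    D = np.array([
        [lam**q + mu + bet*I*e - d1*Lambda_i, bet*S*e,                          -nu],
        [-bet*I*e, lam**q + (gam + mu + alp) - bet*S*e - d2*Lambda_i,           0.0],
        [0.0,      -gam,                                   lam**q + (mu + nu)]],
        dtype=complex)
    return np.linalg.det(D)

# Scan |det Delta_i(i*omega)|; deep minima flag candidate purely imaginary
# roots, i.e. potential delay-induced instability thresholds.
tau, Lambda_i = 30.0, 0.0          # illustrative values only
w = np.linspace(1e-3, 1.0, 20000)
vals = np.abs([char_det(1j * wk, tau, Lambda_i, p) for wk in w])
print(w[np.argmin(vals)], vals.min())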
We substitute λ=i ω=ω(cos(π/2)+i sin(π/2))=ω e^i π/2 into system (<ref>), and have (A_1+i B_1)+(A_2+i B_2)(cos (ωτ)-i sin (ωτ))=0, where A_1= ω^3 qcos (3 / 2 q π)+(γ+3 μ+α+ν) ω^2 qcos (q π) +[(γ+μ+α) μ +(μ+ν)(γ+2 μ+α)] ω^ qcos (1 / 2 q π) -(d_1+d_2) Λ_i ω^2 qcos (q π)+d_1 d_2 Λ_i^2 ω^ qcos (1 / 2 q π) -[d_1(γ+μ+α)+μ d_2+(μ+ν)(d_1+d_2)] Λ_i ω^ qcos (1 / 2 q π) +(μ+ν)(γ+μ+α) μ-(μ+ν)[d_1(γ+μ+α)+μ d_2] Λ_i+d_1 d_2(μ+ν) Λ_i^2, B_1= ω^3 qsin (3 / 2 q π)+(γ+3 μ+α+ν) ω^2 qsin (q π) +[(γ+μ+α) μ +(μ+ν)(γ+2 μ+α)] ω^ qsin (1 / 2 q π) -(d_1+d_2) Λ_i ω^2 qsin (q π)+d_1 d_2 Λ_i^2 ω^ qsin (1 / 2 q π) -[d_1(γ+μ+α)+μ d_2+(μ+ν)(d_1+d_2)] Λ_i ω^ qsin (1 / 2 q π), A_2= β(I_*-S_*) ω^2 qcos (q π) +(d_1 β S_*-d_2 β I_*) Λ_i ω^ qcos (1 / 2 q π) +[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*] ω^ qcos (1 / 2 q π) +(μ+ν)(d_1 β S_*-d_2 β I_*) Λ_i +[(γ+μ+α) β I_*-μβ S_*](μ+ν)-β I_* γν, B_2= β(I_*-S_*) ω^2 qsin (q π)+(d_1 β S_*-d_2 β I_*) Λ_i ω^ qsin (1 / 2 q π) +[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*] ω^ qsin (1 / 2 q π). Separating the real and imaginary parts of Eq. (<ref>), one obtains { A_2 cos (ωτ)+B_2 sin (ωτ)=-A_1, -A_2 sin (ωτ)+B_2 cos (ωτ)=-B_1, . then, { (A_2^2 + B_2^2) cos (ωτ) =-B_1B_2-A_1A_2, (A_2^2 + B_2^2) sin (ωτ) =B_1A_2-A_1B_2. . By adding the squares of the left and right sides of the two equations in Eq. (<ref>), we get (A_2^2 + B_2^2)^2=(B_1B_2+A_1A_2)^2+(B_1A_2-A_1B_2)^2, where ω can be solved from system (<ref>). The critical value of τ_c is τ_c=min _i, k{1/ω_karccos(-B_1B_2-A_1A_2/A_2^2+B_2^2)+2 π/ω_k}, where index i refers to the ith node, and ω_k represent all the positive roots of system (<ref>). Also, we have the transversality condition d λ/d τ=λ P_2(λ) e^-λτ/P_1^'(λ)+P_2^'(λ) e^-λτ-τ P_2(λ) e^-λτ=M/N, and Re[d λ/d τ]=M_1 N_1+M_2 N_2/N_1^2+N_2^2, where M_1= - β(I_*-S_*)ω^2q+1sin (π q) cos (ωτ)+β(I_*-S_*)ω^2q+1cos (π q) sin (ωτ) -[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ω^q+1sin (1 / 2π q) cos (ωτ) +[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ω^q+1cos (1 / 2π q) sin (ωτ) +{ (μ+ν)(d_1 β S_*-d_2 β I_*) Λ_i +[(γ+μ+α) β I_*-μβ S_*](μ+ν)-β I_* γν}ωsin (ωτ), M_2= β(I_*-S_*)ω^2q+1cos (π q) cos (ωτ)+β(I_*-S_*)ω^2q+1sin (π q) sin (ωτ) +[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ω^q+1cos (1 / 2π q) cos (ωτ) +{ (μ+ν)(d_1 β S_*-d_2 β I_*) Λ_i +[(γ+μ+α) β I_*-μβ S_*](μ+ν)-β I_* γν}ωcos (ωτ) +[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)+μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ω^q+1sin (1 / 2π q) sin (ωτ), N_1= 3 αω^3q-1sin (3/2π q)+2 α[(γ+3 μ+α+ν)-(d_1+d_2)Λ_i ]ω^2q-1sin (πα) + α{[(γ+μ+α) μ +(μ+ν)(γ+2 μ+α)]-[d_1(γ+μ+α)+μ d_2+(μ+ν)(d_1+d_2)] Λ_i +d_1 d_2 Λ_i^2 } ×ω^q-1sin (1/2π q)+2 αβ(I_*-S_*)ω^2q-1( sin (π q)cos(ωτ)-cos (π q)sin (ωτ)) + α[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ×ω^q-1( sin (1/2π q)cos(ωτ)-cos (1/2π q)sin (ωτ)) -τβ(I_*-S_*) ω^2q( cos (π q)cos(ωτ)+sin (π q)sin (ωτ)) -τ[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ×ω^q( cos (1/2π q)cos(ωτ)+sin (1/2π q)sin (ωτ)) +{τβ I_* γν-τ(μ+ν)(d_1 β S_*-d_2 β I_*) Λ_i-τ[(γ+μ+α) β I_*-μβ S_*](μ+ν)}cos(ωτ), N_2= -3 αω^3q-1cos (3/2π q)-2 α[(γ+3 μ+α+ν)-(d_1+d_2)Λ_i ]ω^2q-1cos (π q) - α{[(γ+μ+α) μ +(μ+ν)(γ+2 μ+α)]-[d_1(γ+μ+α)+μ d_2+(μ+ν)(d_1+d_2)] Λ_i +d_1 d_2 Λ_i^2 } ×ω^q-1cos (1/2π q)-2 αβ(I_*-S_*)ω^2q-1( cos(π q)cos (ωτ)+sin(π q)sin (ωτ)) - α[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ×ω^q-1(cos(1/2π q)cos (ωτ)+sin(1/2π q)sin (ωτ)) -τβ(I_*-S_*) ω^2q( sin(π q)cos (ωτ)-cos(π q)sin (ωτ)) -τ[(γ+μ+α) β I_*+β(I_*-S_*)(μ+ν)-μβ S_*+(d_1 β S_*-d_2 β I_*) Λ_i] ×ω^q( 
sin(1/2π q)cos (ωτ)-cos(1/2π q)sin (ωτ)) -{τβ I_* γν-τ(μ+ν)(d_1 β S_*-d_2 β I_*) Λ_i-τ[(γ+μ+α) β I_*-μβ S_*](μ+ν)}sin (ωτ). Furthermore, .M(ω i)|_τ=τ_c=M_1+iM_2, .N(ω i)|_τ=τ_c=N_1+iN_2, where M_1, M_2, N_1, N_2 are the real and imaginary parts of M(λ), N(λ). We suppose τ is the control parameter and through simple calculations, it can be concluded that Re[d λ/d τ]_τ=τ_c, ω=ω_c=M_1 N_1+M_2 N_2/N_1^2+N_2^2≠ 0, where ω_c is the corresponding frequency of τ_c. Thus, based on the above analysis and Hopf bifurcation theory, one has the following results. Turing instability induced by delay. * If Re[d λ/d τ]_τ=τ_c, ω=ω_c>0, Turing instability occurs in system (<ref>) when τ>τ_c. * If Re[d λ/d τ]_τ=τ_c, ω=ω_c<0, Turing instability occurs in system (<ref>) when τ<τ_c. § NUMERICAL ANALYSIS In this section, we aim to design several numerical experiments to validate the theoretical analysis. First, we calculate the stability conditions of the endemic equilibrium E^* without the diffusion term (i.e., β>μ(γ+μ+α)/Λ), meaning that we need to ensure the stability of system (<ref>) without delay, and then investigate the effect of delay on the system. Consequently, we set the parameter values as Λ=5, μ=0.035, ν=0.05, γ=0.2, α=0.01, β=0.006, d_1=0, d_2=0, and q=1. This ensures the stability of the non-delay and non-diffusion model (<ref>), as β>β_c=0.0017. Furthermore, based on this, we find that by suitably increasing the delay value, the model transitions from stability to instability, with the critical delay value being τ_c ≈ 23.06 (see Fig. <ref>). Moreover, when calculating and examining the induced instability conditions, we discover that the fractional order and diffusion terms considered by the model also play a crucial role. Thus, we also provide the corresponding fractional-order threshold of q=0.95 with non-diffusion and τ_c ≈ 33.32 (see Fig. <ref>). We observe that a larger delay is required to render the model (<ref>) unstable as the fractional order decreases (see Fig. <ref>). Next, we present three sets of experiments: (1) when q=0.95, we choose τ_1=30 and τ_2=40, satisfying τ_1<τ_c=33.32<τ_2. The time series plot and bifurcation diagram with τ as the bifurcation parameter are provided (see Fig. <ref>(a,b)). (2) When q=1, we select τ_1=20 and τ_2=30, satisfying τ_1<τ_c=23.06=τ_2. The time series plot and bifurcation diagram with τ as the bifurcation parameter are provided (see Fig. <ref>(c,d)). (3) When τ=30, we choose q_1=0.95 and q_2=1, satisfying q_1<0.965<q_2. The time series plot and bifurcation diagram with q as the bifurcation parameter is provided (see Fig. <ref>(e,f)). Similarly, we also demonstrate the corresponding relationship between the eigenvalue of the Laplace matrix Λ_i and the delay threshold τ_c in the cases of q=0.95 and q=1 for the fractional order when diffusion is considered (see Fig. <ref>). To examine the pattern generation of fractional-order systems on a network with N=100 nodes and explore the effects of delay, network topology, and diffusion coefficients on the pattern, we have designed three additional sets of experiments. We have considered: different delays, τ=20 in Fig. <ref>(a,b) and Fig. <ref>(a,b), and τ=40 in Figs. <ref>(c,d,e,f) and Fig. <ref>(c,d,e,f); different average degrees, ⟨ k⟩=5 in Fig. <ref>(a,b,d,e) and Fig. <ref>(a,b,d,e), and ⟨ k⟩=8 in Fig. <ref>(c,f) and Fig. <ref>(c,f); and different diffusion coefficients, d_1=0.01,d_2=0.02 in Fig. <ref>(a,c,d) and Fig. <ref>(a,c,d), and d_1=0.01,d_2=0.08 in Fig. <ref>(b,e,f) and Fig. <ref>(b,e,f). 
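For readers who wish to reproduce the network simulations, the sketch below integrates system (<ref>) with an explicit fractional (rectangle-rule) scheme and a constant history on [-τ,0]. The choice of scheme, step size, network size and initial perturbation are ours and need not coincide with those behind the figures; the model parameters are the ones listed above.

import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(1)

# Model parameters from the numerical section; N, h, T are illustrative.
N, k_mean = 50, 5
Lam, mu, nu, gam, alp, bet = 5.0, 0.035, 0.05, 0.2, 0.01, 0.006
d1, d2, q, tau = 0.01, 0.08, 0.95, 40.0
h, T = 0.5, 600.0

A = (rng.random((N, N)) < k_mean / (N - 1)).astype(float)
A = np.triu(A, 1); A = A + A.T                      # ER adjacency matrix
deg = A.sum(axis=1)

beta_c = mu * (gam + mu + alp) / Lam
S_star = (gam + mu + alp) / bet
I_star = (mu + nu) * (bet * Lam - mu * (gam + mu + alp)) \
         / (bet * (gam + mu + alp) * mu + bet * (mu + alp) * nu)
R_star = gam * I_star / (mu + nu)
print("beta_c =", beta_c, " (beta =", bet, ")")     # ~0.0017, as quoted

def rhs(S, I, R, Sd, Id):
    dS = Lam - bet * Sd * Id - mu * S + nu * R + d1 * (A @ S - deg * S)
    dI = bet * Sd * Id - (gam + mu + alp) * I + d2 * (A @ I - deg * I)
    dR = gam * I - (mu + nu) * R
    return np.concatenate([dS, dI, dR])

def split(x):
    return x[:N], x[N:2 * N], x[2 * N:]

n = int(T / h)
m = int(round(tau / h))                             # delay in steps
x0 = np.concatenate([S_star + 0.01 * rng.standard_normal(N),
                     I_star + 0.01 * rng.standard_normal(N),
                     R_star * np.ones(N)])
X = np.zeros((n + 1, 3 * N)); X[0] = x0
F = np.zeros((n + 1, 3 * N))                        # stored f evaluations

for k in range(n):
    S, I, R = split(X[k])
    Sd, Id, _ = split(X[k - m]) if k - m >= 0 else split(x0)   # constant history
    F[k] = rhs(S, I, R, Sd, Id)
    j = np.arange(k + 1)
    w = (k + 1 - j) ** q - (k - j) ** q             # fractional rectangle weights
    X[k + 1] = x0 + h ** q / Gamma(q + 1) * (w @ F[:k + 1])

print("final max/min of I_i:", X[-1, N:2 * N].max(), X[-1, N:2 * N].min(),
      " endemic value I* =", I_star)

Tracking the maximum and minimum of I_i over time, as in the figures, distinguishes a return to the uniform endemic state from temporal oscillations or a spatially split (high/low abundance) state.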
From the results obtained, we observe that spatial patterns emerge only for τ=40, d_1=0.01, d_2=0.08, and ⟨ k⟩=5, where the spatial distribution becomes unstable, see Fig. <ref>(e) and Fig. <ref>(e). However, this instability is irregular and differs from the classical Turing scenario of a destabilised spatially uniform state: the nodes split into two groups of high and low abundance. In particular, as the delay increases, the system first enters a periodic oscillation in time; at a certain moment this temporal oscillation interacts with the spatially non-uniform modes promoted by the diffusion coefficients, resulting in irregular, spatially non-uniform oscillations. It is worth noting that, when we exclude the interference caused by the delay-induced temporal oscillation and study only whether diffusion alone produces patterns, we do not observe pattern formation, see Fig. <ref>(a,b) and Fig. <ref>(a,b). We believe this is specific to the SIRS model studied here. Moreover, by comparing Fig. <ref>(e,f) and Fig. <ref>(e,f), we also find that the spatially non-uniform oscillation gradually disappears as the average degree of the network increases. The temporal periodic oscillation, by contrast, does not change with the network topology, see Fig. <ref>(c,d) and Fig. <ref>(c,d). We have conducted a comparative experiment to investigate the impact of the fractional order on pattern generation by running two groups of simulations, with q=1, see Fig. <ref>(a,b,c), and q=0.95, see Fig. <ref>(d,e,f), while keeping all other parameters the same. The results show that for q=1 the model generates spatiotemporal patterns, Fig. <ref>(b), whereas for q=0.95 the spatiotemporal patterns disappear, Fig. <ref>(e). The density of infected individuals I_i on the ER random network over time, together with the curves of the maximum, minimum, and average infected density across all nodes, is shown in Fig. <ref>(a,c,d,f) to support this observation. Note that the forward Euler method was used as the primary numerical scheme, with Δ t=1, T=20000, h=0.1, and the 2D simulation region (x,y) ∈Ω=[0,4] ×[0,4] under Neumann boundary conditions; in continuous media the Laplacian matrix is replaced by the Laplace operator Δ. Fig. <ref>(a,b,c) shows the patterns that appear due to the influence of the initial values, confirming that system (<ref>) also exhibits spatiotemporal patterns in continuous media. Finally, to visualise the main conclusions, we take the factors considered in this paper, namely the network average degree, the delay, and the fractional order, as independent variables and follow the evolution of the spatiotemporal patterns numerically, as shown in Fig. <ref> and Fig. <ref>. From Fig. <ref>(a) we learn that, as the average degree increases, the spatial distribution of the population gradually becomes uniform, i.e., the network average degree inhibits the formation of spatial patterns. Similarly, when we fix all other parameters and vary the delay, spatial patterns appear as the delay increases, see Fig. <ref>(b), in good agreement with the theoretical analysis. Owing to the particular nature of fractional-order systems, an interesting phenomenon occurs when we change the fractional order: as the order increases, the uniform spatial distribution is broken, resulting in spatiotemporal patterns.
In other words, a decrease in the fractional order will inhibit the generation of spatiotemporal patterns. see Fig. <ref>(c). In addition, we have also tested the evolution of spatiotemporal patterns with diffusion rate changes under two delays, τ=20 for Fig. <ref>(a), and τ=40 for Fig. <ref>(b). These results show that a single delay or diffusion rate effect does not lead to the generation of spatiotemporal patterns. § CONCLUSIONS Our results provide a new perspective for the research of delay-induced time-fractional order systems on networks. Among them, network topology, diffusion coefficient, and delay have essential effects on the excitation of the Turing pattern and the final differentiation of steady-state nodes. In the case of non-diffusion, the time-periodic oscillation phenomenon of the system is closely related to fractional order and delay. The reduction of fractional order promotes the stability of the system, while delay causes the instability of the system and leads to periodic oscillations. When we consider the diffusion term, with appropriate parameter values, the time-periodic oscillation phenomenon still exists and is not affected by the network topology. In particular, when the delay, diffusion coefficient, and average degree of the network are at appropriate values, an interesting phenomenon occurs, namely, irregular spatial non-uniform oscillation. Our explanation for this phenomenon is that the system first experiences a time-periodic oscillation state with the increase of delay, and then interacts with the non-uniform oscillation in space due to the effect of the diffusion coefficient at a particular time, resulting in the generation of irregular spatial non-uniform oscillation, i.e., spatiotemporal patterns. It should be noted that the reason why fractional order inhibits the generation of spatiotemporal patterns is explained by the fact that fractional order affects temporal periodic oscillations, leading to the disappearance of the original temporal and spatial interaction, which does not alter the original uniform spatial distribution. Considering that the only network type studied in this paper is the Erdős-Rényi, we still know little about what effect the higher-order structure of the network has on the Turing pattern of the fractional-order system. Although some recent works have studied the pattern formation of networks with higher-order structures, most of them are based on integer-order systems <cit.>. Therefore, in future research, we will study the pattern formation of the framework based on the fractional-order system and the higher-order structure of the network. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY No data was used for the research described in this article. § ACKNOWLEDGEMENTS J.Z., Y.Y. and Y.Z. acknowledge support from the Nature Science Foundation of Guangdong Province (2020A1515010812 and 2021A1515011594). J.Z. acknowledges support by the scholarship from the China Scholarship Council (202106120290). Y.Y. acknowledges support by the scholarship from the China Scholarship Council (202206120230). A.A. and S.G. acknowledge support from Spanish Ministerio de Ciencia e Innovación (PID2021-128005NB-C21), Generalitat de Catalunya (2021SGR-00633) and Universitat Rovira i Virgili (2022PFR-URV-56). A.A. also acknowledges support from ICREA Academia, and the James S. 
McDonnell Foundation (220020325). This work was carried out by Y.Y. and J.Z. during their tenure as visiting students at the Universitat Rovira i Virgili (URV). 10 url<#>1urlprefixURL href#1#2#2 #1#1 turing1952chemical A. Turing, The chemical basis of morphogenesis, Philosophical Transactions of the Royal Society of London Series B 237 (641) (1952) 37–72. http://dx.doi.org/https://doi.org/10.1098/rstb.1952.0012 doi:https://doi.org/10.1098/rstb.1952.0012. prigogine1968symmetry I. Prigogine, R. Lefever, Symmetry breaking instabilities in dissipative systems. ii, The Journal of Chemical Physics 48 (4) (1968) 1695–1700. http://dx.doi.org/https://doi.org/10.1063/1.1668896 doi:https://doi.org/10.1063/1.1668896. castets1990experimental V. Castets, E. Dulos, J. Boissonade, P. De Kepper, Experimental evidence of a sustained standing turing-type nonequilibrium chemical pattern, Physical Review Letters 64 (24) (1990) 2953. http://dx.doi.org/10.1103/PhysRevLett.64.2953 doi:10.1103/PhysRevLett.64.2953. ouyang1991transition Q. Ouyang, H. L. Swinney, Transition from a uniform state to hexagonal and striped turing patterns, Nature 352 (6336) (1991) 610–612. http://dx.doi.org/https://doi.org/10.1038/352610a0 doi:https://doi.org/10.1038/352610a0. asllanni22 M. Asllani, B. A. Siebert, A. Arenas, J. P. Gleeson, https://doi.org/10.1063/5.0060466Symmetry-breaking mechanism for the formation of cluster chimera patterns, Chaos: An Interdisciplinary Journal of Nonlinear Science 32 (1), 013107. http://dx.doi.org/10.1063/5.0060466 doi:10.1063/5.0060466. <https://doi.org/10.1063/5.0060466> sun2012pattern G.-Q. Sun, Pattern formation of an epidemic model with diffusion, Nonlinear Dynamics 69 (3) (2012) 1097–1104. http://dx.doi.org/https://doi.org/10.1007/s11071-012-0330-5 doi:https://doi.org/10.1007/s11071-012-0330-5. chang2020cross L. Chang, M. Duan, G. Sun, Z. Jin, Cross-diffusion-induced patterns in an SIR epidemic model on complex networks, Chaos: An Interdisciplinary Journal of Nonlinear Science 30 (1) (2020) 013147. http://dx.doi.org/https://doi.org/10.1063/1.5135069 doi:https://doi.org/10.1063/1.5135069. chang2022optimal L. Chang, S. Gao, Z. Wang, Optimal control of pattern formations for an SIR reaction–diffusion epidemic model, Journal of Theoretical Biology 536 (2022) 111003. http://dx.doi.org/https://doi.org/10.1016/j.jtbi.2022.111003 doi:https://doi.org/10.1016/j.jtbi.2022.111003. chang2022sparse L. Chang, W. Gong, Z. Jin, G.-Q. Sun, Sparse optimal control of pattern formations for an SIR reaction-diffusion epidemic model, SIAM Journal on Applied Mathematics 82 (5) (2022) 1764–1790. http://dx.doi.org/https://doi.org/10.1137/22M1472127 doi:https://doi.org/10.1137/22M1472127. zheng2022pattern Q. Zheng, V. Pandey, J. Shen, Y. Xu, L. Guan, Pattern dynamics in the epidemic model with diffusion network, Europhysics Letters 137 (4) (2022) 42002. http://dx.doi.org/10.1209/0295-5075/ac58bd doi:10.1209/0295-5075/ac58bd. zhou2022complex J. Zhou, Y. Zhao, Y. Ye, Complex dynamics and control strategies of seir heterogeneous network model with saturated treatment, Physica A: Statistical Mechanics and its Applications 608 (2022) 128287. http://dx.doi.org/https://doi.org/10.1016/j.physa.2022.128287 doi:https://doi.org/10.1016/j.physa.2022.128287. fernandes2012turing L. D. Fernandes, M. De Aguiar, Turing patterns and apparent competition in predator-prey food webs on networks, Physical Review E 86 (5) (2012) 056203. http://dx.doi.org/10.1103/PhysRevE.86.056203 doi:10.1103/PhysRevE.86.056203. zhang2014delay T. Zhang, H. 
Zang, Delay-induced turing instability in reaction-diffusion equations, Physical Review E 90 (5) (2014) 052908. http://dx.doi.org/10.1103/PhysRevE.90.052908 doi:10.1103/PhysRevE.90.052908. liu2019pattern H. Liu, Y. Ye, Y. Wei, W. Ma, M. Ma, K. Zhang, Pattern formation in a reaction-diffusion predator-prey model with weak allee effect and delay, Complexity 2019 (2019) 6282958. http://dx.doi.org/https://doi.org/10.1155/2019/6282958 doi:https://doi.org/10.1155/2019/6282958. othmer1971instability H. G. Othmer, L. Scriven, Instability and dynamic pattern in cellular networks, Journal of Theoretical Biology 32 (3) (1971) 507–537. http://dx.doi.org/https://doi.org/10.1016/0022-5193(71)90154-8 doi:https://doi.org/10.1016/0022-5193(71)90154-8. othmer1974non H. G. Othmer, L. Scriven, Non-linear aspects of dynamic pattern in cellular networks, Journal of Theoretical Biology 43 (1) (1974) 83–112. http://dx.doi.org/https://doi.org/10.1016/S0022-5193(74)80047-0 doi:https://doi.org/10.1016/S0022-5193(74)80047-0. horsthemke2004network W. Horsthemke, K. Lam, P. K. Moore, Network topology and turing instabilities in small arrays of diffusively coupled reactors, Physics Letters A 328 (6) (2004) 444–451. http://dx.doi.org/https://doi.org/10.1016/j.physleta.2004.06.044 doi:https://doi.org/10.1016/j.physleta.2004.06.044. moore2005localized P. K. Moore, W. Horsthemke, Localized patterns in homogeneous networks of diffusively coupled reactors, Physica D: Nonlinear Phenomena 206 (1-2) (2005) 121–144. http://dx.doi.org/https://doi.org/10.1016/j.physd.2005.05.002 doi:https://doi.org/10.1016/j.physd.2005.05.002. petit2017theory J. Petit, B. Lauwens, D. Fanelli, T. Carletti, Theory of turing patterns on time varying networks, Physical Review Letters 119 (14) (2017) 148301. http://dx.doi.org/10.1103/PhysRevLett.119.148301 doi:10.1103/PhysRevLett.119.148301. zheng2020turing Q. Zheng, J. Shen, Y. Xu, Turing instability in the reaction-diffusion network, Physical Review E 102 (6) (2020) 062215. http://dx.doi.org/10.1103/PhysRevE.102.062215 doi:10.1103/PhysRevE.102.062215. muolo2023turing R. Muolo, L. Gallo, V. Latora, M. Frasca, T. Carletti, Turing patterns in systems with high-order interactions, Chaos, Solitons & Fractals 166 (2023) 112912. http://dx.doi.org/https://doi.org/10.1016/j.chaos.2022.112912 doi:https://doi.org/10.1016/j.chaos.2022.112912. nakao2010turing H. Nakao, A. S. Mikhailov, Turing patterns in network-organized activator–inhibitor systems, Nature Physics 6 (7) (2010) 544–550. http://dx.doi.org/https://doi.org/10.1038/nphys1651 doi:https://doi.org/10.1038/nphys1651. sun2016pattern G.-Q. Sun, M. Jusup, Z. Jin, Y. Wang, Z. Wang, Pattern transitions in spatial epidemics: Mechanisms and emergent properties, Physics of life reviews 19 (2016) 43–73. http://dx.doi.org/10.1016/j.plrev.2016.08.002 doi:10.1016/j.plrev.2016.08.002. stancevic2013turing O. Stancevic, C. Angstmann, J. M. Murray, B. I. Henry, Turing patterns from dynamics of early hiv infection, Bulletin of mathematical biology 75 (2013) 774–795. http://dx.doi.org/10.1007/s11538-013-9834-5 doi:10.1007/s11538-013-9834-5. ye2021bifurcation Y. Ye, Y. Zhao, Bifurcation analysis of a delay-induced predator–prey model with allee effect and prey group defense, International Journal of Bifurcation and Chaos 31 (10) (2021) 2150158. http://dx.doi.org/https://doi.org/10.1142/S0218127421501583 doi:https://doi.org/10.1142/S0218127421501583. ye2022promotion Y. Ye, Y. Zhao, J. 
Zhou, Promotion of cooperation mechanism on the stability of delay-induced host-generalist parasitoid model, Chaos, Solitons & Fractals 165 (2022) 112882. http://dx.doi.org/https://doi.org/10.1016/j.chaos.2022.112882 doi:https://doi.org/10.1016/j.chaos.2022.112882. zhou2022bifurcation J. Zhou, Y. Zhao, Y. Ye, Y. Bao, Bifurcation analysis of a fractional-order simplicial SIRS system induced by double delays, International Journal of Bifurcation and Chaos 32 (05) (2022) 2250068. http://dx.doi.org/https://doi.org/10.1142/S0218127422500687 doi:https://doi.org/10.1142/S0218127422500687. lu2020fractional Z. Lu, Y. Yu, Y. Chen, G. Ren, C. Xu, S. Wang, Z. Yin, A fractional-order seihdr model for covid-19 with inter-city networked coupling effects, Nonlinear dynamics 101 (3) (2020) 1717–1730. http://dx.doi.org/10.1007/s11071-020-05848-4 doi:10.1007/s11071-020-05848-4. kilicman2018fractional A. Kilicman, et al., A fractional order sir epidemic model for dengue transmission, Chaos, Solitons & Fractals 114 (2018) 55–62. http://dx.doi.org/10.1016/j.chaos.2018.06.031 doi:10.1016/j.chaos.2018.06.031. higazy2020novel M. Higazy, Novel fractional order sidarthe mathematical model of covid-19 pandemic, Chaos, Solitons & Fractals 138 (2020) 110007. http://dx.doi.org/10.1016/j.chaos.2020.110007 doi:10.1016/j.chaos.2020.110007. zhang2020applicability Y. Zhang, X. Yu, H. Sun, G. R. Tick, W. Wei, B. Jin, Applicability of time fractional derivative models for simulating the dynamics and mitigation scenarios of covid-19, Chaos, Solitons & Fractals 138 (2020) 109959. http://dx.doi.org/10.1016/j.chaos.2020.109959 doi:10.1016/j.chaos.2020.109959. xu2020forecast C. Xu, Y. Yu, Y. Chen, Z. Lu, Forecast analysis of the epidemics trend of covid-19 in the usa by a generalized fractional-order seir model, Nonlinear dynamics 101 (3) (2020) 1621–1634. http://dx.doi.org/10.1007/s11071-020-05946-3 doi:10.1007/s11071-020-05946-3. chang2019delay L. Chang, C. Liu, G. Sun, Z. Wang, Z. Jin, Delay-induced patterns in a predator–prey model on complex networks with diffusion, New Journal of Physics 21 (7) (2019) 073035. http://dx.doi.org/10.1088/1367-2630/ab3078 doi:10.1088/1367-2630/ab3078. gao2020cross S. Gao, L. Chang, X. Wang, C. Liu, X. Li, Z. Wang, Cross-diffusion on multiplex networks, New Journal of Physics 22 (5) (2020) 053047. http://dx.doi.org/10.1088/1367-2630/ab825e doi:10.1088/1367-2630/ab825e. du2013measuring M. Du, Z. Wang, H. Hu, Measuring memory with the order of fractional derivative, Scientific Reports 3 (1) (2013) 1–3. http://dx.doi.org/https://doi.org/10.1038/srep03431 doi:https://doi.org/10.1038/srep03431. djilali2020turing S. Djilali, B. Ghanbari, S. Bentout, A. Mezouaghi, Turing-hopf bifurcation in a diffusive mussel-algae model with time-fractional-order derivative, Chaos, Solitons & Fractals 138 (2020) 109954. http://dx.doi.org/https://doi.org/10.1016/j.chaos.2020.109954 doi:https://doi.org/10.1016/j.chaos.2020.109954. zhang2022impact N. Zhang, Y. Kao, B. Xie, Impact of fear effect and prey refuge on a fractional order prey–predator system with beddington–deangelis functional response, Chaos: An Interdisciplinary Journal of Nonlinear Science 32 (4) (2022) 043125. http://dx.doi.org/https://doi.org/10.1063/5.0082733 doi:https://doi.org/10.1063/5.0082733. zheng2022turing Q. Zheng, J. Shen, Y. Zhao, L. Zhou, L. Guan, Turing instability in the fractional-order system with random network, International Journal of Modern Physics B 36 (32) (2022) 2250234. 
http://dx.doi.org/https://doi.org/10.1142/S0217979222502344 doi:https://doi.org/10.1142/S0217979222502344. kuznetsov2022robust M. Kuznetsov, Robust controlled formation of turing patterns in three-component systems, Physical Review E 105 (1) (2022) 014209. http://dx.doi.org/10.1103/PhysRevE.105.014209 doi:10.1103/PhysRevE.105.014209. podlubny1999199 I. Podlubny, Fractional differential equations, Academic Press, New York, 1999. matignon1996stability D. Matignon, Stability results for fractional differential equations with applications to control processing, in: Computational engineering in systems applications, Vol. 2, Lille, France, 1996, pp. 963–968. deng2007stability W. Deng, C. Li, J. Lü, Stability analysis of linear fractional differential system with multiple time delays, Nonlinear Dynamics 48 (2007) 409–416. http://dx.doi.org/https://doi.org/10.1007/s11071-006-9094-0 doi:https://doi.org/10.1007/s11071-006-9094-0. gao2023turing S. Gao, L. Chang, M. Perc, Z. Wang, Turing patterns in simplicial complexes, Physical Review E 107 (1) (2023) 014216. http://dx.doi.org/10.1103/PhysRevE.107.014216 doi:10.1103/PhysRevE.107.014216.
http://arxiv.org/abs/2307.01191v1
20230703175428
Variational integrals on Hessian spaces: partial regularity for critical points
[ "Arunima Bhattacharya", "Anna Skorobogatova" ]
math.AP
[ "math.AP", "math.DG" ]
Variational integrals on Hessian spaces]Variational integrals on Hessian spaces: partial regularity for critical points Department of Mathematics, Phillips Hall, the University of North Carolina at Chapel Hill, NC arunimab@unc.edu Department of Mathematics, Fine Hall Princeton University, Princeton, NJ as110@princeton.edu We develop regularity theory for critical points of variational integrals defined on Hessian spaces of functions on open, bounded subdomains of ℝ^n, under compactly supported variations. We show that for smooth convex functionals, a W^2,∞ critical point with bounded Hessian is smooth provided that its Hessian has a small bounded mean oscillation (BMO). We deduce that the interior singular set of a critical point has Hausdorff dimension at most n-p_0, for some p_0 ∈ (2,3). We state some applications of our results to variational problems in Lagrangian geometry. Finally, we use the Hamiltonian stationary equation to demonstrate the importance of our assumption on the a priori regularity of the critical point. [ Anna Skorobogatova August 1, 2023 ======================= § INTRODUCTION We consider variational integrals of the form ∫_ΩF(D^2u)dx where Ω is an open, bounded subset of ℝ^n. Any critical point of the above functional under compactly supported variations satisfies an Euler-Lagrange equation, which possesses a nonlinear, fourth order, double divergence structure, given by ∫_Ω F^ij(D^2u)η_ijdx=0, where η∈ C_0^∞(Ω) is an arbitrary test function. We assume the functional in <ref> to be uniformly convex and smooth, which results in the above coefficient F^ij to possess the following properties: * It is smooth in the matrix entries D^2u over a given convex region U⊂ S^n× n, where S^n× n denotes the space of all n× n symmetric matrices. * It satisfies the Legendre ellipticity condition: there exists a constant Λ>0 such that ∂ F^ij/∂ u_kl(ξ)σ _ijσ_kl≥Λ|σ| ^2,∀σ∈ S^n× n, ∀ ξ∈ U. Hessian-dependent functionals appear in various areas of mathematics and related disciplines. Examples include functionals arising in elasticity theory and the mechanics of solids <cit.>, as well as the Aviles-Giga functional <cit.> that models phenomena from blistering to liquid crystals. Other equations that enjoy the fourth order structure are bi-harmonic functions, in addition to suitable perturbations of this, such as the conformally invariant Paneitz operator introduced in <cit.> and the wider class of operators studied in <cit.>. Further, well-known examples are extremal Kähler metrics, the Willmore surface equation, which is closely linked to elastic mechanics, and its generalization coming from the Canham-Helfrich energy <cit.>. For second order equations in divergence form (and their nonlinear counterparts), the existence and regularity theory of solutions has been studied extensively and plays a significant role in geometric analysis, among many other fields. In comparison, for fourth order, the theory of double divergence form equations is largely unexplored but remains an important developing area of geometric analysis. Before presenting the main result of this paper, we first introduce the following notation and definition. Notations. Throughout this paper, U denotes a convex neighborhood in S^n× n. B_r(x) denotes an open ball of radius r centered at x in ^n. When the center is 0, we often suppress the dependency on the center and simply write B_r. 𝕊^n-1 denotes the (n-1)-dimensional unit sphere embedded in ^n. 
At the risk of abusing notation, the Euclidean norm of vectors, the Hilbert-Schmidt norm of matrices, and the n-dimensional volume of subsets of ^n will all be denoted by |· |. We use ⟨·,·⟩ to denote the Hilbert-Schmidt inner product between matrices. We will use (u)_r to denote the average 1/|B_r|∫_B_r u of a given function u∈ L^1(B_r). Constants will be denoted by C_0, C_1, C_2,… with dependencies given when first introduced, and subsequently suppressed. Also, note that we are implicitly assuming that n≥ 2. For two sets Ω, Ω' ⊂^n, we write Ω' ⋐Ω if Ω' is compactly contained in Ω. We use 1_E to denote the indicator function on a set E. We use the Einstein summation notation. We say that f:B_1 →^n× n has bounded mean oscillation (f∈ BMO(B_1)) with modulus ω > 0 in B_1 if for any ball B⊂ B_1 we have 1/|B|∫_B|f-(f)_B|≤ω where (f)_B denotes the average of f over the ball B. Note that via the John-Nirenberg inequality, condition (<ref>) implies that for any p ∈ (1,∞), 1/|B|∫_B|f-(f)_B|^p≤C̅ω for all balls B⊂ B_1, where C̅= C̅(n,p) > 0. Our main result is the following ε-regularity theorem: Suppose that u∈ W^2,∞(B_1) is a critical point of (<ref>) where F is smooth and uniformly convex or uniformly concave on U and D^2u(x) ∈ U for almost every x∈ B_1. There exists ω(Λ,n,D^2u_L^∞(B_1))>0 such that if D^2u∈ BMO(B_1) with modulus ω, then u is smooth in B_1/2. As a consequence, we obtain the following dimension estimate on the singular set for critical points of (<ref>). Suppose that u∈ W^2,∞(B_1) is a critical point of (<ref>) where F is smooth and uniformly convex or concave on U and D^2u(x) ∈ U for almost every x∈ B_1. Then there exists p_0 > 2 such that the following holds. Let Σ:={x∈ B_1 : lim inf_r→ 01/r^n∫_B_r(x)|D^2 u - (D^2 u)_B_r(x)|^p_0 > 0 }, where (D^2 u)_B_r(x) denotes the average of D^2u over B_r(x). Then u∈ C^∞(B_1∖Σ) and dim_H(Σ(u)) ≤ n-p_0. The gradient of any critical point of (<ref>) can be seen to solve a second order system, with the additional constraint that one seeks only solutions among conservative vector fields for such a system. Such a critical point is therefore significantly more restrictive than a general critical point for the second order system associated with its gradient (cf. Remark <ref>). In light of this, one may compare the conclusion of Corollary <ref> to <cit.>*Theorem 1.1, Theorem 8.1, where higher integrability is also exploited to get an improved dimension estimate on the singular set. Additionally, notice that in <cit.> the authors focus on local minimizers, while here we consider general critical points, which is possible by the additional structural assumption on the energy functional. Moreover, the dimension bound in <cit.>*Theorem 1.1 is weaker than that demonstrated here, since therein, one is not able to establish integrability for a full additional derivative of the minimizer u, but rather only a fractional derivative (cf. Theorem 4.2 therein). On the other hand, in <cit.>*Theorem 8.1, an analogous dimension estimate to (<ref>) is established for the complement of the set where the minimizer u is merely C^0,α.
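To make the smallness hypothesis of Theorem <ref> concrete, the following is a minimal numerical sketch (an illustration added here, not part of the original argument) of how the BMO modulus of a discretized Hessian field could be estimated on a grid; the sample potential u, the grid size, and the finite family of test balls are arbitrary choices made only for demonstration.

```python
import numpy as np

# Illustrative sketch: estimate the BMO modulus of a discretized Hessian
# over B_1 in R^2.  The potential u and the family of test balls below are
# placeholder choices, not data from the paper.
n = 201
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = 0.05 * np.sin(3 * X) * np.cos(2 * Y)            # smooth sample potential

# second-order central differences for the Hessian entries
u_xx = np.gradient(np.gradient(u, h, axis=0), h, axis=0)
u_yy = np.gradient(np.gradient(u, h, axis=1), h, axis=1)
u_xy = np.gradient(np.gradient(u, h, axis=0), h, axis=1)
hess = np.stack([u_xx, u_xy, u_xy, u_yy], axis=-1)  # flattened 2x2 Hessian

def mean_oscillation(center, radius):
    """Grid approximation of (1/|B|) * integral over B of |D^2u - (D^2u)_B|."""
    mask = (X - center[0]) ** 2 + (Y - center[1]) ** 2 < radius ** 2
    f = hess[mask]
    avg = f.mean(axis=0)                            # (D^2u)_B
    return np.linalg.norm(f - avg, axis=1).mean()

# crude estimate of the modulus: maximize over a finite family of balls in B_1
centers = [(cx, cy) for cx in np.linspace(-0.4, 0.4, 5)
                    for cy in np.linspace(-0.4, 0.4, 5)]
radii = [0.1, 0.2, 0.3]
omega = max(mean_oscillation(c, r) for c in centers for r in radii)
print(f"estimated BMO modulus of D^2u: {omega:.3e}")
```

In this discretized picture, the hypothesis of Theorem <ref> asks that such an estimate fall below the threshold ω(Λ,n,D^2u_L^∞(B_1)).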
Let us point out some important observations regarding our main results. It might be possible to drop the a priori regularity assumption on the critical point u of (<ref>) below W^2,∞(B_1) (e.g. to W^2,2), but in general, such a relaxation seems unlikely without additional assumptions on u. See Section <ref> for a more detailed discussion. The exponent p_0 in Corollary <ref> comes from the application of Proposition <ref>; it is the fourth order analog of the higher integrability exponent for solutions of second order divergence form elliptic PDEs. We do not expect the dimension estimate of Corollary <ref> to be sharp. Although critical points of (<ref>) can be seen to solve a second order system in their gradient, and solutions of second order elliptic systems may have singularities (cf. <cit.>*Section 9.1), we expect that the additional structure coming from the fourth order equation should give rise to improved regularity. We investigate this problem in forthcoming work <cit.>. In <cit.>, the first author together with Warren showed that if F in (<ref>) is a smooth convex function of the Hessian and can be expressed as a function of the square of the Hessian, then a C^2,α critical point (under compactly supported variations) of (<ref>) will be smooth. This was achieved by establishing regularity for a class of fourth order equations in the double divergence form, given by (<ref>). In <cit.>, Bhattacharya-Chen-Warren studied regularity for a certain class of fourth order equations in double divergence form that in turn led to proving smoothness for any C^1-regular Hamiltonian stationary Lagrangian submanifold in a symplectic manifold. More recently, in <cit.>, the first author considered variational integrals of the form (<ref>) where F is convex and smooth on the Hessian space and showed that a critical point u∈ W^2,∞ of such a functional under compactly supported variations is smooth if the Hessian of u has a small L^∞-oscillation. Theorem <ref> relaxes the a priori assumption of uniform smallness of oscillation of the Hessian to smallness in an L^p_0-mean sense, which in turn allows one to deduce the dimension estimate of Corollary <ref>. The BMO-smallness assumption on the Hessian can be compared with the smallness of the tilt-excess in Allard's regularity theorem <cit.>, which is a sufficient (and generally necessary) condition to establish local regularity of minimal surfaces. In this paper, the proof of the main result Theorem <ref> follows a similar strategy to the proof of the Evans-Krylov regularity estimate <cit.>, where the proof involves first establishing C^1,1 estimates, followed by C^2,α regularity: W^2,∞ leads to C^2,α estimates, from which smoothness follows from <cit.>. In <cit.>, the assumption on the smallness of the oscillation of the Hessian was key in proving smoothness, since it led to controlling the oscillation of the leading coefficients of the fourth order equation by the small oscillation modulus of the Hessian of u. This small modulus was used to bound the L^2 norm of the Hessian of the difference quotient of u by a factor of r^n-2+2α on a ball of radius r.
This in turn led to a uniform C^1,α bound on the difference quotient of u, thereby proving u∈ C^2,α which is sufficient to achieve smoothness (see <cit.>). Indeed, Hölder continuity of the Hessian leads to Hölder continuous coefficients, which leads to a self-improving solution. Then, using the convexity property of F, the regularity theory developed in <cit.> is applied to the critical point u to achieve its smoothness. Here, however, since our a priori assumption on the smallness of the oscillation of the Hessian of u is merely in a BMO sense, we are no longer able to control the L^∞-oscillation of the leading coefficients of our fourth order equation by the small oscillation modulus of the Hessian of u. As a consequence, after exploiting an integrability improvement (see Proposition <ref>), one of the main technical challenges that we face lies in controlling the ratio of the L^p norms of the Hessian on balls of varying sizes by the ratio of the radii of those balls. We get around this with a key technical tool in Lemma <ref>, which is an adaptation of <cit.>*Lemma 3.4 and allows one to suitably absorb error terms when getting a scaled estimate for the Hessian. With the help of this, we indeed achieve a delicate estimate for our ratio bound, which eventually leads to controlling the behavior of the leading coefficients. The dimension estimate of Corollary <ref> on the interior singular set follows as a simple consequence of Theorem <ref>, a standard covering procedure, and the higher integrability of u (see Proposition <ref>), since Σ(u) is defined precisely in terms of the failure of the smallness of the BMO-modulus for the Hessian of u. §.§ Applications. While we focus on our application to the fourth order Hamiltonian stationary equation in Section <ref>, let us mention a small subset of interesting geometric equations whose Euler-Lagrange equation is of the above fourth order form. Abreu's equation <cit.><cit.> and prescribed affine curvature equations <cit.> are derived from functionals of the form F(D^2u,Du,u)=∫_Ω( G( (u_ij)) -gu) dx (cf. <cit.> for progress on these equations). Interacting agent problems can be described in an optimal transport framework (cf. <cit.>) via F(D^2u,Du,u) =∫_Ωg̃( (u_ij)) dx+∫_ΩG(x) (u_ij)dx +∫_Ω∫_Ω W(x,y)(u_ij(y))(u_ij(x))dydx, where W is, for example, the integrand associated with the Wasserstein distance. Here, appropriate constraints are imposed on the functions G, g, g̃, G̃, and W, as well as on the domain Ω; we refer the reader to the references above for details. Differentiating these functionals will yield equations of the above fourth order form, which may satisfy the Legendre ellipticity condition in many cases. Therefore, our main result is applicable to a broad class of fourth order nonlinear elliptic equations that arise variationally in a variety of contexts. In <ref>, we demonstrate a specific application of our result to the Hamiltonian stationary equation (<ref>). Using standard techniques from the calculus of variations (namely, the Direct Method), one can establish the existence of a critical point of (<ref>) in the space 𝒜[g]={ u∈ W^2,2(B_1) : u=g on ∂ B_1 and Du=Dg on ∂ B_1 } for a given g ∈ W^3/2,2(∂ B_1). However, seeking critical points u of (<ref>) with general boundary values u=g on ∂ B_1, Du=f on ∂ B_1, for any given g∈ W^3/2,2(∂ B_1), f∈ W^1/2,2(∂ B_1), is a significantly more difficult problem, and in general remains open. 
The second order analog of (<ref>) arises by considering the Euler Lagrange formulation of variational integrals ∫_ΩF(Du)dx, defined on gradient spaces. It is worth noting that regularity for critical points of the above functional does not require any restrictions on the gradient due to the well-known De Giorgi-Nash-Moser theory. §.§ Structure of paper The organization of the paper is as follows: In Section 2, we state and prove preliminary results. In Section 3, we develop regularity theory for weak solutions of (<ref>) with small BMO-Hessian by first establishing a uniform C^2,α estimate for the solution. In Section 4, we prove our main result and its corollary. Finally, in Section 5, we state an application of our main result to the Hamiltonian stationary equation and use it to show the importance of the a priori W^2,∞ assumption on u in our main results. §.§ Acknowledgments The authors thank Camillo De Lellis for helpful comments. AB acknowledges the support of the AMS-Simons Travel Grant and funding provided by the Bill Guthridge Distinguished Professorship fund. AS acknowledges the support of the National Science Foundation through the grant FRG-1854147. § PRELIMINARIES We begin by collecting some important preliminary results for solutions of a constant coefficient double divergence equation. The first is a higher integrability analog of <cit.>*Theorem 2.1. Let p_0>2 and r> 0. Suppose that w∈ W^2,p_0(B_r) satisfies the uniformly elliptic constant coefficient equation ∫_B_r c_0^ij,klw_ijη_kldx =0 ∀η∈ C_0^∞(B_r(0)), where the coefficient tensor c_0 satisfies the Legendre ellipticity condition c_0^ij,klσ_ijσ_kl≥λ |σ|^2 ∀σ∈ S^n× n. Then there exist C_1=C_1(n,p_0,λ), C_2=C_2(n,p_0,λ) such that for any 0<ρ≤ r, we have ∫_B_ρ|D^2w|^p_0 ≤ C_1(ρ/r)^n∫_B_r|D^2w|^p_0 ∫_B_ρ|D^2w-(D^2w)_ρ|^p_0 ≤ C_2(ρ/r)^n+p_0 ∫_B_r|D^2w-(D^2w)_r|^p_0. We prove the desired estimates for w in place of D^2 w; the conclusion follows immediately since whenever w solves (<ref>), so do all of its higher order derivatives. By dilation, we may consider r=1. We restrict our consideration to the range ρ∈(0,1/4] noting that the statement is trivial for ρ∈ 1/4,1]. First, we note that w is smooth (cf. <cit.>). Recall <cit.>, which tells us that for a uniformly elliptic 4th order operator L_0, L_0w =0 in B_R ‖ Dw‖ _L^∞(B_R/4)≤ C_3 (λ,n,p_0,R)‖ w‖ _L^p_0(B_R). In particular, we have ‖ Dw‖ _L^∞(B_1/4)^p_0≤ C_4(λ ,n,p_0)‖ w‖ _L^∞(B_1)^p_0. Therefore ‖ w‖_L^p_0(B_ρ)^p_0 ≤ C_5(n)ρ ^n‖ w‖ _L^∞(B_1/4)^p_0 =C_5ρ^ninf_x∈ B_1/4sup_y∈ B_1/4| w(x)+w(y)-w(x)| ^p_0 ≤ C_5ρ^ninf_x∈ B_1/4[ | w(x)|+1/2‖ Dw‖ _L^∞(B_1/4)] ^p_0 ≤ C_6(n,p_0)ρ^n[ inf_x∈ B_1/4| w(x)| ^2+‖ Dw‖ _L^∞(B_1/4)^p_0] ≤ C_6ρ^n[ 1/|B_1/4|w_L^p_0(B_1/4) ^p_0+C_4w_L^p_0(B_1/4)^p_0] ≤ C_7(n,p_0,λ)ρ^nw_L^p_0(B_1)^p_0. Similarly, ∫_B_ρ| w-(w)_ρ| ^p_0 ≤∫_B_ρ| w-w(0)| ^p_0 ≤∫_0^ρ∫_𝕊^n-1τ^p_0| Dw| ^p_0τ^n-1dτ dϕ =C_8(n,p_0,λ)ρ^n+p_0 Dw _L^∞(B_1/4)^p_0. The estimate (<ref>) then yields ∫_B_ρ| w-(w)_ρ| ^p_0≤ C_8 C_4ρ^n+p_0∫_B_1| w| ^p_0. Now we may apply the above to D^2 w in place of w, which is possible since all second order derivatives of w also solve (<ref>), as previously mentioned, to give ∫_B_ρ| D^2 w-(D^2 w)_ρ| ^p_0≤ C_8 C_4ρ^n+p_0∫_B_1| D^2 w| ^p_0. Next, observe that (<ref>) is purely fourth order, so the equation still holds when a second order polynomial is added to the solution. In particular, we may choose w̅(x) w(x) - ⟨(D^2w)_1, x⊗ x⟩, so that D^2w̅=D^2w-( D^2w) _1 also satisfies the equation. 
Then D^3w̅=D^3w and so by the Poincaré inequality (<ref>), we have ∫_B_1| D^2w̅| ^p_0= ∫_B_1| D^2w-( D^2w) _1| ^p_0. We conclude from (<ref>), (<ref>) and (<ref>) that ∫_B_ρ| D^2w-(D^2w)_ρ| ^p_0≤ C_8 C_4 ρ^n+2∫_B_1| D^2w-( D^2w) _1| ^p_0. The next result is a simple consequence of Theorem <ref> (cf. <cit.>*Corollary 2.2 for the p=2 analog). Suppose that w, p_0, and r are as in Theorem <ref>, and let C_1, C_2 be the constants therein. Then there exists C_0=C_0(n,p_0) such that for any u∈ W^2,p_0(B_r), and any  0<ρ≤ r, there hold the following two inequalities ∫_B_ρ| D^2u| ^p_0 ≤ C_0 C_1(ρ/r)^n ∫_B_r| D^2u| ^p_0 +( C_0 +C_0^2 C_1) ∫_B_r| D^2(u-w)| ^p_0 , ∫_B_ρ| D^2u-(D^2u)_ρ| ^p_0 ≤ C_0^2 C_2(ρ/r)^n+p_0∫_B_r| D^2u-(D^2u)_r| ^p_0 +( 2C_0^2 +2C_0^3 C_2) ∫_B_r| D^2(u-w)| ^p_0. Let v=u-w. Then (<ref>) follows from direct computation and Theorem <ref>: ∫_Bρ|D^2u|^p_0 ≤ C_0(n,p_0)[∫_B_ρ| D^2w|^p_0+∫_B_ρ |D^2v|^p_0] ≤ C_0 C_1(ρ/r)^n∫_B_r| D^2w|^p_0+C_0∫_B_r |D^2v|^p_0 ≤ C_0^2 C_1(ρ/r)^n[ D^2v_L^p_0(B_r)^p_0+D^2 u_L^p_0(B_r)^p_0] +C_0∫_B_r|D^2v|^p_0 =C_0^2 C_1(ρ/r)^n‖ D^2u‖ _L^p_0(B_r) ^p_0+(C_0^2+C_0 C_1(ρ/r)^n)‖ D^2v‖ _L^p_0(B_r)^p_0.   Similarly ∫_Bρ| D^2u-(D^2u)_ρ| ^p_0 ≤ C_0∫_B_ρ| D^2w-(D^2w)_ρ| ^p_0 +C_0∫_B_ρ| D^2v-(D^2v)_ρ| ^p_0 ≤ C_0∫_B_ρ| D^2w-(D^2w)_ρ| ^p_0 +2C_0^2∫_B_ρ| D^2v| ^p_0 ≤ C_0 C_2(ρ/r)^n+p_0∫_B_r|D^2w-(D^2w)_r|^p_0 +2C_0^2∫_B_ρ| D^2v| ^p_0 ≤ C_0 C_2(ρ/r)^n+p_0{[ C_0∫_B_r| D^2u-(D^2u)_r| ^p_0; +C_0∫_B_r| D^2v-(D^2v)_r| ^p_0 ]} +2C_0^2∫_B_r| D^2v| ^p_0 ≤ C_0^2 C_2(ρ/r)^n+p_0∫_B_r| D^2u-(D^2u)_r | ^p_0 +( 2C_0^2 +2 C_0^3 C_2(ρ/r)^n+p_0) ∫_B_r| D^2v| ^p_0. The statement follows, noting that ρ/r≤1. We conclude this section with the following technical lemma, which is a modified version of <cit.> and <cit.>. This modified technical tool is crucial for the proof of Theorem <ref>. <cit.>. Let ϕ be a nonnegative and nondecreasing function on [0,R]. Then there exists θ=θ(A,γ,κ) ∈ (0,1) such that the following holds. Suppose that ϕ(τ)≤ A[ ( τ/r) ^κ +ε] ϕ(r)+Br^β for any 0<τ≤θ r≤θ R with A,B,β,κ nonnegative constants and β<κ. Then for any γ∈(β,κ), there exists ε_0=ε_0(A,κ,β,γ) such that if ε<ε_0 we have the following for all 0<τ≤θ r≤θ R ϕ(τ)≤ c[ ( τ/r) ^γ ϕ(r)+Bτ^β], where c is a positive constant depending on A,κ,β,γ. In particular, for any 0<τ≤θ R we have ϕ(τ)≤ c[ ϕ(R)/R^γτ^γ+Bτ^β] . Moreover, if β = 0, one may explicitly choose θ = (2A)^-2/κ. We omit the proof of Lemma <ref> here, and simply refer the reader to <cit.> or <cit.>. Notice that the choice of scale τ such that 2Aτ^κ = τ^γ in the proof therein guarantees the final conclusion of the lemma. § REGULARITY THEORY FOR SMALL BMO-HESSIAN In this section, we develop a regularity theory for weak solutions u of the following fourth order equation in the double divergence form: ∫_Ωa^ij,kl(D^2u)u_ijη_kldx=0,∀η∈ C_0^∞(Ω), where Ω⊂^n is an open set. Denoting h_m=he_m, we start by introducing the following definition. We define equation (<ref>) to be regular on U when the following conditions are satisfied on U: (i) The coefficients a^ij,kl depend smoothly on D^2u. (ii) The linearization of (<ref>) is uniformly elliptic, namely, the leading coefficient of the linearized equation given by b^ij,kl(D^2u(x))=∫_0^1∂/∂u_ij[a^pq,kl(D^2u(x)+t [D^2u(x+h_m)-D^2u(x)])u_pq(x)]dt satisfies the standard Legendre ellipticity condition: b^ij,kl(ξ)σ_ijσ_kl≥Λ|σ|^2 ∀σ∈ S^n× n, ∀ξ∈ U. 
Observe that (<ref>) is indeed regular in the sense of <cit.> since for a uniformly continuous Hessian, the coefficient given by (<ref>) takes the form of the b^ij,kl coefficient shown in <cit.> as h→ 0: b^ij,kl(D^2u(x))=a^ij,kl(D^2u(x))+∂ a^pq,kl/∂ u_ij(D^2u(x))u_pq(x) . Note that for any Ω' ⊂Ω, we have b^ij,kl_L^∞(Ω')≤ C(D^2 u_L^∞(Ω')). This is the only obstruction to removing dependency on D^2 u_L^∞ in the constants in our techniques. Indeed, if it were possible to successfully remove such a dependency, we could relax our a priori regularity assumption on u to be merely W^2,2. However, the example of the Hamiltonian stationary equation and the uniform convexity hypothesis together suggest that such a relaxation would be difficult, if at all possible; see Section <ref>. Before we proceed, let us make the following important observation, which will be crucial in the arguments that follow. For m=1,…,n, and a given function g, let g^h_m denote the difference quotient g(x+h_m) - g(x)/h in the direction e_m. Let Ω' ⋐Ω. For h≤ d(Ω',∂Ω) we may take a single difference quotient in (<ref>), which gives ∫_Ω'[a^ij,kl(D^2u)u_ij]^h_mη_kldx=0. We write the first difference quotient as a^ij,kl(D^2u)u_ij]^h_m(x) =a^ij,kl(D^2 u(x+h_m))u_ij(x+h_m)-u_ij(x)/h +1/h[ a^ij,kl(D^2u(x+h_m))-a^ij,kl(D^2 u(x))] u_ij(x) =a^ij,kl(D^2u(x+h_m))u^h_m_ij(x) +[ ∫_0^1∂ a^ij,kl/∂ u_pq (tD^2u(x+h_m)+(1-t)D^2u(x))dt]u_ij(x) u^h_m_pq(x). Then, letting f=u^h_m for h sufficiently small, equation (<ref>) satisfied by u, and definition (<ref>) of the linearized coefficient b^ij,kl yields ∫_Ω' b^ij,kl f_ijη_kl dx = 0 ∀η∈ C_0^∞(Ω'), where we suppress the dependency of b^ij,kl on D^2 u in order to simplify notation. In the remainder of this section, we assume that B_1⊂Ω, which we may do without loss of generality, by rescaling and translation. Let us first present the following proposition, which will prove to be a key tool throughout. It is a fourth order analog of a standard application of the Giaquinta-Modica variant of Gehring's Lemma <cit.>. There exists p̅∈ [1,2) such that the following holds. Let w be as in Theorem <ref>, let u∈ W^2,∞(B_1) be a weak solution of the regular equation (<ref>) on B_1 with D^2u(x)∈ U for almost-every x∈ B_1, and let v=f-w for f=u^h_m. Then D^2v∈ L^p_0_loc(B_1) and there exists C=C(n,|c_0|,b^ij,kl_L^∞(U)) such that for each x_0∈ B_1 and each 0< s < 1/2dist(x_0,∂ B_1), we have (1/|B_s|∫_B_s(x_0)|D^2v|^2 dx)^1/2≤ C(1/|B_2s|∫_B_2s(x_0)|D^2v|^p̅ dx)^1/p̅. In particular, there exists p_0∈ (2,∞) such that for any r≤ 1 and ρ∈ (0, r), there exists C̅=C̅(n,Λ, p_0, |c_0|,b^ij,kl_L^∞(U), r-ρ) such that (1/|B_ρ|∫_B_ρ|D^2v|^p_0 dx)^1/p_0≤C̅(1/|B_r|∫_B_r|D^2v|^2 dx)^1/2. It will later be necessary to know the behavior of the constant C̅ with r-ρ in the above proposition. Observe that lim_r↑ρC̅ = + ∞. By adding a suitable constant, we may without loss of generality assume that |v|≥ 1 a.e. inside B_1. First of all, we prove (<ref>). Fix x_0, s as in the statement of the proposition. In light of (<ref>), we have ∫_B_2s(x_0)b^ij,klv_ijη_kldx=∫_B_2s(x_0) b^ij,kl w_ijη_kldx. Let τ∈ C_c^∞( B_2s(x_0)) be a positive cut-off function that takes the value 1 on B_s(x_0) and vanishes outside B_2s(x). Note that |D^kτ| ≤ C(n)s^-k for each k∈ℕ. We choose η=τ^4v. Expanding derivatives we get ∫_B_2s(x_0) b^ij,klv_ijτ^4v_kldx = ∫_B_2s(x_0) b^ij,klw_ij (τ^4v)_kldx - ∫_B_2s(x_0) b^ij,klv_ij[ (τ^4) _klv + (τ^4)_lv_k+ (τ^4) _k v_l] dx. By our assumption in (<ref>) b^ij,kl is uniformly elliptic on U. 
Therefore, we get ∫_B_2s(x_0)τ^4Λ|D^2v|^2 dx ≤ C(n)s^-2∫_B_2s(x_0)| b^ij,kl|| v_ij|τ^2( |v|+|Dv|) dx +Cs^-2∫_B_2s(x_0)| b^ij,kl|| w_ij|τ^2( |v|+|Dv|+|D^2v|) dx ≤ C(n,b ^ij,kl_L^∞(U))s^-2∫_B_2s(x_0)( ετ^4|D^2v|^2+1/ε(|v|+|Dv|)^2) dx +C(n,b^ij,kl_L^∞(U),D^2w_L^∞(B_1))s^-2∫_B_2s(x_0)( ετ^2|D^2v|^2+1/ετ^2) dx +C(n,b^ij,kl_L^∞(U),D^2w_L^∞(B_1))s^-2∫_B_2s(x_0)(|v|^2+|Dv|^2)dx. Note that since w satisfies the constant coefficient equation (<ref>), D^2w_L^∞(B_1) is bounded by C(n,|c_0|) (see e.g. <cit.>). Now we may choose ε=ε(n,s,|c_0|,b^ij,kl_L^∞(U),Λ)>0 sufficiently small such that ∫_B_2s(x_0)τ^4Λ|D^2v|^2 dx ≤1/2∫_B_2s(x_0)τ^2 Λ|D^2v|^2dx +Cs^-2∫_B_2s(x_0)(|v|+|Dv|)^2 dx +Cs^-2∫_B_2s(x_0)1/ετ^2 dx. Rearranging, and recalling that 0≤τ≤ 1≤ |v| a.e. in B_1 and τ^4 ≥1_B_s(x_0), we get ∫_B_s(x_0)|D^2v|^2dx ≤ Cs^-2∫_B_2s(x_0)(|v|^2+|Dv|^2)dx ≤ C s^-2[‖ Dv‖_L^2_*(B_2s(x_0))^2+D^2v_L^2_*(B_2s(x_0))^2] ≤ Cs^-2(∫_B_2s(x_0)|D^2v|^2_*dx)^2/2_*, and thus 1/|B_s|∫_B_s(x_0)|D^2v|^2dx ≤ C(1/|B_s|∫_B_2s(x_0)|D^2v|^2_*dx)^2/2_*, where C=C(n,|c_0|,b^ij,kl_L^∞(U),Λ) and 2_*=2n/n+2 is the exponent such that (2_*)^* = 2, where p^* denotes the Sobolev dual exponent of p. The last two inequalities follow from the Gagliardo-Nirenberg inequality for bounded domains <cit.>. This proves the desired estimate in (<ref>), with p̅=2_*. It remains to prove (<ref>). This, however, follows from the Giaquinta-Modica variant of Gehring's Lemma <cit.>*Proposition 5.1, applied to |D^2v|^2_* with q=2. Note that one may replace the cubes Q_2R and Q_R therein with the balls B_r and B_ρ respectively (at the price of making the constant additionally dependent on r-ρ) via an analogous argument with a Calderón-Zygmund decomposition of B_r into cubes degenerating towards ∂ B_r. We omit the details here since the argument is standard. Observe that for the above proposition to hold true, we did not require v, Dv to identically vanish on the boundary of B_r (since this was achieved via the choice of cut-off). In particular, notice that the conclusion of Proposition <ref> holds also with f=u^h_m in place of v. Suppose that u∈ W^2,∞(B_1) is a weak solution of the regular equation (<ref>) on B_1 with D^2u(x)∈ U for almost-every x∈ B_1. Then for any compactly contained open subset Ω' of B_1, u∈ W^3,p_0(Ω'), where p_0 is as in Proposition <ref>, and satisfies the following estimate u_W^3,p_0(Ω')≤ C(Λ,b^ij,kl_L^∞(U), d(Ω',∂ B_1)) u_W^2,2(B_1), where d(Ω,∂ B_1)) denotes the distance inf_x∈Ωdist(x,∂ B_1). Let τ∈ C_c^∞( B_1) be a cutoff function in B_1 that takes the value 1 on Ω'. Fix m∈{1,…,n} arbitrarily and let η = τ^4 f in (<ref>), for f=u^h_m. This yields ∫_B_1b^ij,klf_ij[τ^4f]_kldx=0, where we are suppressing the dependency of b^ij,kl on D^2 u, to simplify notation. Expanding derivatives, we get ∫_B_1 b^ij,klf_ijτ^4f_kldx=-∫_B_1 b^ij,klf_ij( (τ^4) _klf+ (τ^4) _lf_k+ (τ^4) _k f_l) dx. Arguing as in the proof of Proposition <ref>, we get ∫_B_1τ^4Λ|D^2f|^2 dx≤ C(τ,Dτ,D^2τ) ∫_B_1| b^ij,kl|| f_ij|τ^2( 1+|f|+|Df|) dx ≤ C(n,d(Ω',∂ B_1))b^ij,kl_L^∞(U)∫_B_1( ετ^4|D^2f|^2+C1/ε(1+|f|+|Df|)^2) dx. Choosing ε>0 appropriately, rearranging, using the definition of τ, and the uniform ellipticity of the coefficient b^ij,kl on U, we get ∫_Ω'|D^2f|^2dx≤ C'∫_B_1(1+|f|+|Df|)^2dx, where C'=C'(n,Λ,b^ij,kl_L^∞(U), d(Ω',∂ B_1)). Now this estimate is uniform over all h ≤ d(Ω',∂ B_1) and all directions e_m, m=1,…,n, and so we conclude that u∈ W^3,2(Ω'), with the estimate u_W^3,2(Ω')≤ C(n,Λ,b^ij,kl_L^∞(U), d(Ω',∂ B_1))‖ u ‖_W^2,2(B_1). 
In view of Remark <ref> and Remark <ref>, the proof of the desired estimate follows verbatim from the proof of Proposition <ref> by replacing v with f=u^h_m. We are now in a position to prove the main result of this section. Suppose that u∈ W^2,∞(B_1) is a weak solution of the regular equation (<ref>) on B_1 with D^2u(x) ∈ U for almost-every x∈ B_1. Let α∈ (0,1). There exists ω(Λ,n, α, D^2u_L^∞(B_1))>0 such that if D^2u∈(B_1) with modulus ω, then D^2u∈ C^0,α(B_3/4) and satisfies the following estimate D^2u_C^0,α(B_3/4)≤ C(α,n,Λ,D^2 u_L^∞(B_1)). Let m∈{1,…,n} be arbitrary and let f=u^h_m be defined in B_3/4, as before, for h≤1/4. Recall equation (<ref>) satisfied by f. We pick an arbitrary point x_0 inside B_3/4 and consider the ball B_r(x_0), for a fixed scale r≤1/4. To simplify notation, we will simply denote this ball by B_r for the rest of this proof. Let w solve the following boundary value problem: ∫_B_r(b^ij,kl)_rw_ijη_kldx =0,∀η∈ C_0^∞(B_r) w =f on ∂ B_r D w =D f on ∂ B_r. This is a constant coefficient PDE with the given boundary conditions and therefore has a unique solution w∈ W^2,2(B_r) that is smooth on the interior of B_r (<cit.>). Letting v=f-w, observe that v∈ W^2,2_0(B_r) and so may be used as a test function in (<ref>) and (<ref>). Thus, we see that ∫_B_r (b^ij,kl)_r v_ij v_kl = ∫_B_r [(b^ij,kl)_r - b^ij,kl(x)]f_ij v_kl. By Cauchy-Schwartz, followed by Hölder's inequality, we have ∫_B_r (b^ij,kl)_r v_ij v_kl≤(b^ij,kl)_r - b^ij,kl_L^2(B_r)D^2 v_L^2p'(B_r)D^2 f_L^2p(B_r). The ellipticity of b^ij,kl thus gives Λ∫_B_r|D^2 v|^2 ≤(b^ij,kl)_r - b^ij,kl_L^2(B_r)D^2 v_L^2p'(B_r)D^2 f_L^2p(B_r), where p' is the Hölder dual of p. By Proposition <ref>, for some p_0 > 2 and the constant C̅ therein, we have [1/|B_ρ|∫_B_ρ|D^2 v|^p_0]^1/p_0≤C̅[1/|B_r|∫_B_r|D^2 v|^2]^1/2, for any ρ∈ (0,r). Taking p=p_0/2 > 1 in (<ref>) gives |B_r|^1-2/p_0 [∫_B_ρ|D^2 v|^p_0]^2/p_0 ≤ |B_ρ|^-2/p_0|B_r| [∫_B_ρ|D^2 v|^p_0]^2/p_0 ≤C̅Λ^-1(b^ij,kl)_r - b^ij,kl_L^2(B_r)D^2 v_L^q_0(B_r)D^2 f_L^p_0(B_r), where q_0=2(p_0/2)' = 2p_0/p_0-2. Now we have (b^ij,kl)_r - b^ij,kl_L^2(B_r) = 1/|B_r|[∫_B_r|∫_B_r b^ij,kl(D^2 u(y)) - b^ij,kl(D^2 u(x)) dy|^2 dx ]^1/2 ≤b^ij,kl_Lip(U)/|B_r|[∫_B_r|∫_B_r|D^2 u(y) - (D^2 u)_r| + |(D^2 u)_r - D^2 u(x)| dy|^2 dx ]^1/2 = b^ij,kl_Lip(U)/|B_r|[∫_B_r|∫_B_r|D^2 u(y) - (D^2 u)_r|dy + |B_r||(D^2 u)_r - D^2 u(x)||^2 dx ]^1/2 ≤ Cb^ij,kl_Lip(U)[ ∫_B_r|D^2 u(y) - (D^2 u)_r|^2dy + ∫_B_r|(D^2 u)_r - D^2 u(x)|^2 dx ]^1/2 ≤ Cb^ij,kl_Lip(U)(D^2 u)_r - D^2 u_L^2(B_r), for some C=C(n)>0. Combining the above estimates, we arrive at D^2 v_L^p_0(B_ρ)^2p_0 ≤ CC̅Λ^-1r^-np_0+2n(D^2 u)_r - D^2 u_L^2(B_r)^p_0D^2 v_L^q_0(B_r)^p_0D^2 f_L^p_0(B_r)^p_0. Rearranging, we arrive at D^2 v_L^p_0(B_ρ)^p_0 ≤ CC̅Λ^-1 r^-np_0+2n(D^2 u)_r - D^2 u_L^2(B_r)^p_0D^2v^p_0_L^q_0(B_r)/D^2v^p_0_L^p_0(B_ρ)D^2 f_L^p_0(B_r)^p_0. By Corollary <ref> (absorbing all constants into a single constant), for any τ∈(0,ρ) we thus have D^2 f^p_0_L^p_0(B_τ)≤ C^†(τ/ρ)^nD^2 f^p_0_L^p_0(B_ρ) +C^† r^-np_0+2n(D^2 u)_r - D^2 u_L^2(B_r)^p_0D^2 v^p_0_L^q_0(B_r)/D^2 v^p_0_L^p_0(B_ρ)D^2 f_L^p_0(B_r)^p_0 ≤ C^†(τ/ρ)^nD^2 f^p_0_L^p_0(B_r) ≤C̃^†(τ/ρ)^nD^2 f^p_0_L^p_0(B_r) +C̃^†ω^p_0D^2 v^p_0_L^q_0(B_r)/D^2 v^p_0_L^p_0(B_ρ)D^2 f_L^p_0(B_r)^p_0, where C^† and C̃^† are positive constants depending on n,Λ,p_0,D^2 u_L^∞(B_1) and r-ρ. Now, we wish to apply Lemma <ref>, with the choices of parameters ϕ(τ) =∫_B_τ| Df| ^p_0dx A = C̃^† κ =n β =0, B=0 γ =n-p_0+p_0α∈ (0,κ), R =1/4, for α∈ (0,1), provided that ω is chosen to be sufficiently small. 
Indeed, this can be ensured, provided that we show the following. Let θ be as in Lemma <ref>, for the above choice of parameters. There exists C^*=C^*(n,p_0,θ) such that for any ρ∈(θ r, r), we have D^2 v^p_0_L^q_0(B_r)/D^2 v^p_0_L^p_0(B_ρ)≤ C^*. Proof of Claim <ref>. First of all, let us write D^2 v^p_0_L^q_0(B_ρ) = D^2 v ·1_B_ρ^p_0_L^q_0(B_r). Now, suppose for a contradiction, that the claim is false. Then we can extract a sequence of scales ρ_k → r such that D^2 v^p_0_L^q_0(B_r)/D^2 v ·1_B_ρ_k^p_0_L^q_0(B_r)→∞ as k→∞. However, clearly 1_B_ρ_k→1_B_r strongly in L^q_0, which in turn implies that D^2 v ·1_B_ρ_k^p_0_L^q_0(B_r)→D^2 v^p_0_L^q_0(B_r), thus contradicting (<ref>). Now that we have proved Claim <ref>, we would like to fix ρ∈ (θ r, r), in order to apply Lemma <ref>. However, in order to do this we must deal with one delicate point; the constant A in our above choice of parameters depends on r-ρ, which in turn results in the dependency of θ on r-ρ. Thus, we need to check that it is indeed possible to make a choice of ρ∈ (θ r, r) as in the statement of Claim <ref>. In light of Remark <ref> and the fact that we may choose θ=(2A)^-2/α as stated in Lemma <ref>, we may deduce that lim_ρ↑ rθ = 0. Thus, it is indeed possible to choose ρ sufficiently close to r such that ρ > θ r. Fixing such a ρ and applying Claim <ref>, the coefficient of the second term on the right-hand side of (<ref>) may now be bounded above by C̃=(n,Λ,p_0,D^2 u_L^∞(B_1))ω^p_0. Thus, choosing ω < (C̃ε_0)^1/p_0 (dependent therefore on n,Λ,p_0,D^2 u_L^∞(B_1)), where ε_0 is as in Lemma <ref>, we conclude that ∫_B_τ|Df|^p_0 dx≤ Cτ^n-p_0+p_0α∫_B_1/4|Df|^p_0 dx, where C=C(Λ,n,α,p_0)>0. Note that the range of τ may easily be increased from (0,θ/4) to (0,1/4), up to increasing the constant c in Lemma <ref>, since B=0 and our choice of ϕ is monotone non-decreasing. Since we chose an arbitrary point x_0∈ B_3/4 and τ≤1/4, applying Morrey's Lemma <cit.> to f we get |u^h_m|_C^0,α(B_τ(x_0))≤ C(α, n,Λ,p_0, u_W^3,p_0(B_1/2)), which combined with estimate (<ref>) gives the desired estimate (<ref>). Since p_0 is a fixed parameter, determined by Proposition <ref>, this proves the proposition. We conclude this section with the following immediate consequence of Theorem <ref> combined with the interior estimates established in <cit.>. Suppose that u∈ W^2,∞(B_1) is a weak solution of the regular equation (<ref>) on B_1 with D^2u(x)∈ U for almost-every x∈ B_1. There exists ω(Λ,n,D^2u_L^∞(B_1))>0 such that if D^2u∈(B_1) with modulus ω, then u is smooth in B_1/2. From Proposition <ref> it follows that u∈ C^2,α(B_3/4). Then smoothness follows from <cit.>. § PROOFS OF THE MAIN RESULTS In this section, we use the results in Section 3 to prove Theorem <ref> and Corollary <ref>. §.§ Proof of Theorem <ref> Using the results of the previous section, in particular, Corollary <ref>, the proof of Theorem <ref> now follows verbatim from <cit.>. At the risk of being repetitive, we present the proof below for the sake of completion. We first wish to demonstrate that the result in Theorem <ref> (and therefore Corollary <ref>) is not restricted to solutions of equations of the form (<ref>), and in fact also applies to solutions of equations of the form (<ref>) for coefficients F^ij that are smoothly dependent on the Hessian, as long as uniform ellipticity of its linearization (condition (<ref>)) is maintained. In other words, we require the existence of a constant Λ > 0 such that ∂ F^ij/∂ u_kl(ξ)σ_ijσ_kl≥Λ|σ| ^2,∀σ∈ S^n× n, ∀ ξ∈ U. 
Arguing analogously to the preceding section (cf. (<ref>)), one can check that the above observation is true by using (<ref>) with a difference quotient test function (in the direction e_m) in order to derive the equation ∫_Ω'β^ij,klu^h_m_ijη_kl dx=0 ∀η∈ C_0^∞(Ω') where Ω'⋐Ω, h≤ d(Ω',∂Ω) and β^ij,kl(D^2u(x))=∫_0^1∂ F^ij/∂ u_kl(D^2u(x)+t[D^2u(x+h_m)-D^2u(x)])dt. This shows that the difference quotient of the solution of (<ref>) satisfies an equation of the form (<ref>), which was previously derived from (<ref>). Since the equation that we work with is the one satisfied by the difference quotient, the conclusions of Theorem <ref>, and thus also of Corollary <ref>, hold for solutions of (<ref>). We are now in a position to conclude the validity of Theorem <ref> from this. Observe that any critical point of (<ref>) solves (<ref>) with F^ij = ∂ F(D^2u)/∂ u_ij. In light of the above discussion, we may apply Corollary <ref>. If F is uniformly convex, clearly this choice of F^ij satisfies condition (<ref>). If, on the other hand, F is uniformly concave, we may simply replace F with -F. We also note that when u∈ W^2,∞(B_1), (<ref>) becomes a regular equation; this follows from <cit.>, and the result then follows from Theorem <ref>. We are now ready to prove the dimension estimate on the singular set in Corollary <ref>. §.§ Proof of Corollary <ref> Let B_r(x) ⊂ B_1. By the Poincaré inequality, we have 1/r^n∫_B_r(x)|D^2 u - (D^2 u)_B_r(x)|^p_0≤1/r^n-p_0∫_B_r(x)|D^3 u|^p_0. Thus, for every x∈Σ, we have lim inf_r→ 01/r^n-p_0∫_B_r(x)|D^3 u|^p_0 > 0. Now, by Proposition <ref>, we know that u∈ W^3,p_0_loc(B_1). Thus, we may apply <cit.>*Proposition 9.21 to conclude the desired dimension estimate. § HAMILTONIAN STATIONARY EQUATIONS Hamiltonian stationary Lagrangian submanifolds of the complex Euclidean space are critical points of the volume functional under Hamiltonian variations, and locally they are governed by a fourth order nonlinear elliptic equation, given by Δ_g Θ=0, where Δ_g is the Laplace-Beltrami operator on the Lagrangian graph L_u=(x,Du). The function Θ is called the Lagrangian phase or angle of the surface L_u and is defined by Θ=∑_i=1^n arctanλ_i where λ_i are the eigenvalues of the Hessian D^2u. Let us describe the analytic setup of the geometric variational problem. For a fixed bounded domain Ω⊂ℝ^n, let u:Ω→ℝ be a smooth function. The gradient graph L_u={(x,Du(x)): x∈Ω} is a Lagrangian n-dimensional submanifold in ℂ^n, with respect to the complex structure J defined by the complex coordinates z_j=x_j+i y_j for j=1,...,n. The volume functional on L_u is given by ∫_Ω√(det(I_n+( D^2u) ^2))dx. A function u∈ W^2,∞(Ω) is a critical point of this functional under compactly supported variations if and only if u satisfies the Euler-Lagrange equation ∫_Ω√(det g)g^ijδ^klu_ikη_jldx=0 ∀η∈ C_0^∞(Ω) where g=I_n+( D^2u) ^2 is the induced metric from the Euclidean metric in ℂ^n. This is also known as the variational Hamiltonian stationary equation. If the Hessian of the potential u is bounded by a small dimensional constant, then the variational equation (<ref>) is equivalent to the geometric Hamiltonian stationary equation (<ref>) <cit.> (also see <cit.>). In ℂ^n with the standard Kähler structure, the above expression for Θ is available for a local graphical representation L_u as above.
This decomposition feature of the fourth order elliptic operator into a composition of two second order elliptic operators as in (<ref>) is essential in the work of Chen-Warren <cit.>, in which it is shown that a C^1-regular Hamiltonian stationary Lagrangian submanifold in ℂ^n is real analytic. However, it is rather difficult to apply the same strategy on a Calabi-Yau manifold other than ℂ^n. Even while Θ is still locally well-defined on Ω, it can no longer be written in a clean form as a sum of arctangent functions, even when representing L_u as a gradient graph in a Darboux coordinate chart. In <cit.>, the authors found that directly dealing with a critical point of the volume of L_u in an open ball B⊂ℝ^2n equipped with a Riemannian metric, among nearby competing gradient graphs L_u_t={(x, Du(x) +t Dη(x)):x∈Ω} for compactly supported smooth functions η is helpful. This leads the authors to study a class of fourth order nonlinear equations (cf. <cit.>*(1.1)) sharing a similar structure with (<ref>). It is worth noting that in general, equations of the form (<ref>) do not necessarily factor into a composition of second order operators, thus motivating the study of such fourth order equations. As an application of the main result in Theorem <ref>, we state the following corollary: Let η∈ (0,1). Suppose that u∈ W^2,∞(B_1) is a critical point of (<ref>) in B_1 and D^2u_L^∞(B_1)≤ 1-η. There exists ω(n,η)>0 such that if D^2u∈(B_1) with modulus ω, then u is smooth in B_1/2 with interior Hölder estimates of all orders. When D^2u_L^∞(B_1)≤ 1-η, the area functional (<ref>) is uniformly convex. The result follows immediately from Theorem <ref>. Suppose that the Hessian D^2u is diagonalized at x_0, with eigenvalues λ_1,…,λ_n. We may thus write the area integrand at x_0 as V(D^2 u(x_0)) ≡ V(λ_1,…,λ_n)=[(1+λ_1^2)...(1+λ_n^2)]^1/2. We compute ∂_i V =λ_i/1+λ_i^2V ∂_ii V =1/(1+λ_i^2)^2V ∂_ijV =λ_jλ_i/(1+λ_j^2)(1+λ_i^2)V for i≠ j. Denoting e_i=λ_i/1+λ_i^2, we re-write ∂_ii V=(1/1+λ_i^2-2e_i^2)V +e_i^2V ∂_ijV=Ve_ie_j for i≠ j. Hence, V_ij/V=e_ie_j+δ_ij(1/1+λ_i^2-2e_i^2) ≥1-λ_i^2/(1+λ_i^2)^2≥ C(η) where the last lower bound follows from using the fact that D^2u_L^∞≤ 1-η. Note that the result in <cit.> reaches a similar conclusion by considering solutions u∈ C^1,1(B_1) of the geometric equation (<ref>) but under the assumption that there exists a c(n)>0 for which u_C^1,1(B_1)≤ c(n). §.§ A priori W^2,∞ assumption We use the Hamiltonian stationary equation to demonstrate the importance of the a priori regularity assumption u∈ W^2,∞(B_1), and show that it cannot easily be weakened, without making additional assumptions on u. Let u be a solution of (<ref>). It is currently known that (<ref>) is equivalent to (<ref>), if the C^1,1 norm of u is sufficiently small, <cit.>. This in itself demonstrates the importance of the W^2,∞ assumption in our results, since equation (<ref>) has a more favorable structure than (<ref>), in the sense that the fourth order operator in (<ref>) factors into two second order operators. In addition, the W^2,∞-regularity is necessary to make sense of the additional requirement D^2 u_L^∞≤ 1-η, which is the sufficient condition demonstrated here for the uniform convexity of the functional (<ref>). On the other hand, if one directly assumes that u is a weak solution of the Hamiltonian stationary equation (<ref>) in B_1, then using integration by parts, for any test function η∈ C^∞_0(B_1), we may write: ∫_B_1ΘΔ_gη dμ_g=0. 
Recall that the Laplace-Beltrami operator of the metric g on L_u is given by: Δ_g =1/√( g)∂_i(√( g)g^ij∂_j ) =g^ij∂_ij+1/√(g)∂_i(√(g)g^ij)∂_j =g^ij∂_ij-g^jpΘ_q u_pq∂_j. We re-write the distributional equation as ∫_B_1Θ(g^ij∂_ijη-g^jpΘ_q u_pq∂_jη)dμ_g=0 . For the above equation to be well-defined for u∈ W^2,p(B_1), with 1≤ p < ∞, we require Θ∈ W^1,p'(B_1), where p' is the Hölder dual of p, satisfying 1/p+1/p'=1. However, Θ∈ W^1,p'(B_1) requires u to necessarily have more a priori regularity than merely W^2,p. It may not be necessary to require that u∈ W^3,p'(B_1) a priori, but this nevertheless imposes a nonlinear anisotropic third order integrability constraint on u. It remains possible that when considering a critical point of (<ref>), one may relax the assumption of W^2,∞ to W^2,2, for example. This, however, would require a weaker sufficient condition than the hypothesis D^2 u_L^∞≤ 1-η given in Corollary <ref>, in order to deduce uniform convexity of the functional (<ref>). However, by comparison to the classical area functional, for which there exist critical points that are not smooth in sufficiently high dimensions, a uniform bound on the Hessian appears to be a reasonable condition to impose to guarantee uniform convexity, without which one does not typically expect regularity for critical points. amsalpha
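As a complement to the convexity computation in the proof of Corollary <ref> above, the following is a small symbolic and numerical sketch (an illustration added here, not part of the paper); it verifies, in the two-dimensional diagonal case, the derivative identities for V(λ_1,λ_2)=√((1+λ_1^2)(1+λ_2^2)) and the matrix lower bound V_ij/V ≥ diag((1-λ_i^2)/(1+λ_i^2)^2) used to deduce uniform convexity when D^2 u_L^∞(B_1)≤ 1-η. The value of η and the sampling grid are arbitrary choices.

```python
import sympy as sp
import numpy as np

# Symbolic check of the derivative identities for the area integrand in the
# diagonal 2D case: V(l1, l2) = sqrt((1 + l1^2)(1 + l2^2)).
l1, l2 = sp.symbols("lambda1 lambda2", real=True)
V = sp.sqrt((1 + l1**2) * (1 + l2**2))

assert sp.diff(V, l1).equals(l1 / (1 + l1**2) * V)                 # d_i V
assert sp.diff(V, l1, 2).equals(V / (1 + l1**2) ** 2)              # d_ii V
assert sp.diff(V, l1, l2).equals(l1 * l2 * V
                                 / ((1 + l1**2) * (1 + l2**2)))    # d_ij V, i != j

# Numerical check that (V_ij / V) - diag((1 - l_i^2)/(1 + l_i^2)^2) is
# positive semidefinite on a grid with |l_i| <= 1 - eta (eta chosen arbitrarily).
eta = 0.2
hessV = sp.lambdify((l1, l2), sp.hessian(V, (l1, l2)) / V, "numpy")
worst = np.inf
for a in np.linspace(-(1 - eta), 1 - eta, 21):
    for b in np.linspace(-(1 - eta), 1 - eta, 21):
        m = np.array(hessV(a, b), dtype=float)
        lower = np.diag([(1 - a**2) / (1 + a**2) ** 2,
                         (1 - b**2) / (1 + b**2) ** 2])
        worst = min(worst, np.linalg.eigvalsh(m - lower).min())
print(f"min eigenvalue of V_ij/V minus the lower bound: {worst:.2e} (nonnegative up to rounding)")
```

The residual matrix in the last check is exactly the rank-one matrix e e^⊤ with e_i=λ_i/(1+λ_i^2), which is why its smallest eigenvalue is zero up to floating-point error.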
http://arxiv.org/abs/2307.01095v1
20230703151642
Coded Orthogonal Modulation for the Multi-Antenna Multiple-Access Channel
[ "Alexander Fengler", "Alejandro Lancho", "Yury Polyanskiy" ]
cs.IT
[ "cs.IT", "math.IT" ]
http://arxiv.org/abs/2307.02089v1
20230705075733
Imaging of high-frequency electromagnetic field by multipulse sensing using nitrogen vacancy centers in diamond
[ "Shintaro Nomura", "Hideyuki Watanabe", "Satoshi Kashiwaya" ]
quant-ph
[ "quant-ph" ]
APS/123-QED Division of Physics, Univ. of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8571, Japan nomura.shintaro.ge@u.tsukuba.ac.jp Division of Physics, Univ. of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8571, Japan National Institute of Advanced Industrial Science and Technology (AIST) Central2, Umezono, Tsukuba, Ibaraki, 305-8568, Japan Department of Applied Physics, Nagoya Univ. Chikusa-Ku, Nagoya, Aichi, 464-8571, Japan Near-field enhancement of the microwave field is applied for imaging high frequency radio field using a diamond chip with an n-doped isotopically purified diamond layer grown by microwave plasma assisted chemical vapor deposition. A short π pulse length enables us to utilize a multipulse dynamic decoupling method for detection of radio frequency field at 19.23 MHz. An extraordinary frequency resolution of the external magnetic field detection is achieved by using amplitude-shaped control pulses. Our method opens up the possibility for high-frequency-resolution RF imaging at μm spatial resolution using nitrogen vacancy centers in diamond. Imaging of high-frequency electromagnetic field by multipulse sensing using nitrogen vacancy centers in diamond Satoshi Kashiwaya August 1, 2023 =============================================================================================================== § INTRODUCTION Electromagnetic field imaging has been widely utilized in material characterizations, characterization of radio frequency (RF) devices, and clinical applications <cit.>. Antennas and coils are commonly used to detect RF band electromagnetic waves. A spatial resolution has been limited by sensor size, except for imaging methods using strong gradient magnetic fields and Fourier transform in magnetic resonance imaging. A diamond nitrogen vacancy (NV) center, which consists of a nitrogen impurity and an adjacent vacancy with a N-V distance of 0.169 nm in diamond, is used as a very small sensor. Since RF sensors using NV centers enable potentially high spatial resolution imaging, they have been actively studied recently. <cit.> By using diamond NV centers, the external electromagnetic field distribution is transferred to NV electron spin states using quantum sensing techniques. Then NV electron spin states are read from photoluminescence from NV centers. A high-throughput method using optical microscopy is widely used to obtain RF field distribution, whose spatial resolution is determined by the optical diffraction limit. Methods to achieve high sensitivity have been studied so far by decoupling from environmental noise. Two methods have been applied to high-sensitivity imaging of radio-frequency electromagnetic fields. The first method is to detect RF field through spin locking <cit.> In this method, the level-splitting by the dressed state under intense resonant microwave driving is utilized to detect RF fields in high sensitivity. The splitting energy is set by the Rabi frequency of the microwave driving field (Ω). The range of the RF frequency to be detected is set by the Rabi frequency of the microwave field. High Rabi frequency up to about 1 GHz is available in diamond NV centers  <cit.>. Thus the spin-locking method may potentially be applied to detect electromagnetic fields in the wide range of 1 MHz – 1 GHz. A drawback of this method is that the spectral resolution of the RF field detection is ultimately limited by the T_1 time of NV centers, which is several ms at room temperature. 
The second method is to utilize dynamic decoupling to reduce disturbance from noise in the environment. By applying many π pulses at an interval of τ, the signal at the frequency f_0 = 1/(2τ) is selectively detected, while noise away from f_0 is filtered out. Frequency selectivity is controlled by the number of π pulses. CPMG and XY8 methods are commonly used for dynamic decoupling. <cit.> The pulse sequence of the XY8 method is designed to compensate for pulse errors, which is a desirable feature for imaging. The range of the RF frequency to be detected is set by the pulse interval τ, and hence, is limited by the π pulse length. For the detection of higher frequency electromagnetic fields, the spin-locking method is more suitable. Applications of the dynamic decoupling method to high RF frequencies remain few. <cit.> High frequency ac field detection at a frequency close to the driven transition ω_0 has been demonstrated. <cit.> This method up-converts the detection frequency to ω = ω_0 ±Ω for spin locking and ω = ω_0 ±π/τ for dynamic decoupling, <cit.> and enables detection of GHz microwaves at high sensitivity. A method has also been developed that uses two driving fields with Rabi frequencies Ω_1 and Ω_2 to detect the signal frequency ω = ω_0 + Ω_1 + Ω_2/2. <cit.> Nevertheless, the range of the detection frequency is limited by the amplitude of the driving field. For example, there remains a frequency region, Ω < ω < ω_0 - Ω or π/τ < ω < ω_0 - π/τ, inaccessible to the method of Joas et al. <cit.> Filling this gap is important in some applications where the driven frequency ω_0 cannot be tuned independently of the detection frequency ω. One such important application is NMR spectroscopy, where the ^1H NMR frequency is set by the external bias magnetic field and hence, by ω_0. For example, the bias magnetic field is set to 0.5 T to detect the 21 MHz ^1H NMR resonance at ω_0/(2π) = 11 GHz, where the above methods cannot be applied unless a sufficiently large Ω or a sufficiently short τ is prepared. Consequently, an extension of Ω and π/τ is highly desired. We previously showed that the Rabi frequency over a thin metal wire structure was locally enhanced compared to the Rabi frequency away from the metal wire due to the near-field enhancement of the microwave field in the vicinity of a thin wire. <cit.> This method enables local driving of the electron spins of NV centers and thus the application of microwave pulses with a short π pulse length with minimal unwanted heat generation by the microwave field. In this paper, we utilize the near-field enhancement of the microwave field to obtain a short π pulse length using an intense microwave field with a Rabi frequency of 40 MHz, and demonstrate RF field imaging at 19.23 MHz using the XY8 method, thus extending the detection range of the dynamical decoupling method to higher frequency. § EXPERIMENTAL A 300 nm-thick ^15N-doped, isotopically purified diamond ^12C layer was grown on a (100)-oriented ultra-pure diamond chip (Element Six Ltd., electronic grade) by microwave plasma assisted chemical vapor deposition (CVD) <cit.> from isotopically enriched ^12CH_4 (99.999% for ^12C), H_2 and isotopically enriched ^15N_2 mixed gas. The diamond chip has dimensions of 2.0 × 2.0 mm^2 and a thickness of 0.5 mm. Nitrogen gas was introduced to intentionally create NV centers. After the deposition of the ^15N-doped diamond layer, He^+ ions were implanted to enhance NV^- conversion.
Following the ion implantation step, the diamond chip was annealed in Ar/H_2 gas at 1000^∘C for 1.5 h, followed by annealing in air at 450^∘C for 24 h <cit.>. A Ti/Au wire with a width of 10 μm, as schematically shown in Fig. 1(a), was prepared by photolithography on a Si chip. Both ends of the wire were connected to microstrip lines with impedance matched to 50 Ω. A sinusoidal ac test signal generated by a function generator (SDG2082X, Siglent) was fed to one end, and the other end was terminated by a 51 Ω chip resistor. The baseband I/Q signals were generated by an arbitrary wave generator (33622A, Keysight) with a bandwidth of 120 MHz and a sampling frequency of 1 GS/s. The baseband I and Q pulses were mixed with a microwave from a local oscillator (SMC100A, Rohde & Schwarz) with a double-balanced mixer (IQ-1545, Marki). The amplified microwave pulses were fed to a microwave planar ring antenna <cit.> with a center hole of 1 mm diameter placed above the diamond chip. NV centers were excited by a pulsed laser diode at a wavelength of 520 nm. The photoluminescence from the NV centers was imaged by a home-made wide-field microscope equipped with a scientific CMOS camera (Zyla5.5, Andor) and a 100× objective lens with an NA of 0.73. A pulse sequencer (Pulse Blaster ESR Pro, Spincore) controlled the timings of the laser diode, the scientific CMOS camera, the arbitrary wave generator, and the RF function generator. A static magnetic field was applied by a pair of permanent magnets parallel to the [111] direction. More details of the experiment can be found elsewhere <cit.>. Cosine-square profile-shaped microwave pulses <cit.> were used in the measurements for the XY8 pulse sequence (Fig. 1(b)). The introduction of the shaped pulse has two advantages: first, the pulse width and pulse interval can be changed in steps much smaller than the sampling interval of the arbitrary wave generator. The high vertical resolution of the arbitrary wave generator enables us to interpolate the center position of a control microwave pulse with a timing resolution much smaller than its sampling interval <cit.>. A time increment of less than 1 ps is possible with the 16-bit vertical resolution and the 1 ns sampling interval (1 GSa/s) of our setup. The second advantage is that the required bandwidth of the baseband of the pulses is kept small, and the accuracy of the spin manipulation increases. In the case of pulse control using a microwave switch, the control microwave pulse is rectangular, which occupies a wide bandwidth. The second advantage is particularly important when using short control microwave pulses within the limited bandwidth of a setup.

§ RESULTS Figure 1(c) shows an optically detected magnetic resonance (ODMR) spectrum of the ^15N-doped diamond chip. Two structures with an energy splitting of 3.0 MHz are well separated due to the hyperfine coupling to the ^15N nuclear spin. The full widths at half maximum of the fitted Lorentz line profiles are 0.31 and 0.34 MHz at 2.7556 and 2.7586 GHz, respectively. A Hahn-echo decay profile of the ^15N-doped diamond chip is shown in Fig. 2(a). The obtained decay profile is well fitted by a double exponential function with T_2,fast = 33 μs and T_2,slow = 77 μs.
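Decay parameters of this kind are typically obtained by a nonlinear least-squares fit of a double-exponential model to the echo amplitudes. The short Python sketch below illustrates one way such a fit could be set up; the synthetic data, the equal-amplitude two-component model, and the initial guesses are illustrative assumptions and not the analysis code used for the data reported here.

import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a_fast, t2_fast, a_slow, t2_slow):
    # Double-exponential Hahn-echo decay model
    return a_fast * np.exp(-t / t2_fast) + a_slow * np.exp(-t / t2_slow)

# Synthetic echo amplitudes vs. free evolution time (microseconds);
# in practice these would be the measured Hahn-echo signals
t = np.linspace(1.0, 200.0, 60)
signal = double_exp(t, 0.5, 33.0, 0.5, 77.0) + 0.01 * np.random.randn(t.size)

p0 = (0.5, 20.0, 0.5, 100.0)  # initial guesses (a_fast, T2_fast, a_slow, T2_slow)
popt, pcov = curve_fit(double_exp, t, signal, p0=p0)
print("T2_fast = %.1f us, T2_slow = %.1f us" % (popt[1], popt[3]))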
Here, we note that the Hahn-echo signal decay profiles of the as-grown diamond sample and the sample after He ion implantation were well fitted by a single exponential function with T_2 = 35 μs and 18 μs, respectively, which excludes an inhomogeneous N dopant distribution as the source of the double exponential decay profile in Fig. 2(a). An NVH^- center is typically observed in CVD diamond chips, but the contribution of the NVH^- center to T_2 was found to be significantly smaller than that of N_s^0, the NV^0 center, or the NV^- center <cit.>. It was also found that the concentration of the NVH^- center before and after 1000^∘C annealing was nearly the same <cit.>. The NV density was estimated from the NV-NV spin interactions probed by instantaneous diffusion. The Hahn-echo signal is given by S(n_NV, θ, τ) ∝ exp[-A/4 γ_NV^2 n_NV sin^2(θ/2) τ], where n_NV is the density of the NV centers and θ is the flip angle of the central pulse of the Hahn-echo sequence <cit.>. This model is valid in the regime where the NV-NV interaction dominates spin decoherence. The obtained Hahn-echo signals as a function of sin^2(θ/2) are shown in Fig. 2(b) for the fast and slow components of the decay. The slow component was found to be proportional to sin^2(θ/2), and we obtained an NV density of 0.05 ppm. A linear relationship to sin^2(θ/2) was not obtained for the fast component. This indicates that the initial fast decay time is not explained by the NV-NV dipolar interactions and hence is probably due to other extrinsic factors. For single NVs, it is generally known that the Hahn-echo signal is well fitted by a single exponential function. For NV ensembles, variations in T_2 have been observed due to variations in the couplings to the spin bath <cit.>. The initial fast decay probably arises because the average behavior of many NVs is detected. The long T_2 after annealing at 1000^∘C was previously attributed essentially to a reduction of paramagnetic vacancy clusters near NV^- <cit.>. We suspect that inhomogeneity in the distribution of the paramagnetic vacancy clusters is a possible source of the double exponential decay profile. Fast decay depending on extrinsic effects has also been observed in an NMR measurement <cit.>. Figure 3(a) shows a spectrum of a sinusoidal ac test signal at f_s = 19.23 MHz fed to the Ti/Au metal wire structure, obtained in an area of 4 μm^2, as schematically shown in Fig. 1(a), chosen to sample the local maxima of the XY8 signals produced by the current flowing in the Ti/Au wire. The difference between the signals with and without the ac test signal is plotted. The XY8-16 sequence was used with a π pulse length of 12.5 ns. A peak due to the ac test signal was observed at τ = 1/(2f_s) = 26 ns. The time resolution was set to 100 ps, giving a 74 kHz frequency resolution. This may be compared with the frequency resolution of 2.5 MHz in the case where rectangular pulses with a time resolution of 2 ns directly from the pulse sequencer were used instead of the shaped pulses. The detected RF frequency was higher than the previous result of 9.746 969 MHz <cit.> due to the shorter π pulse length. We repeated the measurements 10 times to obtain XY8-16 signals similar to Fig. 3. The standard deviation (σ) at the peak position τ = 26.0 ns calculated from ten traces was 50 nT. An image map of the RF amplitude distribution produced by the ac current in the Ti/Au wire structure is shown in Fig. 3(b) at τ = 26.0 ns with a pixel size of about 1 μm × 1 μm.
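The numbers quoted above follow directly from the timing relation τ = 1/(2 f_s) of the dynamic decoupling sequence. The following back-of-the-envelope Python check evaluates this relation and the frequency increment corresponding to one 100 ps timing step, reproducing τ ≈ 26 ns and a frequency resolution of roughly 74 kHz; it is only an illustrative calculation, not part of the measurement software.

f_s = 19.23e6              # target RF frequency (Hz)
tau = 1.0 / (2.0 * f_s)    # inter-pulse spacing of the XY8 sequence
d_tau = 100e-12            # timing step of the shaped pulses (100 ps)

# Shifting tau by one timing step shifts the detection frequency by df
f_shifted = 1.0 / (2.0 * (tau + d_tau))
df = f_s - f_shifted
print("tau = %.1f ns, frequency step = %.0f kHz" % (tau * 1e9, df * 1e-3))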
The RF magnetic field projected onto the [111] direction is detected by the XY8 protocol we used <cit.>. The configuration of the Ti/Au wire and the crystallographic axes of the diamond chip is schematically shown in Fig. 1(a). The obtained image shows minima and maxima in the vicinity of the edges of the Ti/Au wire structure, where the ac magnetic field projected onto the [111] direction is large, while the detected RF amplitude is close to zero at the center of the wire structure, where the ac magnetic field is perpendicular to the [111] direction.

§ DISCUSSION In this paper, we have demonstrated RF imaging at 19.23 MHz by a multipulse dynamic decoupling method with high spectral resolution, using microwave pulses at a Rabi frequency of 40 MHz. The detected RF frequency corresponds to the ^1H NMR frequency at 0.45 T. Microwave pulses at a Rabi frequency of 95.3 MHz were previously achieved in our setup <cit.>. The maximum frequency of the RF electromagnetic field to be detected is currently limited by the 1 GS/s sampling frequency and the 120 MHz bandwidth of our arbitrary wave generator. The range of detectable RF frequencies is expected to be extended to about half of the available microwave Rabi frequency by using an arbitrary wave generator with a higher sampling rate and a wider bandwidth. The sensitivity may be further enhanced by increasing the number of π pulses and optimizing the diamond chip design. Recently, a sensing protocol has been developed in which the frequency resolution of a dynamical decoupling method is limited only by the stability of the synchronization clock <cit.>. This method is suitable for applications requiring particularly high frequency resolution and has also been applied to imaging <cit.>. This technique may allow us to detect frequency shifts in the ∼ppm range in the region of several tens of MHz for NMR. Pulse control errors due to amplitude or frequency variations of the microwave pulses may be reduced by introducing pulse shape optimization methods <cit.>. We consider that practical NMR with both high spatial resolution and high spectral resolution is possible by using a diamond NV center sensor.

This work was partly supported by a Grant-in-Aid for Scientific Research (Nos. 21H01009 and 22K18710) from the Japan Society for the Promotion of Science.

References
R. Zoughi, Microwave Non-Destructive Testing and Evaluation (Kluwer Academic Publishers, Dordrecht, 2000).
M. Chipaux, L. Toraille, C. Larat, L. Morvan, S. Pezzagna, J. Meijer, and T. Debuisschert, Wide bandwidth instantaneous radio frequency spectrum analyzer based on nitrogen vacancy centers in diamond, Appl. Phys. Lett. 107, 233502 (2015).
A. Horsley, P. Appel, J. Wolters, J. Achard, A. Tallaire, P. Maletinsky, and P. Treutlein, Microwave device characterization using a widefield diamond microscope, Phys. Rev. Appl. 10, 044039 (2018).
B. Yang, G. Du, Y. Dong, G. Liu, Z. Hu, and Y. Wang, Non-invasive imaging method of microwave near field based on solid state quantum sensing, IEEE Trans. Microwave Theory and Techniques 88, 2276 (2018).
G. Mariani, S. Nomoto, S. Kashiwaya, and S. Nomura, System for the remote control and imaging of MW fields for spin manipulation in NV centers in diamond, Sci. Rep. 10, 4813 (2020).
Z. Hu, B. Yang, M. Dong, Y. Liu, Y. Wang, and G. Du, Optical sensing of broadband rf magnetic field using a micrometer-sized diamond, IEEE Trans. Magnetics 55, 1 (2019).
K. Mizuno, H. Ishiwata, Y. Masuyama, T. Iwasaki, and M. Hatano, Simultaneous wide-field imaging of phase and magnitude of ac magnetic signal using diamond quantum magnetometry, Sci. Rep. 10, 11611 (2020).
S. Nomura, K. Kaida, H. Watanabe, and S. Kashiwaya, Near-field radio-frequency imaging by spin-locking with a nitrogen-vacancy spin sensor, J. Appl. Phys. 130, 024503 (2021).
M. Loretz, T. Rosskopf, and C. L. Degen, Radio-frequency magnetometry using a single electron spin, Phys. Rev. Lett. 110, 017602 (2013).
K. Ohashi, T. Rosskopf, H. Watanabe, M. Loretz, Y. Tao, R. Hauert, S. Tomizawa, T. Ishikawa, J. Ishi-Hayase, S. Shikata, C. L. Degen, and K. M. Itoh, Negatively charged nitrogen-vacancy centers in a 5 nm thin ^12C diamond film, Nano Lett. 13, 1530 (2017).
T. Rosskopf, A. Dussaux, K. Ohashi, M. Loretz, R. Schirhagl, H. Watanabe, S. Shikata, K. M. Itoh, and C. L. Degen, Investigation of surface magnetic noise by shallow spins in diamond, Phys. Rev. Lett. 112, 147602 (2014).
G. D. Fuchs, V. V. Dobrovitski, D. M. Toyli, F. J. Heremans, and D. D. Awschalom, Gigahertz dynamics of a strongly driven single quantum spin, Science 326, 1520 (2009).
L. Cywiński, R. M. Lutchyn, C. P. Nave, and S. Das Sarma, How to enhance dephasing time in superconducting qubits, Phys. Rev. B 77, 174509 (2008).
C. A. Ryan, J. S. Hodges, and D. G. Cory, Robust decoupling techniques to extend quantum coherence in diamond, Phys. Rev. Lett. 105, 200402 (2010).
G. de Lange, Z. H. Wang, D. Riste, V. V. Dobrovitski, and R. Hanson, Universal dynamical decoupling of a single solid-state spin from a spin bath, Science 330, 60 (2010).
T. Yuge, S. Sasaki, and Y. Hirayama, Measurement of the noise spectrum using a multiple-pulse sequence, Phys. Rev. Lett. 107, 170504 (2011).
Z.-H. Wang, G. de Lange, D. Ristè, R. Hanson, and V. V. Dobrovitski, Comparison of dynamical decoupling protocols for a nitrogen-vacancy center in diamond, Phys. Rev. B 85, 155204 (2012).
J. Zopes, K. Sasaki, K. S. Cujia, J. M. Boss, K. Chang, T. F. Segawa, K. M. Itoh, and C. L. Degen, High-resolution quantum sensing with shaped control pulses, Phys. Rev. Lett. 119, 260501 (2017).
T. Joas, A. M. Waeber, G. Braunbeck, and F. Reinhard, Quantum sensing of weak radio-frequency signals by pulsed Mollow absorption spectroscopy, Nat. Commun. 8, 964 (2017).
A. Stark, N. Aharon, T. Unden, D. Louzon, A. Huck, A. Retzker, U. L. Andersen, and F. Jelezko, Narrow-bandwidth sensing of high-frequency fields with continuous dynamical decoupling, Nat. Commun. 8, 1105 (2017).
S. Saijo, Y. Matsuzaki, S. Saito, T. Yamaguchi, I. Hanano, H. Watanabe, N. Mizuochi, and J. Ishi-Hayase, AC magnetic field sensing using continuous-wave optically detected magnetic resonance of nitrogen-vacancy centers in diamond, Appl. Phys. Lett. 113, 082405 (2018).
T. Tashima, H. Morishita, and N. Mizuochi, Experimental demonstration of two-photon magnetic resonances in a single-spin system of a solid, Phys. Rev. A 100, 023801 (2019).
T. Ishikawa, K.-M. C. Fu, C. Santori, V. M. Acosta, R. G. Beausoleil, H. Watanabe, S. Shikata, and K. M. Itoh, Optical and spin coherence properties of nitrogen-vacancy centers placed in a 100 nm thick isotopically purified diamond layer, Nano Lett. 12, 2083 (2012).
Z. Kazi, I. M. Shelby, H. Watanabe, K. M. Itoh, V. Shutthanandan, P. A. Wiggins, and K.-M. C. Fu, Wide-field dynamic magnetic microscopy using double-double quantum driving of a diamond defect ensemble, Phys. Rev. Appl. 15, 054032 (2021).
T. Yamamoto, T. Umeda, K. Watanabe, S. Onoda, M. L. Markham, D. J. Twitchen, B. Naydenov, L. P. McGuinness, T. Teraji, S. Koizumi, F. Dolde, H. Fedder, J. Honert, J. Wrachtrup, T. Ohshima, F. Jelezko, and J. Isoya, Extending spin coherence times of diamond qubits by high-temperature annealing, Phys. Rev. B 88, 075206 (2013).
K. Sasaki, Y. Monnai, S. Saijo, R. Fujita, H. Watanabe, J. Ishi-Hayase, K. M. Itoh, and E. Abe, Broadband, large-area microwave antenna for optically detected magnetic resonance of nitrogen-vacancy centers in diamond, Rev. Sci. Instrum. 87, 053904 (2016).
S. Nomura, in Hybrid Quantum Systems (Springer Nature, 2021), Chap. 2, p. 27.
C. Shinei, Y. Masuyama, M. Miyakawa, H. Abe, S. Ishii, S. Saiki, S. Onoda, T. Taniguchi, T. Ohshima, and T. Teraji, Nitrogen related paramagnetic defects: Decoherence source of ensemble of NV^- center, J. Appl. Phys. 132, 214402 (2022).
T. R. Eichhorn, C. A. McLellan, and A. C. Bleszynski Jayich, Optimizing the formation of depth-confined nitrogen vacancy center spin ensembles in diamond for quantum sensing, Phys. Rev. Mater. 3, 113802 (2019).
E. Bauch, S. Singh, J. Lee, C. A. Hart, J. M. Schloss, M. J. Turner, J. F. Barry, L. M. Pham, N. Bar-Gill, S. F. Yelin, and R. L. Walsworth, Decoherence of ensembles of nitrogen-vacancy centers in diamond, Phys. Rev. B 102, 134210 (2020).
S. Sasaki, T. Miura, K. Ikeda, M. Sakai, T. Sekikawa, M. Saito, T. Yuge, and Y. Hirayama, 1/f^2 spectra of decoherence noise on ^75As nuclear spins in bulk GaAs, Sci. Rep. 10, 10674 (2020).
L. Childress and J. McIntyre, Multifrequency spin resonance in diamond, Phys. Rev. A 82, 033839 (2010).
S. Schmitt, T. Gefen, F. M. Stürner, T. Unden, G. Wolff, C. Müller, J. Scheuer, B. Naydenov, M. Markham, S. Pezzagna, J. Meijer, I. Schwarz, M. Plenio, A. Retzker, L. P. McGuinness, and F. Jelezko, Submillihertz magnetic spectroscopy performed with a nanoscale quantum sensor, Science 356, 832 (2017).
J. M. Boss, K. S. Cujia, J. Zopes, and C. L. Degen, Quantum sensing with arbitrary frequency resolution, Science 356, 837 (2017).
D. R. Glenn, D. B. Bucher, J. Lee, M. D. Lukin, H. Park, and R. L. Walsworth, High-resolution nuclear magnetic resonance spectroscopy at the scale of single cells achieved by combining a magnetometer consisting of an ensemble of nitrogen-vacancy centres with a narrowband synchronized readout protocol, Nature 555, 351 (2018).
P. Rembold, N. Oshnik, M. M. Muller, S. Montangero, T. Calarco, and E. Neu, Introduction to quantum optimal control for quantum sensing with nitrogen-vacancy centers in diamond, AVS Quantum Science 2, 024701 (2020).
http://arxiv.org/abs/2307.01769v1
20230704151942
Infinite-thin shock layer solutions for stationary compressible conical flows and numerical results via Fourier spectral method
[ "Aifang Qu", "Xueying Su", "Hairong Yuan" ]
math.AP
[ "math.AP", "math-ph", "math.MP", "physics.flu-dyn" ]
[A. Qu]Department of Mathematics, Shanghai Normal University, Shanghai, 200234, China afqu@shnu.edu.cn [X. Su]Center for Partial Differential Equations, School of Mathematical Sciences, East China Normal University, Shanghai 200241, China suxueying789@163.com [H. Yuan]School of Mathematical Sciences and Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, East China Normal University, Shanghai 200241, China hryuan@math.ecnu.edu.cn Infinite-thin shock layer solutions for stationary compressible conical flows and numerical results via Fourier spectral method Hairong Yuan August 1, 2023 =============================================================================================================================== We consider the problem of uniform steady supersonic Euler flows passing a straight conical body with attack angles, and study Radon measure solutions describing the infinite-thin shock layers, particularly for the Chaplygin gas and limiting hypersonic flows. As a byproduct, we obtain the generalized Newton-Busemann pressure laws. To construct the Radon measure solutions containing weighted Dirac measures supported on the edge of the cone on the 2-sphere, we derive some highly singular and non-linear ordinary differential equations (ODE). A numerical algorithm based on the combination of Fourier spectral method and Newton's method is developed to solve the physically desired nonnegative and periodic solutions of the ODE. The numerical simulations for different attack angles exhibit proper theoretical properties and excellent accuracy, thus would be useful for engineering of hypersonic aerodynamics. § INTRODUCTION Supersonic conical flow is an important prototypical problem in mathematical gas dynamics due to its wide applications and tremendous challenges. It is observed that if the Mach number of the upcoming flow is large, shock layer (i.e., the region between the shock-front and the cone) will be extremely thin, and density would blow up to infinity. Qu and Yuan <cit.> studied supersonic flow of polytropic gas passing a straight cone. For the particular case without attack angle, they constructed a Radon measure solution to the hypersonic-limit problem, with density containing a Dirac measure supported on the surface of the cone, and then proved the celebrated Newton sine-squared law of hypersonic aerodynamics. For the general case, it remains open even for finding a numerical solution, since the resultant ODE are quite singular and highly nonlinear, while one needs to establish nonnegative periodic solutions. In this work, we consider infinite-thin shock layer solutions to steady compressible flow passing the cone with attack angles, especially for Chaplygin gas, and then propose a numerical method to solve the derived ODE via Fourier spectral method. For more background and introductions on conical flows, we refer to <cit.> and references therein. In the rest of this section, we explain the notations, formulate the problem of supersonic conical flows, and present the concept of Radon measure solution to the problem. Section <ref> is devoted to deriving the ODE governing the weights of the Dirac measures, which describe the strength of the infinite-thin shock layers. In Section <ref> we consider the special case of Chaplygin gas. In the final Section <ref>, we exhibit the Fourier spectral method of solving the nonnegative periodic solutions and present many numerical results demonstrating its efficiency. 
§.§ Equations of conical flow In this paper, we adopt the sphere coordinates as used in <cit.>, see Figure <ref>. The sphere coordinates of the standard Euclidean space ℝ^3 (x=(x^1,x^2,x^3)) is (θ,ϕ,r), given by x^1=rcosθ, x^2=rsinθcosϕ, x^3=rsinθsinϕ, θ∈[0,π], ϕ∈[-π,π]. The natural frame under sphere coordinates is denoted as (∂_θ,∂_ϕ,∂_r): ∂_θ=∂ x/∂θ, ∂_ϕ=∂ x/∂ϕ, ∂_r=∂ x/∂ r. The velocity field V∈ℝ^3 of the flow is written as V=u+w∂⃗_⃗r⃗=u^θ∂⃗_⃗θ⃗+u^ϕ∂⃗_⃗ϕ⃗+w∂⃗_⃗r⃗, with u^θ, u^ϕ, w the component along ∂_θ, ∂_ϕ and ∂_r respectively. Thus, the tangential component of the velocity is u=u^θ∂⃗_⃗θ⃗+u^ϕ∂⃗_⃗ϕ⃗, and w∂_r is the radial component. We set ρ be the density of mass of the gas, E the total ethalpy per unit mass of gas, and p the pressure. We also record here the divergence and gradient operator in ℝ^3 and on the unit sphere S^2⊂ℝ^3 under sphere coordinates as follows: Div V=1/√(G)(∂_θ(√(G) u^θ)+∂_ϕ(√(G) u^ϕ)+∂_r(√(G) w)), √(G)≐ r^2sinθ, div u=1/√(g)(∂_θ(√(g)u^θ)+∂_ϕ(√(g)u^ϕ), √(g)≐sinθ, Grad p=1/r^2∂_θ p·∂_θ+sin^2θ/r^2∂_ϕ p ·∂_ϕ+∂_rp·∂_r, grad p= ∂_θ p·∂_θ+sin^2θ∂_ϕ p ·∂_ϕ. Suppose that there is an infinite straight cone ℭ in ℝ^3 whose vertex is located at the original point O. The cone is symmetric with respect to the x^1-axis and its semi-vertex angle is θ_0. Uniform supersonic gas from the left half-space flow towards the cone with attack angle α_0, and the flow outside the cone is assumed to be governed by the non-isentropic compressible Euler system Div(ρ V)=0, Div(ρ V⊗ V)+Grad p=0, Div(ρ EV)=0, which represents the conservation of mass, momentum and energy respectively. State equation of the gas is given by p=p(ρ, E). After dimensionless scalings (cf. <cit.>), we may assume that the upcoming flow is U_0=(ρ_0=1,V_0∈ S^2, E_0), E_0≥12, and (<ref>) becomes p_0=p(ρ_0, E_0). On the surface of the cone, we prescribe the slip condition (V,N)=0, with N the unit outward normal vector on the surface of the cone. Given that the above problem is invariant under scaling x↦α x, ∀α>0, we may consider that the flow is defined on Ω≐ S^2/Con, where Con is the common part of the cone and S^2. Let C≐∂Ω be the edge of Ω, and 𝐧 the unit outward normal vector of Ω along C. By direct deduction with differential geometry as in <cit.>, which we omit here because of the similarity, one has the following compressible Euler system of conical flows on Ω div(ρ u)+2ρ w=0, div(ρ u E)+2ρ wE=0, div(ρ wu)+2ρ w^2-ρ|u|^2=0, div(ρ u⊗ u)+3ρ wu+grad p=0, which represents the conservation of mass, energy, radial momentum, and tangential momentum respectively. Slip condition (<ref>) then reduces to (u,𝐧)=0 on C. Our goal is to study the following problem. Problem A: find a solution to (<ref>)-(<ref>), (<ref>), (<ref>), and (<ref>). §.§ Radon measure solutions When Mach number of the upcoming flow is sufficiently large, it is observed that shock layer will be extremely thin and mass will concentrate on the surface of the cone, cf. <cit.>. Hence we consider a solution to Problem A in the class of Radon measures. Let m be a Radon measure on S^2. The pairing between m and a test function ψ(θ,ϕ)∈ C(S^2), the set of continuous functions on S^2, is given by ⟨ m, ψ⟩=∫_S^2ψ(θ,ϕ) dm(θ,ϕ). For example, let C(s) be a Lipschitz curve on S^2, with arc-length parameter s∈ [0,L].  Then W(s)δ_R, the Dirac measure supported on C with weight W(s)∈ L^1([0,L]), is defined by ⟨ W(s)δ_C, ψ⟩=∫_0^L W(s) ψ|_C ds. We also denote ℋ^2 as the standard Hausdorff measure on S^2. 
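To make the pairing of a weighted Dirac measure on a curve with a test function concrete, the short Python sketch below approximates ⟨W δ_C, ψ⟩ for the circle C = {θ = θ_0} on S^2 by a simple arc-length quadrature; the particular weight W, test function ψ, and angle θ_0 are arbitrary illustrations and not taken from the analysis of this paper.

import numpy as np

theta0 = np.pi / 6                       # semi-vertex angle of the cone
phi = np.linspace(-np.pi, np.pi, 2001)   # parameter along the edge C
ds = np.sin(theta0) * np.gradient(phi)   # arc-length element on C: ds = sin(theta0) dphi

W = 1.0 + 0.3 * np.cos(phi)              # an illustrative nonnegative weight W(s)
psi = np.cos(theta0) * np.sin(phi)**2    # an illustrative test function restricted to C

pairing = np.sum(W * psi * ds)           # approximates <W delta_C, psi> = int_0^L W psi|_C ds
print(pairing)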
Let m_a,  n_a, m_e, n_e, m_r,  n_r,  m_t,  n_t, ϱ, ℘ be Radon measures on Ω, and W_C(s) a positive integrable function for s∈[0,L]. Suppose that (1) for any ψ∈ C^1(S^2) and C^1 vector field v, there hold ⟨ m_a,∇ψ⟩=2⟨ n_a,∇ψ⟩, ⟨ m_e,∇ψ⟩=2⟨ n_e,∇ψ⟩, ⟨ m_r,∇ψ⟩=2⟨ n_r,∇ψ⟩-⟨ n_t,∇ψ⟩, ⟨ m_t,Dv⟩+⟨℘,div v⟩=3⟨ m_r,v⟩+⟨ W_C𝐧δ_C,v⟩; (2) ϱ, ℘ are non-negative measures, m_a, m_r, m_e, n_e, m_t, n_a, n_r, n_t, ℘ are absolute-continuous with respect to ϱ, and there also exist ϱ-a.e. functions w, E and vector field u such that the Radon-Nikodym derivatives satisfy w=dm_r/dϱ/dm_a/dϱ=dn_r/dϱ/dn_a/dϱ=dn_a/dϱ, E=dm_e/dϱ/dm_a/dϱ=dn_e/dϱ/dn_a/dϱ, u=dm_a/dϱ,  |u|^2=dn_t/dϱ,  u⊗ u=dm_t/dϱ; (3) if ϱ, ℘≪ℋ^2, and their Radon-Nikodym derivatives are ρ=dϱ/dℋ^2, p=d℘/dℋ^2, then (<ref>) holds ℋ^2-a.e., and classical entropy condition is valid for discontinuities of functions ρ, u, w, E in this case. Then we call (ϱ, u, w, E) a Radon measure solution to Problem A. We remark that W_c𝐧 represents the (scaled) force of lift/drag acting by the flow on the conical body. From mathematical point of view, the above definition works for gas with general state function (<ref>), not necessarily restricted to the special Chaplygin gas or limiting hypersonic flows (pressureless Euler flows). § RADON MEASURE SOLUTIONS OF INFINITE-THIN SHOCK LAYERS We noticed that in the hypersonic aerodynamics, it is known that if the Mach number of the upcoming flow is large enough, the shock layer would be quite thin and mass concentrate on the surface of the cone. We wish to construct Radon measure solutions with such structure, i.e., assuming that a Radon measure solution to Problem A is given by m_a=ρ_0u_0𝕀_Ωdℋ^2+W_a(s)δ_C,             n_a=ρ_0w_0𝕀_Ωdℋ^2+w_a(s)δ_C, m_e=ρ_0u_0E_0𝕀_Ωdℋ^2+W_e(s)δ_C,             n_e=ρ_0w_0E_0𝕀_Ωdℋ^2+w_e(s)δ_C, m_r=ρ_0u_0w_0𝕀_Ωdℋ^2+W_r(s)δ_C,          n_r=ρ_0w_0^2𝕀_Ωdℋ^2+w_e(s)δ_C, m_t=ρ_0u_0⊗ρ_0𝕀_Ωdℋ^2+W_t(s)δ_C,        n_t=ρ_0|u_0|^2𝕀_Ωdℋ^2+w_t(s)δ_C, ϱ=ρ_0𝕀_Ωdℋ^2+w_ρδ_C,                       ℘=p(ρ_0, E_0)dℋ^2. In the above, we have set 𝕀_Ω(θ,ϕ) to be the indicator function of Ω, i.e., 𝕀_Ω=1 if (θ,ϕ)∈Ω, and 𝕀_Ω=0 otherwise. However, observing that we could always construct Radon measure solutions with such structure mathematically, even if the Mach number of the upcoming supersonic flow is not large. (The uniqueness of solutions is a quite delicate issue that we would not touch in this work.) In the following, we denote 𝐭(s) and 𝐧(s) as the unit tangential and normal vector of Ω along C respectively. As in <cit.>, substituting (<ref>) into (<ref>), and supposing W_a(s)=W_a^n(s)𝐧(s)+W_a^t(s)𝐭(s), one has W_a^n(s)=0, (W_a^t(s))'+2w_a(s)=ρ_0(u,𝐧). Similarly, substituting (<ref>) into (<ref>), one comes to W_e^n(s)=0, (W_e^t(s))'+2w_e(s)=ρ_0E_0(u,𝐧), where W_e(s)=W_e^n(s)𝐧(s)+W_e^t(s)𝐭(s). Substituting (<ref>) into (<ref>), while by denoting W_r(s)=W_r^n(s)𝐧(s)+W_r^t(s)𝐭(s), one has W_r^n(s)=0, (W_r^t(s))'+2w_r(s)-w_t(s)=ρ_0w_0(u,𝐧). Substituting (<ref>) and (<ref>) into (<ref>), and with the decomposition W_t = W_t^nn𝐧⊗𝐧 + W_t^nt𝐧⊗𝐭 + W_t^tn𝐭⊗𝐧 + W_t^tt𝐭⊗𝐭, one obtains W_t^nn=W_t^nt=W_t^tn=0, W_C = (W_t,Dn) + ρ_0(u_0, 𝐧)^2+p(ρ_0,E_0), -(W_t,Dt)+(W_t^tt)+3W_r^t = ρ_0(u_0, 𝐧)(u_0,𝐭). By Definition 1.1, the unknowns w_ρ(s), w(s), u(s) are determined by u= W_a(s)/w_ρ(s), |u(s)|^2 = w_t(s)/w_ρ(s), u(s)⊗ u(s) = W_t(s)/w_ρ(s), w = W_r(s)/.W_a(s) = w_r(s)/w_a(s) = w_a(s)/w_ρ(s). Here, /. means W_r(s) and W_a(s) are linearly dependent. From (<ref>), and writing u(s)=u^n(s)𝐧+u^t(s)𝐭, one has u^n(s)=0, u^t(s)=W_a^t(s)/w_ρ(s). 
From (<ref>), one gets u^t(s)^2=w_t(s)/w_ρ(s), and therefore with (<ref>), W_t^tt(s)=w_ρ(u^t)^2. Thus, (<ref>) indicates that W_r^t = w_ρ u^tw, w_a =w_ρ w, w_r = w_ρ w^2. Therefore from (<ref>), (<ref>) and (<ref>), we have the following equations (w_ρ(u^t)^2)'+3w_ρ u^tw=ρ_0(u_0, 𝐧)(u_0, t)≜ a(s), (w_ρ u^tw)'-w_ρ(u^t)^2+2w_ρ w^2=ρ_0w_0(u_0, 𝐧)≜ b(s), (w_ρ u^t)'+2w_ρ w=ρ_0(u_0,𝐧), and W_C=w_ρ(u^t)^2(𝐭,D_𝐭𝐧) + ρ_0(u_0, 𝐧)^2+p(ρ_0, E_0). Hence we have the following theorem on conical flows with infinite-thin shock layers. Suppose that w_ρ(s),u^t(s),w(s) are solutions to equations (<ref>)-(<ref>). Then Problem A admits a Radon measure solution given by ϱ=ρ_0𝕀_Ωdℋ^2+w_ρ(s)δ_C, u(s)=u_0𝕀_Ω+u^t(s)𝐭𝕀_C, w(s)=w_0𝕀_Ω+w(s)𝕀_C, E=E_0𝕀_Ω+E_0𝕀_C, provided that W_C=w_ρ(u^t)^2(𝐭,D_𝐭𝐧) + ρ_0(u_0, 𝐧)^2+p(ρ_0, E_0)>0. We now derive a second order ODE from (<ref>)-(<ref>). As in <cit.>, by setting f(s)≐ w_ρ (u^t)^2, h(s)≐ w_ρ u^tw, y(s)≐ w_ρ u^t, (<ref>)-(<ref>) read f'+3h=a(s), h'-f+2h^2/f=b(s), y'+2hy/f=ρ_0(u_0,𝐧). We notice that (<ref>) is already decoupled, while (<ref>) implies that h(s)=(a(s)-f'(s))/3. Then (<ref>) becomes ff”-2/3f'^2+4/3a(s)f'+3f^2+(3b(s)-a'(s))f=2/3a(s)^2. Changing to ϕ-variable, ϕ∈[-π,π], with ds/dϕ=sinθ_0, and by the validity of (also see <cit.>) ρ_0=1, (u_0, 𝐧)=cosα_0 sinθ_0-sinα_0 cosθ_0 cosϕ, (u_0, 𝐭)=sinα_0 sinϕ, w_0=cosα_0 cosθ_0+sinα_0 sinθ_0cosϕ, we finally come to the ODE ff̈-2/3ḟ^2+ (a_1sinϕ+ a_2sin2ϕ)ḟ+a_3f^2 +(a_4+a_5cosϕ+a_6cos2ϕ)f=3/8(a_1sinϕ+a_2sin2ϕ)^2. In the equation, ḟ represents df(ϕ)/dϕ, and a_1=-2/3sin^2θ_0sin(2α_0), a_2=1/3sin(2θ_0)sin^2α_0,  a_3=3sin^2θ_0, a_4=3/4sin^2θ_0sin(2θ_0)(3cos^2α_0-1),  a_5=1/2sin^2θ_0sin(2α_0)(1-3cos(2θ_0)), a_6=-sinθ_0cosθ_0sin^2α_0(1+2/3sin^2θ_0). Besides, if f(ϕ) is a solution to equation (<ref>), then the lift/drag on the cone is given by W_C(ϕ)=(cosα_0sinθ_0-sinα_0cosθ_0cosϕ)^2-f(ϕ)θ_0+p(ρ_0, E_0), ϕ∈[-π,π]. We call (<ref>) as the generalized Newton-Busemann pressure law for conical flow with attack angle α_0. We require that W_C(ϕ)>0 to guarantee that the mass indeed concentrates on the surface of the cone. Observing that f stands for the double of the tangential kinetic energy of the concentrated gas. Due to physical considerations and symmetry with respect to the x^1-axis, f should be a non-negative even periodic C^2 function, with period 2π, and f(-π)=f(0)=f(π)=0, ḟ(-π)=ḟ(0)=ḟ(π)=0. When α_0≠0, finding an analytical solution to (<ref>) is an open problem. It is also not straightforward to construct a numerical periodic solution, since it is non-autonomous, with non-homogeneous terms, and singular at ϕ=0, ±π, as f(ϕ) takes value 0 when ϕ=0 and ±π. Therefore, we will propose a numerical method to solve the ODE in Section <ref> (cf. <cit.>). § CHAPLYGIN CONICAL FLOWS We now consider the case when the upcoming flow is Chaplygin gas, with state function p=-A/ρ, A>0 and assuming E=E_0 being a given constant in the whole flow field. Thus, p_0=p(ρ_0, E_0)=-1/ρ_0M_∞^2, with M_∞ the Mach number of the upcoming flow. Recall that ρ_0=1 by the non-dimensional scalings. §.§ Conical Chaplygin flow without attack angle We first deal with the particular case that α_0=0. By Theorem 2.1, one reaches the following corollary. When Mach number of the upcoming Chaplygin gas M_∞>1/sinθ_0, Problem A (for Chaplygin gas) has a Radon measure solution ϱ=ρ_0𝕀_Ωdℋ^2+1/2tanθ_0δ_C, w=cosθ_0𝕀_Ω+cosθ_0𝕀_C, u=u_0𝕀_Ω, E=E_0𝕀_Ω+E_0𝕀_C, and the pressure on C is given by W_C=sin^2θ_0-1/M_∞^2. 
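Since the coefficients a_1, …, a_6 depend only on the semi-vertex angle θ_0 and the attack angle α_0, they can be evaluated once and reused throughout the numerics. The following Python helper simply transcribes the formulas stated above; the angles used in the example call are arbitrary.

import numpy as np

def ode_coefficients(theta0, alpha0):
    # Coefficients a_1..a_6 of the second-order ODE for f(phi), as given above
    a1 = -2.0 / 3.0 * np.sin(theta0)**2 * np.sin(2 * alpha0)
    a2 = 1.0 / 3.0 * np.sin(2 * theta0) * np.sin(alpha0)**2
    a3 = 3.0 * np.sin(theta0)**2
    a4 = 3.0 / 4.0 * np.sin(theta0)**2 * np.sin(2 * theta0) * (3 * np.cos(alpha0)**2 - 1)
    a5 = 0.5 * np.sin(theta0)**2 * np.sin(2 * alpha0) * (1 - 3 * np.cos(2 * theta0))
    a6 = -np.sin(theta0) * np.cos(theta0) * np.sin(alpha0)**2 * (1 + 2.0 / 3.0 * np.sin(theta0)**2)
    return a1, a2, a3, a4, a5, a6

print(ode_coefficients(np.pi / 6, np.pi / 36))  # e.g. theta0 = pi/6, alpha0 = pi/36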
When α_0=0, by (<ref>)-(<ref>), one obtains the following equations (w_ρ(u^t)^2)'+3w_ρ u^tw=0, (w_ρ u^tw)'-w_ρ(u^t)^2+2w_ρ w^2=cosθ_0sinθ_0, (w_ρ u^t)'+2w_ρ w=sinθ_0. They yield the solution w_ρ(s)=1/2tanθ_0, u^t(s)=0, w(s)=cosθ_0. The requirement W_C=sin^2θ_0-1/ρ_0M_∞^2>0 justifies the assumption M_∞ >√(1/ρ_0sin^2θ_0)=1/sinθ_0. For limiting hypersonic flow, namely M_∞=∞, we have W_C=sin^2θ_0, which is the classical Newton sine-squared law for cones (cf. <cit.>). §.§ General case with attack angle We have the following corollary directly from Theorem 2.1. Let w_ρ(s),u^t(s),w(s) be the solutions to equations (<ref>)-(<ref>), and there also holds M_∞>sup_ϕ∈[-π,π](1/(cosα_0sinθ_0-sinα_0cosθ_0cosϕ)^2-w_ρ (u^t)^2(s(ϕ))θ_0)^1/2, with M_∞ the Mach number of the upcoming Chaplygin flow. Then Problem A admits a measure solution given by ϱ=ρ_0𝕀_Ωdℋ^2+w_ρ(s)δ_C, u(s)=u_0𝕀_Ω+u^t(s)𝐭𝕀_C, w(s)=w_0𝕀_Ω+w(s)𝕀_C, E=E_0𝕀_Ω+E_0𝕀_C. § NUMERICAL RESULTS WITH FOURIER SPECTRAL METHOD §.§ Non-linear system and spectral method Our goal is to find a C^2 periodic solution to equation (<ref>), which should be an even nonnegative function, satisfying f(0)=f(π)=0,  ḟ(0)=ḟ(π)=0 owing to physical considerations. Since the solution is an even periodic function, we use here the Fourier series of f(ϕ): f(ϕ)=∑_n=0^∞ b_ncos(nϕ), b_n∈ℝ. The first and second derivative of f(ϕ) are ḟ(ϕ)=-∑_n=1^∞ nb_nsin(nϕ), f̈(ϕ)=-∑_n=1^∞ n^2b_ncos(nϕ). Thus ḟ^2=-1/2∑_n=1^∞∑_k=1^∞ nkb_nb_k(cos(n+k)ϕ-cos(n-k)ϕ), ff̈=-1/2∑_n=0^∞∑_k=1^∞ k^2b_nb_k(cos(n+k)ϕ+cos(n-k)ϕ). Set b_-1=0. Then through direct calculations, (<ref>) becomes ∑_k=1^∞( -b_0k^2b_k+a_4b_k+1/2a_5b_k-1+1/2a_6b_k-2+1/2a_5b_k+1+1/2a_6b_k+2+2a_3b_0b_k +1/2a_1(k-1)b_k-1 +1/2a_2(k-2)b_k-2-1/2a_1(k+1)b_k+1-1/2a_2(k+2)b_k+2)cos(kϕ) +∑_n=1^∞∑_k=1^∞((1/3n-1/2k)k+1/2a_3)b_nb_kcos(n+k)ϕ_A + ∑_n=1^∞∑_k=1^∞(1/2a_3-(1/3n+1/2k)k)b_nb_kcos(n-k)ϕ_B +(3/8a_1a_2+1/2a_5b_0+1/2a_6b_1-1/2a_2b_1)cosϕ+(1/2a_6b_0+3/16a_1^2)cos2ϕ+3/8a_1a_2cos3ϕ +3/16a_2^2cos4ϕ +a_3b_0^2-1/2a_1b_1-a_2b_2+1/2a_6b_2+1/2a_5b_1+a_4b_0-3/16(a_1^2+a_2^2)=0. One also computes A and B as A =∑_m=2^∞cos(mϕ)(∑_k=1^m-1(1/3(m-k)-1/2k)k+1/2a_3)b_kb_m =∑_n=2^∞cos(nϕ)(∑_k=1^n-1(1/2a_3+k(1/3n-5/6k))b_kb_n-k) =∑_l=2^∞cos(lϕ)(∑_k=1^l-1(1/2a_3+k(1/3l-5/6k))b_kb_l-k), B =∑_k=1^∞(1/2a_3-5/6k^2)b_k^2+∑_l=1^∞∑_k=1^l(1/2a_3-(1/3(k+l)+1/2k)k)b_k+lb_k)cos(lϕ) +∑_l=1^∞∑_k=l+1^∞((1/2a_3-(1/3(k+l)+1/2k)k)b_k+lb_k+(1/2a_3-(1/3(k-l)+1/2k)k)b_k-lb_k)cos(lϕ) =∑_n=1^∞(1/2a_3-5/6n^2)b_n^2 +∑_l=1^∞(∑_k=1^∞(1/2a_3-(1/3l+5/6k)k)b_k+lb_k+∑_k=l+1^∞(1/2a_3-(5/6k-1/3l)k)b_k-lb_k)cos(lϕ). Reorganizing equation (<ref>), one has the following: A_0+∑_l=1^∞ A_lcos(lϕ)+∑_l=2^∞ B_lcos(lϕ)+C_1cosϕ+C_2cos(2ϕ)+C_3cos(3ϕ)+C_4cos(4ϕ)=0, in which A_0=∑_n=1^∞(1/2a_3-5/6n^2)b_n^2+a_3b_0^2-1/2a_1b_1-a_2b_2+1/2a_6b_2+1/2a_5b_1+a_4b_0-3/16(a_1^2+a_2^2), A_l=∑_k=1^∞(1/2a_3-(1/3l+5/6k)k)b_k+lb_k+∑_k=l+1^∞(1/2a_3-(5/6k-1/3l)k)b_k-lb_k          - b_0l^2b_l+a_4b_l+1/2a_5b_l-1+1/2a_6b_l-2+1/2a_5b_l+1+1/2a_6b_l+2+2a_3b_0b_l          +1/2a_1(l-1)b_l-1 +1/2a_2(l-2)b_l-2-1/2a_1(l+1)b_l+1-1/2a_2(l+2)b_l+2, B_l=∑_k=1^l-1(1/2a_3+(1/3l-5/6k)k)b_kb_l-k, C_1=-3/8a_1a_2+1/2a_5b_0+1/2a_6b_1-1/2a_2b_1, C_2=1/2a_6b_0+3/16a_1^2, C_3=3/8a_1a_2, C_4=3/16a_2^2. Now we do truncation for f(ϕ) with term N, i.e., we set b_N+1=b_N+2=⋯=0. Then (<ref>) yields that { A_0=0, A_1+C_1=0, A_i+B_i+C_i=0, i=2,3,4, A_j+B_j=0, j=5,…,N. . We denote this system as F(b)=(F_0(b),F_1(b),…,F_N(b))=0, b=(b_0,b_1,…,b_N). We then need to solve this non-linear system numerically. 
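One way to evaluate the residual vector F(b) = (F_0, …, F_N) without transcribing the closed-form expressions for A_l, B_l and C_l is to insert the truncated cosine series into the ODE on a fine φ-grid and project the pointwise residual back onto the cosine modes 0, …, N; up to quadrature error this reproduces the same Galerkin system. The Python sketch below follows this route, reusing the ode_coefficients helper from the earlier sketch; it is only an illustration of the assembly step, not the implementation used for the reported results.

import numpy as np

def residual_coeffs(b, theta0, alpha0, n_grid=4096):
    # b = (b_0, ..., b_N): cosine coefficients of the truncated series f(phi)
    N = len(b) - 1
    phi = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    n = np.arange(N + 1)[:, None]
    cos_n = np.cos(n * phi)                          # shape (N+1, n_grid)
    f = b @ cos_n                                    # f(phi)
    fd = -(b * np.arange(N + 1)) @ np.sin(n * phi)   # f'(phi)
    fdd = -(b * np.arange(N + 1)**2) @ cos_n         # f''(phi)

    a1, a2, a3, a4, a5, a6 = ode_coefficients(theta0, alpha0)
    lhs = (f * fdd - 2.0 / 3.0 * fd**2
           + (a1 * np.sin(phi) + a2 * np.sin(2 * phi)) * fd
           + a3 * f**2
           + (a4 + a5 * np.cos(phi) + a6 * np.cos(2 * phi)) * f)
    rhs = 3.0 / 8.0 * (a1 * np.sin(phi) + a2 * np.sin(2 * phi))**2
    r = lhs - rhs
    # Project the pointwise residual onto cosine modes 0..N (Galerkin conditions)
    return (cos_n @ r) / n_grid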
§.§ Newton's method Newton's method is a classical numerical algorithm to solve non-linear equations, say f(x)=0. It is based on Taylor's formula of the function f(x)=f(x_0)+f'(x_0)(x-x_0)+1/2f”(x_0)(x-x_0)^2+R_n(x-x_0)^2, and its idea is regarding the solution to the linear equation f(x_0)+f'(x_0)(x_1-x_0)=0, namely x_1=x_0-f(x_0)/f'(x_0) as an approximate solution to the non-linear equation. Then we repeat this procedure from point x_1 and get x_2, from x_n-1 and get x_n, until |x_n-x_n-1| is tolerably small. For the system F(x)=0, we only need to change the derivative f'(x) into the Jacobian matrix DF, DF=(∂ F_i/∂ b_j)_i,j=0,…,N, and the rest is the same. See Algorithm <ref>. §.§ Numerical results for N=5,6,7,8,9,10 In the Figures <ref> and <ref>, we show numerical results for f, f' and W_C, with the semi-vertex angle of the cone being θ_0=π/6, and various attack angles α_0 up to π6 for N=5 and 10. In each figure, the graph on the left is the numerical solution of f(ϕ) and its derivative f'(ϕ), and on the right is the pressure W_C(ϕ). We notice that as shown in (<ref>), W_C(ϕ) is influenced by the upstream data p(ρ_0, E_0). In these numerical experiments, for simplicity, we had assumed that p(ρ_0, E_0)=0 (taking hypersonic limit for the polytropic gases or Chaplygin gas, i.e., M_∞=∞). In Figures <ref> and <ref>, it is seen that when α_0 takes small values (say α_0≤π/9), f is indeed a periodic function with f≥0, f(-π)=f(0)=f(π)=0, and ḟ(-π)=ḟ(0)=ḟ(π)=0. We also see that W_C(ϕ)>0, which is a necessary condition for infinite-thin shock layer solution. However, when α_0 is large, say α_0≥π/6, mistakes occur on f, ḟ and W_C (see pictures (f) in Figures <ref> and <ref>), indicating that solutions with mass concentration on the surface is not suitable for large attack angles. We observe from these two pictures that for the same attack angle α_0, the curves for N=5 and 10 are virtually identical, implying that only tiny error exists between diverse values of N. To demonstrate this, we put the curves for N from 5 to 10 in one figure and get Figure <ref> . We see that it looks only one curve for each f,ḟ and W_C appearing in the figure, showing that the results for different N nearly coincide with each other. To further analyze the error, see Section 4.4. In Figures <ref> and <ref>, we carefully observe that the peak value of ḟ(ϕ) on [-π,-π/2] is a bit larger than that on [0, π/2]. This is because the particles on the lower semi-cone (which is the windward side) are more affected by the upcoming flow than the upper semi-cone (leeward side), so the kinetic energy changes faster. §.§ Error Estimates Figure <ref> shows the error of system (<ref>) with different truncation terms N. The error E_N(ϕ) is obtained by substituting A_0,…, A_N, B_2,…,B_N,C_1,C_2,C_3,C_4 into the left-hand side of (<ref>) for each N=5, 6, … Figure <ref> (a) (b) shows the corresponding results for N≥5, from which we observe that when N=5, the error is under 2×10^-7, while when N≥6, the error is comparatively much smaller, fluctuates four orders smaller (4×10^-11). For N≥6 and up to 10, the curves almost coincide with each other, implying that there is not much difference between each such N. 
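A bare-bones version of the Newton iteration described above, with the Jacobian DF approximated by finite differences, could look as follows; the damping factor, tolerances, and initial guess are ad hoc choices for illustration (whether the iteration converges to the physically admissible nonnegative solution depends on the initial guess), and the sketch reuses the residual_coeffs helper given earlier.

import numpy as np

def newton_solve(b0, theta0, alpha0, tol=1e-12, max_iter=50):
    b = np.array(b0, dtype=float)
    for _ in range(max_iter):
        F = residual_coeffs(b, theta0, alpha0)
        if np.max(np.abs(F)) < tol:
            break
        eps = 1e-7
        J = np.empty((b.size, b.size))     # finite-difference Jacobian DF_ij = dF_i/db_j
        for j in range(b.size):
            bp = b.copy()
            bp[j] += eps
            J[:, j] = (residual_coeffs(bp, theta0, alpha0) - F) / eps
        b -= 0.5 * np.linalg.solve(J, F)   # damped Newton step
    return b

# Example: N = 5, starting from a small ansatz with b_1 > 0 and the rest zero
b_init = np.zeros(6); b_init[1] = 1e-3
b_sol = newton_solve(b_init, np.pi / 6, np.pi / 36)

The converged coefficients can then be substituted back into the ODE on a dense grid to obtain the pointwise residual error E_N(φ) discussed above.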
In order to see whether the coupling of Fourier spectral method and Newton's method is a good treatment to system (<ref>), we introduce a concept of quasi-L^1 norm of the error, defined as the sum of max{E_N(ϕ)^+: ϕ∈[-π,π]} and max{E_N(ϕ)^-: ϕ∈[-π,π]} with different N, where E_N^+ is the positive part of E_N and E_N^- is the negative part of E_N. The result is shown in Figure <ref> (c). We see that as N takes larger value, the error becomes smaller, indicating a good astringency of our numerical method. In view of the fact that the order of error (10^-11) is sufficiently small compared to the order of f (10^-4) , we will not distinguish the value of N in the following series of numerical results. §.§ Results for various semi-vertex angles Figure <ref> shows the results for different semi-vertex angles θ_0 for one attack angle α_0 when α_0=π/36 and π/18. An obvious property of monotonicity is observed from the figure. For the same attack angle α_0, f(ϕ) takes larger value when θ_0 increases, and so is W_C(ϕ). §.§ Results for various attack angles and extremum for W_C. Figure <ref> is the results of f and W_C for fixed θ_0=π/6, and various α_0. We also acquire monotonicity for f. It is seen that f(ϕ) increases when α_0 takes larger value. For W_C, however, W_C(ϕ) is monotonically increasing with respect to α_0 when ϕ is near ±π, while it is monotonically decreasing when ϕ is near 0. To reveal the variation rule of W_C, we extract the extremum of W_C with each α_0 and get Figure <ref>. We observed that as α_0 increases, max W_C increases while min W_C decreases. It is compatible with the physical fact that as α_0 increases (in a reasonable range), windward side (ϕ=±π) bears more pressure while leeward side (ϕ=0) bears less. The blue curve in Figure <ref> is the graph of sin^2(π/6+α_0) and the red curve is sin^2(π/6-α_0), where α_0 is the variable. We see that max W_C and min W_C exactly fall on these two curves, and this is consistent with the theoretical result (<ref>). §.§ Results for u^t, w and w_ρ Using Fourier spectral method, f and ḟ is obtained numerically, and h is also known by (<ref>). Then by solving (<ref>) with, for example, Runge-Kutta method, we get the numerical results for y. Therefore, according to (<ref>), u^t, w and w_ρ can also be solved numerically (u^t=f/y,  w=h/y, w_ρ=y^2/f). In Figure <ref>, we show some results for different attack angle α_0 with semi-vertex angle θ_0=π/6. In (b) and (c), singularity arises at ϕ=0, ±π because y=0 at these three points. We attribute this error to the numerical method because w and w_ρ should be C^1 functions over [-π,  π]. Figure <ref> (a) indicates that u^t is not equal to the component of the upcoming stream along 𝐭, which, by <cit.>, is (u_0,𝐭)=sinα_0sinϕ. This is because the particles are affected not only by the cone, but also by the constant upcoming flow hitting on the cone. Figure <ref> (c) shows that w_ρ is monotone with respect to α_0 near ϕ=±π and 0 respectively. When ϕ is near ±π, w_ρ(ϕ) is monotonically increasing, since more particles (per unit square) will hit the cone as the attack angle increases. When ϕ is near 0, w_ρ(ϕ) is monotonically decreasing, because the angle between the upstream and the generatrix ϕ=0 gets smaller as α_0 increases, so there will be less particles hitting on this position. §.§ Trajectories on the cone Suppose the trajectory of particles on the cone is given by z=(θ_0, ϕ, r(ϕ)). 
For the time variable t, there holds dz/dt = (dz/dϕ)(dϕ/dt) = (dθ_0/dt, dϕ/dt, dr(ϕ)/dt) = (0, u^t, w); therefore dz/dϕ = (0, 1, w/u^t). So for some initial point z_0 = (θ_0, ϕ_0, r(ϕ_0)), its trajectory on the cone is given by z(ϕ) = (θ_0, ϕ, r(ϕ_0) + ∫_ϕ_0^ϕ (w/u^t) dϕ). In Figure <ref>, we present three trajectories (z^1 in blue, z^2 in red, z^3 in yellow) with different initial positions on the west semi-cone (the half cone containing (0,0,-1)), namely z_0^1 = (θ_0, -0.999π, 10), z_0^2 = (θ_0, -3π/4, 10), z_0^3 = (θ_0, -π/2, 10). Figure 10(b), (c) and (d) are views along the x^3-axis, x^2-axis and x^1-axis, respectively. We learn from the figures that the trajectories of particles with all initial positions, except for ϕ_0 = -π, will gradually rotate on the cone and flow towards ϕ = 0. Due to the symmetry about the x^1-axis, the same holds for the east semi-cone.

§ ACKNOWLEDGEMENTS This work is supported by the National Natural Science Foundation of China under Grants No. 11871218, No. 12071298, and in part by the Science and Technology Commission of Shanghai Municipality (No. 21JC1402500 and No. 22DZ2229014).

Anderson, John D. Jr.: Modern Compressible Flow with Historical Perspective, Third edition, McGraw-Hill Education, 2003.
Chen, S. and Li, D.: Conical shock waves in supersonic flow, Journal of Differential Equations, 269 (2020) 595-611.
Collatz, L.: The Numerical Treatment of Differential Equations, Springer-Verlag, Berlin, 1960.
Courant, R. and Friedrichs, K.O.: Supersonic Flow and Shock Waves, Interscience Publishers, 1948.
Coutsias, Evangelos A., Hagstrom, Thomas and Torres, David: An efficient spectral method for ordinary differential equations with rational function coefficients, Math. Comp., 65 (1996), 611-635.
Cui, D. and Yin, H.: Global supersonic conic shock wave for the steady supersonic flow past a cone: polytropic gas, Journal of Differential Equations, 246 (2) (2009) 641-669.
Qu, A. and Yuan, H.: Radon measure solutions for steady compressible Euler equations of hypersonic-limit conical flows and Newton's sine-squared law, Journal of Differential Equations, 269 (2020) 495-522.
Ruban, Anatoly I. and Gajjar, Jitesh S. B.: Fluid Dynamics, Part 1: Classical Fluid Dynamics, First edition, Oxford University Press, 2014.
http://arxiv.org/abs/2307.00662v1
20230702204401
Numerical Association Rule Mining: A Systematic Literature Review
[ "Minakshi Kaushik", "Rahul Sharma", "Iztok Fister Jr.", "Dirk Draheim" ]
cs.LG
[ "cs.LG", "cs.DB" ]
minakshi.kaushik@taltech.ee 0000-0002-6658-1712 rahul.sharma@taltech.ee Tallinn University of Technology, Akadeemia tee 15a, 12618 Tallinn, Estonia University of Maribor, Koroska cesta 46, SI-2000 Maribor, Slovenia iztok.fister1@um.si dirk.draheim@taltech.ee Tallinn University of Technology, Akadeemia tee 15a, 12618 Tallinn, Estonia Numerical association rule mining (NARM) is a widely used variant of the association rule mining (ARM) technique, and it has been extensively used in discovering patterns in numerical data. Initially, researchers and scientists incorporated numerical attributes in ARM using various discretization approaches; however, over time, a plethora of alternative methods have emerged in this field. Unfortunately, the increase of alternative methods has resulted in a significant knowledge gap in understanding the diverse techniques employed in NARM – this paper attempts to bridge this knowledge gap by conducting a comprehensive systematic literature review (SLR). We provide an in-depth study of diverse methods, algorithms, metrics, and datasets derived from 1,140 scholarly articles published from the inception of NARM in the year 1996 to 2022. Out of them, 68 articles are extensively reviewed in accordance with inclusion, exclusion, and quality criteria. To the best of our knowledge, this SLR is the first of its kind to provide an exhaustive analysis of the current literature and previous surveys on NARM. The paper discusses important research issues, the current status, and the future possibilities of NARM. On the basis of this SLR, the article also presents a novel discretization measure that contributes by providing a partitioning of numerical data that aligns well with human perception of partitions. CCS Concepts: Information systems → Association rules (500); Computing methodologies → Bio-inspired approaches (300); Information systems → Data mining (500); General and reference → Surveys and overviews (300); Computing methodologies → Machine learning (100). Numerical Association Rule Mining: A Systematic Literature Review Dirk Draheim August 1, 2023 ================================================================= § INTRODUCTION Decision-makers have used a wide variety of data mining techniques to extract valuable insights from data. Among these, association rule mining (ARM) is one of the most established techniques. ARM was first proposed by R. Agrawal <cit.>, and it is primarily used to identify interesting relationships between various data items, e.g., market basket analysis.
Later, it has also been used in medical diagnosis and bioinformatics. In the original settings of ARM, classical algorithms such as Apriori <cit.>, Eclat <cit.> and FP-growth <cit.> were limited to working with Boolean datasets only and did not support numerical data items such as height, weight, or age. To extend the scope of ARM to support numerical items, Srikant et al. <cit.> proposed a new technique, "quantitative association rule mining (QARM)". In this technique, numerical data items are converted to categorical data through a discretization process. In the literature, QARM is also referred to as "numerical association rule mining (NARM)" <cit.>. In the early stages of research on NARM, researchers and scientists used various discretization approaches. However, as time progressed, a wide range of alternative methods emerged, offering novel and innovative solutions in this field. Unfortunately, the increased number of alternative methods has created a substantial knowledge gap, making it difficult to fully comprehend the diverse range of techniques utilised in NARM. To address this knowledge gap, this paper conducts a comprehensive systematic literature review (SLR) by following one of the established research methodologies for SLR, as outlined by Kitchenham and Charters <cit.>. Before conducting this SLR, we thoroughly reviewed several surveys and reviews on NARM, which are listed in Table <ref>. However, it is important to note that these existing surveys and reviews have certain limitations. They often lack well-defined research questions, comprehensive search strategies, and rigorous research methodologies. Notably, to the best of our knowledge, no SLR of the existing literature on NARM has been conducted to date. The absence of a systematic review in the field has highlighted the need for this article and inspired us to fill this knowledge gap. Indeed, the identified limitations in previous surveys and reviews underline the importance of conducting an SLR on NARM. Through this SLR, we aim to address these limitations and fulfil the need for a more comprehensive understanding of the field. In order to provide a complete overview of the NARM literature, we conducted a systematic search across various academic databases and digital libraries to identify relevant scholarly articles. We mainly focused on articles published from the inception of NARM in 1996 up to 2022. In total, we identified 1,140 articles that matched our search queries. Next, as per the research methodology, we applied a rigorous process of inclusion, exclusion, and quality assessment criteria to ensure that the selected articles were relevant to the research domain and of high quality. After the screening process, we narrowed down the initial list to a final selection of 68 articles. By following this systematic approach, we aimed to gather a comprehensive and reliable set of articles that contributes to the thorough analysis and synthesis of existing knowledge on NARM. Based on the exhaustive analysis of these 68 articles, this SLR provides an in-depth examination of the diverse methods, algorithms, metrics, and datasets utilised in NARM. We thoroughly evaluate the strengths and weaknesses of these methods, algorithms, and metrics while also highlighting their outcomes and potential applications. By conducting a comprehensive analysis of the available literature, we aim to provide deep insights and understanding that can benefit researchers, practitioners, and stakeholders in the field.
Based on the findings of this SLR, the article also contributes a novel automated discretization measure that addresses the human perception of partitions and thereby aims to overcome the limitations of existing methods by providing a more meaningful and accurate partitioning of numerical data. The primary contributions of this paper are as follows.

* Well-defined research questions and a methodology for extracting data for a systematic investigation in the area of mining numerical association rules.
* Detailed knowledge about NARM methods and their algorithms.
* Identification of popular metrics to evaluate NARM algorithms.
* Identification of the major challenges involved in generating numerical association rules, along with probable future perspectives.
* A novel automated measure for discretizing numerical attributes that contributes to NARM by providing a partitioning of numerical data that aligns well with human perception of partitions.
* Filling the gaps and overcoming the limitations of previous surveys.

The article is organized as follows: Section <ref> presents an overview of the background and related work. In Section <ref>, we detail our research methodology and articulate the research questions (RQs). Section <ref> presents the findings of the review. In Section <ref>, we address the potential threats to the validity of this research article. Section <ref> delves into a comprehensive discussion of the SLR's findings. Finally, we draw our conclusions in Section <ref>.

§ BACKGROUND AND RELATED WORK
In this section, we provide an in-depth explanation of the background of ARM and NARM.

§.§ Association Rule Mining
In the original setting, association rules are extracted from transactional datasets composed of a set I = {i_1,…,i_n} of n binary attributes called items and a set D = {t_1,…,t_m}, t_k ⊆ I, of transactions called the database. An association rule is a pair of itemsets (X, Y), often denoted by an implication of the form X ⇒ Y, where X is the antecedent (or premise), Y is the consequent (or conclusion) and X ∩ Y = ∅. In ARM, the support and confidence measures are widely utilized and considered fundamental metrics. The support of an itemset X determines how frequently the itemset appears in the transactional database. The support of an association rule X ⇒ Y is defined as the fraction of transactions, among all records, that contain both itemsets X and Y, as shown in Eq. <ref>. The confidence of an association rule X ⇒ Y determines how frequently items in Y appear in transactions that contain X. It is calculated as the ratio of the number of transactions that contain both X and Y to the number of transactions that contain X, as shown in Eq. <ref>.

Support(X ⇒ Y) = |{ t ∈ D : X ∪ Y ⊆ t }| / |D|

Confidence(X ⇒ Y) = |{ t ∈ D : X ∪ Y ⊆ t }| / |{ t ∈ D : X ⊆ t }|

§.§ Numerical Association Rule Mining
NARM emerged to extract association rules from numerical data. Unlike classical ARM, numerical ARM allows attributes to be categorical (e.g., gender, education) or numeric (e.g., salary, age) rather than just Boolean. A numerical association rule is an implication of the form X ⇒ Y, in which both the antecedent and the consequent are sets of attribute conditions of the form A ∈ {v_1, v_2, …, v_n} if A is a categorical attribute, or A ∈ [v_1, v_2] if A is a numeric attribute. An example of a numerical association rule is given below.
Age ∈ [21, 35] ∧ Gender: [Male] ⇒ Salary ∈ [2000, 3000] (Support = 10%, Confidence = 80%)

This rule states that male employees aged between 21 and 35 with salaries between $2,000 and $3,000 form 10% of all employees, and that 80% of males aged between 21 and 35 earn between $2,000 and $3,000. Here, Age and Salary are numerical attributes and Gender is a categorical attribute. Beyond support and confidence, more than fifty measures of interestingness are available in the ARM literature <cit.>.

§.§ Related Work
In recent years, there have been a few surveys and studies in the literature that have focused on NARM approaches and their comparison. However, no SLR has been published to date. Our automated search identified three reviews <cit.> and a manual search found two surveys <cit.>. While these reviews contribute towards understanding the methods and algorithms for NARM, they have several limitations, as outlined in Table <ref>. Gosain et al. <cit.> presented a survey of association rules on quantitative data in 2013. The authors focused on different types of association rules but did not include NARM methods and algorithms. Adhikary and Roy <cit.> reviewed QARM techniques, with a focus on real-world applications, while their 2015 survey <cit.> provided a classification of QARM techniques but lacked detailed information. A systematic assessment of the three popular methods for NARM with thirty algorithms was conducted in <cit.>. This review focused only on NARM algorithms and did not follow the steps of a systematic review. In contrast, our study conducts an SLR under the guidelines of Kitchenham and Charters <cit.> and answers research questions on the state of the art of NARM. Moreover, it is worth noting that some authors have made notable contributions to NARM under alternative names. For example, Telkani et al. <cit.> conducted an extensive survey on evolutionary computation for ARM, wherein they thoroughly examined various approaches within the realm of ARM, including NARM, and provided insights into the classification of evolutionary algorithms in this context.

§ RESEARCH METHODOLOGY
In this work, we adopt a research methodology based on Kitchenham and Charters' guidelines <cit.>. The main goal of this SLR is to summarize the existing evidence in the literature regarding this topic and to identify gaps in the literature. According to Kitchenham's guidelines, the process includes three main phases: planning, conducting, and reporting the review. The planning phase involves identifying the need for the review and establishing a review protocol. The conducting phase involves following the review protocol, which includes selecting the primary studies, assessing their quality, and extracting relevant data. Finally, the reporting phase focuses on formatting and evaluating the report in accordance with the guidelines.

§.§ Planning the Review
The initial phase of this study aims to justify the need for an SLR and define the research questions. Based on the objective and motivation of this study, we formulated the research questions with the goals presented in Table <ref>.
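As a concrete complement to the support and confidence measures defined in Section 2, the following minimal Python sketch evaluates the example rule from Section 2.2 on a small, purely hypothetical employee table; the records, attribute names, and resulting percentages are illustrative assumptions rather than data from any reviewed study.

```python
# Minimal sketch: support and confidence of a numerical association rule.
# Rule: Age in [21, 35] AND Gender = Male  =>  Salary in [2000, 3000]
# The records below are purely illustrative.

records = [
    {"Age": 25, "Gender": "Male",   "Salary": 2500},
    {"Age": 30, "Gender": "Male",   "Salary": 2800},
    {"Age": 24, "Gender": "Male",   "Salary": 3500},
    {"Age": 40, "Gender": "Female", "Salary": 2600},
    {"Age": 29, "Gender": "Female", "Salary": 2200},
    {"Age": 33, "Gender": "Male",   "Salary": 2100},
    {"Age": 22, "Gender": "Male",   "Salary": 1900},
    {"Age": 45, "Gender": "Male",   "Salary": 4000},
    {"Age": 27, "Gender": "Male",   "Salary": 2900},
    {"Age": 35, "Gender": "Female", "Salary": 3100},
]

def antecedent(r):
    return 21 <= r["Age"] <= 35 and r["Gender"] == "Male"

def consequent(r):
    return 2000 <= r["Salary"] <= 3000

n_total = len(records)
n_antecedent = sum(antecedent(r) for r in records)
n_both = sum(antecedent(r) and consequent(r) for r in records)

support = n_both / n_total                                    # fraction of all records covered by the rule
confidence = n_both / n_antecedent if n_antecedent else 0.0   # fraction of matching antecedents

print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```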
The primary aim of this SLR is to address these research questions, which will help us to comprehensively understand the existing research and identify gaps in the literature related to NARM. §.§ Conducting the Review The review phase involves a series of sequential steps, beginning with the identification of relevant research and followed by the selection of studies, study quality assessment, and data extraction. These steps are conducted systematically to ensure the comprehensive coverage of relevant studies and the extraction of accurate and reliable information for analysis. §.§.§ Search Strategy Academic Databases To conduct the review phase, we conducted a thorough search of scientific publications from relevant journals and conferences, utilizing multiple reputable digital libraries, including the ACM Digital Library, Scopus, SpringerLink, IEEE Xplore, and ScienceDirect. Additionally, we performed a manual search on Google Scholar to minimize the chance of overlooking any significant articles. The search was conducted between April and June 2022, focusing on articles published in journals and conferences. We set the time frame for articles published from 1996 to 2022, as it was in 1996 when Srikant and Agrawal <cit.> first presented the problem statement concerning numerical attributes. Search Strings For the search process, we derived the search terms from the research questions and compiled a comprehensive list of synonyms, abbreviations, and alternative words. In this study, we have also mentioned that the problem of handling numerical attributes was initially addressed as “quantitative association rule” by Srikant and Agrawal <cit.>. Over time, this term evolved into “numerical association rules.” Therefore, to ensure inclusivity, our search terms included variations such as "quantitative association rule mining," OR "numerical association rule mining," OR "quantitative association rules," OR "numerical association rules," OR "quantitative ARM," OR "numerical ARM," OR "QARM," OR "NARM." We targeted these terms in the abstracts, titles, and keywords of articles within the following electronic sources. * ACM Digital Library[<http://dl.acm.org>] * IEEE eXplore[<http://ieeexplore.ieee.org>] * Scopus[<http://www.scopus.com>] * SpringerLink[<http://www.link.springer.com/>] * ScienceDirect[<https://www.sciencedirect.com/>] * Google Scholar[<https://scholar.google.com/>] Search Process Our search was specifically conducted for articles written in English, limited to the period between 1996 and 2022, within the subject area of Computer Science, focusing on the final publication stage. The search query and terms used in Scopus are outlined in Table <ref>. Through a meticulous search process, we successfully identified a total of 1,628 articles. Following the elimination of 488 redundant articles, we narrowed down the selection to 1,140 articles. Table <ref> provides a breakdown of the number of articles obtained from each respective database. §.§.§ Selection Based on Inclusion and Exclusion Criteria To ensure the relevance of the articles, we conducted an initial screening process by carefully reviewing the abstracts and conclusions. We applied the predetermined Inclusion and Exclusion Criteria, which are outlined in Table <ref>. These criteria are widely accepted and primarily focus on aligning with the scope of the study. Non-peer-reviewed articles, such as theses and abstracts, were excluded from our analysis. 
Additionally, we also excluded works that combined results from both journals and conferences, such as monographs and books. Following the application of these inclusion and exclusion criteria, we were left with a set of 96 articles that met our selection criteria. Next, to ensure a comprehensive review, we conducted a thorough examination of the references cited in the selected primary studies. This step aimed to identify any significant publications that might have been missed during the initial search. As a result, we identified 14 additional papers that fulfilled our inclusion criteria. These studies were subsequently incorporated into our list of primary studies, expanding the total number of articles to 110.

§.§.§ Selection Based on Quality Assessment
The objective of the quality assessment phase is to ensure the inclusion of unbiased and relevant studies in the review. To accomplish this, we established a set of criteria to evaluate the quality of the papers, refine our search results, and assess the relevance and rigour of the included papers. Following the initial selection based on the predefined inclusion and exclusion criteria, we conducted a thorough reading of each article in its entirety. During this phase, we utilized a quality assessment checklist comprising five criteria, as outlined in Table <ref>, to refine our search results. Each criterion was evaluated using “Yes,” “No,” or “Partially” responses, which corresponded to scores of 1, 0, or 0.5, respectively. Articles with scores of 2.5 or higher were selected as the final primary studies. Through this rigorous quality assessment process, we determined a total of 68 articles that met our selection criteria and were deemed the final primary studies. The list of final articles is available in the GitHub repository[<https://github.com/minakshikaushik/List-of-Final-selected-articles.git>].

§.§.§ Data Extraction and Synthesis
In the last phase, we extracted pertinent information from the selected articles that successfully passed the quality assessment. This information was utilized to generate a comprehensive summary of our findings. Each chosen article was downloaded and thoroughly examined. Table <ref> provides an overview of the extracted data from each publication, highlighting its relevance to the respective research questions. For a more in-depth analysis of the collected data and the synthesis of our findings, we encourage readers to refer to Sections <ref> and <ref>. These sections provide a detailed presentation of the information gathered from the final set of articles, offering valuable insights into the research questions and facilitating a comprehensive understanding of our review's outcomes.

Table: List of final selected studies.
ID | Title | Publication type
SS1 | Mining numeric association rules with genetic algorithms | Conference
SS2 | Discovering Numeric Association Rules via Evolutionary Algorithm | Conference
SS3 | An efficient genetic algorithm for automated mining of both positive and negative quantitative association rules | Journal
SS4 | QuantMiner: A Genetic Algorithm for Mining Quantitative Association Rules | Conference
SS5 | Genetic algorithm-based strategy for identifying association rules without specifying actual minimum support | Journal
SS6 | Mining quantitative association rules based on evolutionary computation and its application to atmospheric pollution | Journal
SS7 | An evolutionary algorithm to discover quantitative association rules in multidimensional time series | Journal
SS8 | An evolutionary algorithm to discover quantitative association rules from huge databases without the need for an a priori discretization | Journal
SS9 | Mining numerical association rules via multi-objective genetic algorithms | Journal
SS10 | QAR-CIP-NSGA-II: A new multi-objective evolutionary algorithm to mine quantitative association rules | Journal
SS11 | A New Multiobjective Evolutionary Algorithm for Mining a Reduced Set of Interesting Positive and Negative Quantitative Association Rules | Conference
SS12 | Improving a multi-objective evolutionary algorithm to discover quantitative association rules | Journal
SS13 | Nicgar: A niching genetic algorithm to mine a diverse set of interesting quantitative association rules | Journal
SS14 | Differential Evolution for Association Rule Mining Using Categorical and Numerical Attributes | Conference
SS15 | A genetic algorithm-based framework for mining quantitative association rules without specifying minimum support and minimum confidence | Journal
SS16 | Differential evolution and sine cosine algorithm based novel hybrid multi-objective approaches for numerical association rule mining | Journal
SS17 | MODENAR: Multi-objective differential evolution algorithm for mining numeric association rules | Journal
SS18 | Rough particle swarm optimization and its applications in data mining | Journal
SS19 | Chaotically encoded particle swarm optimization algorithm and its applications | Journal
SS20 | Numerical association rule mining from a defined schema using the VMO algorithm | Journal
SS21 | Association Rule Mining for Continuous Attributes using Genetic Network Programming | Journal
SS22 | Reducing gaps in quantitative association rules: A genetic programming free-parameter algorithm | Journal
SS23 | Multi-objective Numeric Association Rules Mining via Ant Colony Optimization for Continuous Domains without Specifying Minimum Support and Minimum Confidence | Journal
SS24 | Multi-objective PSO algorithm for mining numerical association rules without a priori discretization | Journal
SS25 | MOCANAR: A MULTI-OBJECTIVE CUCKOO SEARCH ALGORITHM FOR NUMERIC ASSOCIATION RULE DISCOVER | Conference
SS26 | Rare-PEARs: A new multi-objective evolutionary algorithm to mine rare and non-redundant quantitative association rules | Journal
SS27 | Wolf search algorithm for numeric association rule mining | Conference
SS28 | Automatic Mining of Quantitative Association Rules with Gravitational Search Algorithm | Journal
SS29 | Multi-objective bat algorithm for mining numerical association rules | Journal
SS30 | PPQAR: Parallel PSO for Quantitative Association Rule Mining | Journal
SS31 | Multi-objective particle swarm optimization algorithm using adaptive archive grid for numerical association rule mining | Journal
SS32 | Improved optimization of numerical association rule mining using hybrid particle swarm optimization and cauchy distribution | Journal
SS33 | A novel hybrid GA–PSO framework for mining quantitative association rules | Journal
SS34 | DCSA-QAR: A Discret Crow Search Algorithm for Mining Quantitative Association Rules |
SS35 | Chaos numbers based a new representation scheme for evolutionary computation: Applications in evolutionary association rule mining | Journal
SS36 | Mining Quantitative Association Rules in Large Relational Tables | Conference
SS37 | Clustering Association Rules | Conference
SS38 | An effective algorithm for mining interesting quantitative association rules | Conference
SS39 | Association Rules over Interval Data | Conference
SS40 | Discovery of association rules over ordinal data: A new and faster algorithm and its application to basket analysis | Conference
SS41 | Interestingness-Based Interval merger for Numeric Association Rules | Conference
SS42 | Mining Fuzzy Association Rules | Conference
SS43 | Mining fuzzy association rules in databases | Journal
SS44 | An Adaptive Method of Numerical Attribute Merging for Quantitative Association Rule Mining | Conference
SS45 | Mining Optimized Association Rules for Numeric Attributes | Journal
SS46 | Mining fuzzy quantitative association rules | Conference
SS47 | Mining association rules from quantitative data | Journal
SS48 | Relative Unsupervised Discretization for Association Rule Mining | Conference
SS49 | A fuzzy approach for mining quantitative association rules | Journal
SS50 | Mining Optimized Gain Rules for Numeric Attributes | Journal
SS51 | An efficient algorithm for finding dense regions for mining quantitative association rules | Journal
SS52 | An Effective Algorithm for Mining Quantitative Association Rules Based on High Dimension Cluster | Conference
SS53 | A method for mining association rules in quantitative and fuzzy data | Conference
SS54 | An effective algorithm for mining quantitative associations based on subspace clustering | Conference
SS55 | Optimized fuzzy association rule mining for quantitative data | Conference
SS56 | An Effective Method for Mining Quantitative Association Rules with Clustering Partition in Satellite Telemetry Data | Conference
SS57 | Fuzzy Inference Algorithm Based on Quantitative Association Rules | Journal
SS58 | Combining Graph Clustering and Quantitative Association Rules for Knowledge Discovery in Geochemical Data Problem | Journal
SS59 | Machine Learning Based Quantitative Association Rule Mining Method for Evaluating Cellular Network Performance | Journal
SS60 | A Statistical Theory for Quantitative Association Rules | Journal
SS61 | Bipartition techniques for quantitative attributes in association rule mining | Conference
SS62 | Cognitive Computing and Rule Extraction in Generalized One-sided Formal Contexts | Journal
SS63 | An information-theoretic approach to quantitative association rule mining | Journal
SS64 | Discovering Associations with Numeric Variables | Conference
SS65 | Mining generalized fuzzy quantitative association rules with fuzzy generalization hierarchies | Conference
SS66 | Discovering and Managing Quantitative Association Rules | Conference
SS67 | Fuzzy clustering-based discretization for gene expression classification | Journal

§ REPORTING THE REVIEW
The reporting phase is crucial as it involves the final presentation and evaluation of the findings obtained from the systematic review. Effectively communicating the results is essential to highlight the contribution of the review and provide valuable insights to readers.
These results are derived from the studies identified during the review phase and are aligned with the pre-defined research questions. Through clear and concise reporting, the systematic review aims to enhance understanding and facilitate informed decision-making. §.§ RQ1.Which methods exist for solving NARM problems? The selected studies, which are reviewed to examine the existing methods in NARM, are summarized in the subsequent sub-sections. Table <ref> provides an overview of the included papers pertaining to different NARM methods. Following a thorough analysis of these studies, it was determined that they could be broadly categorized into four main methods. The following subsections provide brief descriptions of these methods. §.§.§ The Discretization Method Classical ARM faces a significant limitation when dealing with continuous variable columns as they cannot be processed directly and must be converted into binary form first. To address this issue, researchers have turned to the discretization method <cit.>. Discretization involves dividing a column of numeric values into meaningful target groups, which facilitates the identification and generation of association rules. This approach helps to understand numeric value columns easily, but the groups are only useful if the variables in the same group do not have any objective differences. Additionally, discretization minimizes the impact of trivial variations between values. The discretization method for mining numerical association rules can be categorized into four approaches: partitioning, clustering, fuzzifying and hybrid. In this article, we have selected 28 relevant studies that focus on the discretization method. Partitioning Approach Srikant <cit.> presented a solution for mining association rules from quantitative data sets. The approach involved partitioning the numerical attributes into intervals and subsequently mapping these intervals into binary attributes. To address the information loss resulting from partitioning, the authors introduced the concept of the partial completeness measure. By partitioning the numerical attributes and mapping them into binary attributes, Srikant's approach allowed for the application of traditional ARM techniques to quantitative data. This work laid the foundation for handling numerical attributes in ARM and has since influenced further developments in the field. Clustering Approach The clustering approach is utilized to divide a numerical column into distinct groups based on similarity among values. Various clustering techniques, including merging-based, density-based, and grid-based clustering, can be employed to achieve this goal. From the clustering approach, we identified nine relevant articles that explore this methodology. In the merging and splitting-based concept, intervals are merged initially and then subsequently split based on specific criteria. Wang and Han proposed the notion of merging adjacent intervals in their work <cit.>. Li et al. <cit.> developed a method that identifies intervals of numeric attributes and merges adjacent intervals exhibiting similar characteristics based on predefined criteria. These studies contribute to the understanding and advancement of the merging and splitting-based approach within the context of NARM. The density-based clustering aims to identify different dense regions within the dataset and map these regions to numeric association rules. 
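As a rough, generic illustration of this density-based idea (not a reconstruction of any specific algorithm reviewed below), the following sketch bins one numeric attribute, keeps bins whose relative density exceeds a threshold, and merges adjacent dense bins into candidate intervals; the sample values, bin count, and threshold are arbitrary assumptions.

```python
# Sketch of density-based interval discovery for one numeric attribute.
# Dense, adjacent bins are merged into candidate intervals that could
# later appear in the antecedent or consequent of a numerical rule.

def dense_intervals(values, n_bins=10, min_density=0.15):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for v in values:
        idx = min(int((v - lo) / width), n_bins - 1)   # clamp the maximum value
        counts[idx] += 1

    dense = [c / len(values) >= min_density for c in counts]

    # Merge runs of adjacent dense bins into [lower, upper] intervals.
    intervals, start = [], None
    for i, flag in enumerate(dense + [False]):          # sentinel closes a trailing run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            intervals.append((lo + start * width, lo + i * width))
            start = None
    return intervals

ages = [21, 22, 23, 23, 24, 25, 25, 26, 40, 41, 42, 42, 43, 60]
print(dense_intervals(ages, n_bins=8, min_density=0.2))
```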
Algorithms such as DRMiner <cit.>, DBSMiner <cit.>, and MQAR <cit.> are examples of techniques proposed within this category. Further details regarding these algorithms will be provided in response to the subsequent research question. On the other hand, grid-based clustering utilizes a bitmap grid to handle data clustering. It identifies clusters within the bitmap grid, which subsequently yield association rules. This method offers an alternative approach for extracting meaningful associations from numerical attributes. Fuzzy Approach The fuzzy approach is employed to tackle the issue of sharp boundaries in ARM by representing numerical values as fuzzy sets. Fuzzy sets allow for the representation of intervals with non-sharp boundaries, where an element can possess a membership value indicating its degree of belonging to a set. Hong et al. <cit.> applied the fuzzy concept in conjunction with the apriori algorithm to discover fuzzy association rules from a quantitative dataset. Their work demonstrated the effectiveness of combining fuzzy sets and ARM techniques for extracting valuable insights from numerical data. Hybrid Approach The hybrid approach for solving NARM problems is the combination of two or more methods such as clustering, partitioning, and fuzzy approaches. This method is a more flexible approach that can enhance the efficiency and accuracy of ARM. For instance, <cit.> combined the fuzzy approach with the partitioning method to develop an efficient algorithm for mining fuzzy association rules. On the other hand, <cit.> utilized the fuzzy approach with clustering to enhance the accuracy of ARM. The hybrid approach in NARM offers a promising direction for researchers to explore, as it allows for the utilization of complementary techniques to address the complexities of mining association rules from numerical data. §.§.§ The Optimization Methods In the context of NARM, the optimization method has gained significant attention, and we identified 34 papers out of the 68 studies reviewed that focused on optimization methods. These methods utilize heuristic algorithms inspired by various natural phenomena, such as animal movements and biological behavior. Generally, optimization methods fall into two categories: bio-inspired and physics-based. Depending on the optimization goals, the optimization methods can be further classified into single-objective and multi-objective approaches. Bio-inspired optimization methods consist of approaches based on Swarm Intelligence (SI), Evolutionary algorithms, and Hybrid methods. These methods draw inspiration from the collective behavior of organisms in nature. For example, some studies have explored algorithms inspired by the movements of wolves <cit.>, insects <cit.>, and mining behavior in biological systems <cit.>. The physics-based optimization methods apply principles from physics to solve optimization problems. These approaches offer researchers a diverse range of techniques to explore and apply in NARM, allowing for the discovery of efficient and effective association rules from numerical data. Evolution-Based Methods The evolutionary method in NARM is rooted in Darwin's theory of natural selection, which highlights the adaptive nature of living organisms in response to changing environments. This approach employs biological operators, including crossover, mutation, and selection, to mimic the evolutionary process in optimization algorithms <cit.>. 
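To make these operators concrete for interval-encoded candidate rules, the following generic sketch represents a candidate as a set of per-attribute interval bounds, applies uniform crossover, and mutates bounds within assumed attribute domains; the encoding and parameter values are illustrative assumptions, not the scheme of any particular surveyed algorithm.

```python
import random

# A candidate rule is encoded as {attribute: (lower, upper)} interval bounds.
ATTRIBUTE_RANGES = {"Age": (18, 70), "Salary": (1000, 9000)}   # assumed domains

def crossover(parent_a, parent_b):
    """Uniform crossover: each attribute's interval comes from one parent."""
    return {attr: random.choice((parent_a[attr], parent_b[attr])) for attr in parent_a}

def mutate(individual, rate=0.3):
    """Shift one interval by a small random amount, keeping it inside its domain."""
    mutated = dict(individual)
    for attr, (low, high) in mutated.items():
        if random.random() < rate:
            dom_lo, dom_hi = ATTRIBUTE_RANGES[attr]
            step = 0.05 * (dom_hi - dom_lo) * random.uniform(-1, 1)
            low, high = low + step, high + step          # shift the whole interval
            low = max(dom_lo, min(low, dom_hi))
            high = max(low, min(high, dom_hi))           # keep lower <= upper
            mutated[attr] = (low, high)
    return mutated

p1 = {"Age": (21, 35), "Salary": (2000, 3000)}
p2 = {"Age": (30, 50), "Salary": (4000, 6000)}
child = mutate(crossover(p1, p2))
print(child)
```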
By applying these principles, evolutionary methods aim to enhance the effectiveness and efficiency of NARM algorithms, allowing for the discovery of valuable association rules from numerical data. Under the evolution-based method, the genetic algorithm (GA) and differential evolution (DE) provide detailed solutions for the NARM problem. The optimization method aims to discover association rules without the need for the prior discretization of numerical attributes. GA, a meta-heuristic inspired by natural selection and genetic structure, evolves a population of individual solutions over time <cit.>. It proceeds in three main steps: selection of parent individuals, crossover to combine parents for the next generation, and mutation to apply random changes to parents and form children. In 2001, the concept of genetic algorithms was successfully applied to identify numerical association rules from numerical attributes <cit.>. Out of the selected studies, 17 refers to the use of genetic algorithms. Initially, NARM algorithms focused solely on single-objective problems; later, multi-objective algorithms also came into the scenario <cit.>. Over the years, the genetic algorithm has been used with some advancement by integrating various supporting techniques, such as the binary-coded CHC algorithm <cit.>, non-dominated sorting genetic algorithm <cit.>, and niching genetic algorithm <cit.>, as well as other multi-objective genetic algorithms. Genetic programming <cit.>, which utilizes a tree structure for the genome, is another aspect of the genetic algorithm. Grammar-guided genetic programming <cit.> also emerged with NARM in 2004. In 1997, Storn and Price <cit.> introduced a global optimization meta-heuristic approach that effectively minimized non-differentiable, non-linear, and multi-modal cost functions. This approach utilized the same operator as genetic algorithms, which included crossover, mutation, and selection. To minimize the function, differential evolution (DE) employed a few control variables and parallelization techniques, which helped to decrease computing costs and quickly converge on the global minimum. Our research identified four relevant studies that used DE for NARM. One such study, proposed in 2008 by Alatas and Akin, utilized a multi-objective differential evolution algorithm <cit.>. Another study was conducted in 2018 and 2021 by I. Fister Jr. <cit.>, while Altay and Alatas presented a hybrid DE-based method with a sine cosine algorithm and chaos number-based encoding, respectively <cit.>. Swarm Intelligence-Based Swarm intelligence (SI) is a popular optimization technique inspired by the collective behavior of self-organized groups in nature, as described by Bonabeau et al. in 1999 <cit.>. SI algorithms emulate the behavior of swarms found in birds, fish, honey bees, and ant colonies. These algorithms consist of individuals that migrate through the search space, simulating the progression of the swarm. Various SI-based algorithms have been developed, including Particle Swarm Optimization (PSO), Bat Algorithm (BAT), Ant Colony Optimization (ACO), Cat Swarm Optimization (CSO), and others. In the context of solving NARM problems, several SI algorithms have been applied. Notable examples include PSO <cit.>, BAT <cit.>, Wolf Search Algorithm (WSA) <cit.>, Crow Search Algorithm (CSA) <cit.>, and Cuckoo Search Algorithm (CS)<cit.>. These SI-based algorithms have shown promise in optimizing NARM and extracting meaningful association rules. 
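Across these evolutionary and swarm-based approaches, a recurring ingredient is a fitness function that scores an interval-encoded candidate rule using measures such as support and confidence, often combined with a term that penalises overly wide, uninformative intervals. The sketch below shows one plausible weighted form; the weights, toy records, and attribute domains are assumptions for illustration only.

```python
# Fitness of an interval-encoded candidate rule, as typically used by
# evolutionary and swarm-based NARM methods: a weighted mix of support,
# confidence, and an amplitude term penalising overly wide intervals.

def matches(record, conditions):
    return all(lo <= record[attr] <= hi for attr, (lo, hi) in conditions.items())

def fitness(antecedent, consequent, records, domains, w=(0.4, 0.4, 0.2)):
    n = len(records)
    n_ante = sum(matches(r, antecedent) for r in records)
    n_both = sum(matches(r, antecedent) and matches(r, consequent) for r in records)
    support = n_both / n
    confidence = n_both / n_ante if n_ante else 0.0
    # Amplitude term: 1 means point-like intervals, 0 means the full domain.
    widths = [(hi - lo) / (domains[a][1] - domains[a][0])
              for a, (lo, hi) in {**antecedent, **consequent}.items()]
    amplitude = 1.0 - sum(widths) / len(widths)
    w_sup, w_conf, w_amp = w
    return w_sup * support + w_conf * confidence + w_amp * amplitude

records = [{"Age": 25, "Salary": 2500}, {"Age": 30, "Salary": 2800},
           {"Age": 44, "Salary": 5200}, {"Age": 52, "Salary": 6100}]
domains = {"Age": (18, 70), "Salary": (1000, 9000)}
print(fitness({"Age": (21, 35)}, {"Salary": (2000, 3000)}, records, domains))
```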
Particle swarm optimization (PSO) is a widely used optimization technique for non-linear continuous functions inspired by the movement of bird flocks or fish schools as described in Kennedy and Eberhart <cit.>. PSO simulates the collective behaviour of these groups, where N particles move in a D-dimensional search space, adjusting their position iteratively by using their own best position pbest and the best position of the entire swarm gbest. The PSO algorithm finds the optimum solution by calculating the velocity and position of each particle. In the context of mining association rules with numeric attributes, Alatas and Akin introduced the application of PSO in 2008 <cit.>. They modified the PSO algorithm to search for numeric attribute intervals and discover numeric association rules. Seven studies have since focused on adapting PSO for NARM including the hybrid approach. These studies explore the potential of PSO to effectively mine association rules with numeric attributes and provide valuable insights into its performance and limitations. Ant colony optimization (ACO) is another optimization technique based on the foraging behaviour of various ant species, as described in Dorigo et al.<cit.>. In ACO, a group of artificial ants collaborates to find solutions to an optimization problem and communicate information about the quality of these solutions using a communication mechanism similar to real ants. ACO is designed to address discrete optimization problems by selecting a solution using a discrete probability distribution. In the context of multi-objective NARM, Moslehi et al. introduced an ACO variant called ACO_R in 2011 <cit.>. ACO_R utilizes a Gaussian probability distribution function to handle continuous values encountered in NARM. It maintains a solution archive of size k, initially populated with k random solutions ranked by their quality. Each ant constructs its solution by probabilistically selecting a solution from the archive, allowing for the exploration of different solution possibilities. The utilization of ACO in NARM, particularly the ACO_R variant, demonstrates its potential to address the challenges posed by continuous attributes and provide effective solutions for multi-objective NARM problems. The Cuckoo Search algorithm (CS) is an optimization algorithm introduced by Yang and Deb in 2009, inspired by the brooding parasitic behavior of cuckoo species <cit.>. Cuckoos lay their eggs in the nests of other bird species, mimicking the color and pattern of the host birds' eggs. Some host birds may recognize the stranger's eggs and remove them from the nest. The cuckoo search algorithm mimics this behavior by generating new solutions (cuckoo eggs) and replacing less promising solutions in the nests (solution space) with the new solutions. The algorithm operates based on three main rules: A cuckoo bird lays only one egg at a time in a randomly chosen nest (introduces a new solution to the search space). The nests with high-quality eggs are more likely to be carried over to the next generation (the better solutions have a higher chance of survival). The probability of a host bird discovering cuckoo eggs in its nest is either 0 or 1 (either the host bird finds and removes the cuckoo egg or it remains undetected). The goal of the cuckoo search algorithm is to find new and potentially better solutions to replace the existing solutions in the nests, leading to the improvement of the overall solution quality. 
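As a rough sketch of how such a nest-update scheme can be adapted to numeric interval bounds (a generic illustration, not the MOCANAR procedure discussed next), each nest below holds a vector of bounds, a Gaussian random walk stands in for the Levy flight, and a fraction of the worst nests is abandoned and regenerated.

```python
import random

# Cuckoo-search-style step over interval-bound vectors [lo1, hi1, lo2, hi2, ...].
# `score` is any rule-quality function (e.g., support/confidence based).

def perturb(nest, scale=0.1):
    """Random-walk proxy for a Levy flight around an existing nest."""
    return [b + scale * random.gauss(0, 1) for b in nest]

def cuckoo_step(nests, score, pa=0.25):
    nests = sorted(nests, key=score, reverse=True)       # best nests first
    # Lay one new "egg" near a random nest; keep it if it beats a random nest.
    new = perturb(random.choice(nests))
    j = random.randrange(len(nests))
    if score(new) > score(nests[j]):
        nests[j] = new
    # Abandon the worst fraction pa of nests and replace them with perturbed survivors.
    n_drop = int(pa * len(nests))
    for k in range(len(nests) - n_drop, len(nests)):
        nests[k] = perturb(random.choice(nests[: len(nests) - n_drop]), scale=0.5)
    return nests

# Toy objective: prefer bounds close to a target interval [21, 35].
target = [21.0, 35.0]
score = lambda nest: -sum(abs(b - t) for b, t in zip(nest, target))
nests = [[random.uniform(18, 70), random.uniform(18, 70)] for _ in range(8)]
for _ in range(50):
    nests = cuckoo_step(nests, score)
print(max(nests, key=score))
```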
In the context of NARM, a multi-objective cuckoo search algorithm called MOCANAR was proposed by Kahvazadeh et al. in 2015 <cit.>. MOCANAR applies a Pareto-based approach to solve the multi-objective NARM problem, aiming to discover association rules that optimize multiple conflicting objectives simultaneously. By employing the cuckoo search algorithm as the underlying optimization technique, MOCANAR demonstrates its effectiveness in addressing the challenges of multi-objective NARM. In 2012, Tang et al. proposed a heuristic optimization algorithm called the Wolf Search Algorithm (WSA) that imitates how wolves hunt for food and survive in the wild by avoiding predators <cit.>. Unlike other bio-inspired meta-heuristics, WSA enables both individual local searching and autonomous flocking movement capabilities as wolves hunt independently in groups. WSA follows three basic rules based on wolf hunting behavior. The first rule involves a fixed visual area of each wolf with a radius v, which is calculated using Minkowski distance. The second rule pertains to the current position of the wolf, represented by the objective function's fitness, and the wolf always tries to choose the better position. The third rule concerns escaping from enemies. Agbehadji suggested WSA to develop an algorithm for searching for intervals of numeric attributes and association rules <cit.>. In 2010, Yang introduced the BAT algorithm (BA) as a solution to continuous constrained optimization problems inspired by the echolocation behavior of microbats <cit.>. Microbats use echolocation to sense distance, discover prey, avoid obstacles, and find roosting nooks in the dark. The BA algorithm is based on the velocity of a bat at a particular position, with a fixed frequency and varying wavelength and loudness. The bat adjusts its frequency and loudness to locate a new food source while changing its position in space. Heraguemi et al. <cit.> proposed a multi-objective version of the Bat algorithm for numerical attributes. Previously, the BA was also used for ARM to deal with categorical attributes. The Crow Search Algorithm (CSA) is a recently developed meta-heuristic optimization technique inspired by the intelligent behaviour of crows <cit.>. Crows are known for their ability to store and hide food for future use while also keeping an eye on each other to steal food. The CSA is based on four principles of crow behaviour: living in flocks, memorizing the position of hiding places, following other crows to steal food, and protecting their caches from theft. In the CSA, a crow flock moves in a d-dimensional search space, with each crow having its own position and memory of its hiding place. When a crow follows another crow, it may either discover the hiding place and memorize it or be tricked by the followed crow. The CSA has been successfully applied to various optimization problems, such as image segmentation and feature selection. Recently, Makhlouf et al. (2021) <cit.> proposed a discrete version of CSA for NARM. Hybrid Approach The hybrid approach in optimization combines multiple techniques such as evolution, SI, or other approaches to leverage their respective advantages and tackle complex tasks effectively. In the context of NARM, researchers have explored the hybridization of different algorithms to enhance the performance and efficiency of association rule discovery. One study by Moslehi et al. <cit.> employed a hybrid approach that combined the GA and PSO. 
The GA facilitated the search for the best solution, while the PSO helped avoid getting trapped in local optima by exploring a larger search space. By combining the strengths of both approaches, the hybrid algorithm demonstrated the ability to find high-quality solutions to complex NARM problems within a relatively short time. Another study by Altay and Alatas <cit.> proposed a hybrid approach that combined the DE algorithm with the sine cosine algorithm. This hybridization aimed to leverage the exploration and exploitation capabilities of both algorithms, resulting in improved performance for NARM. The DE algorithm provided efficient search and optimization, while the sine cosine algorithm introduced chaos-based techniques to enhance the exploration process.

Physics-Based
Physics-based meta-heuristics have emerged as a powerful approach to solving optimization problems. One such algorithm, the gravitational search algorithm (GSA) <cit.>, is based on Newton's law of gravity, where particles attract each other with a gravitational force. The following formula defines this force:

F = G M_1 M_2 / R^2

where F is the gravitational force, G is the gravitational constant, M_1 and M_2 are the masses of the two particles, and R is the distance between them. According to Newton's second law, when a force is applied to a particle, its acceleration a depends on the force F and its mass M:

a = F / M

In the GSA, agents are considered objects with masses that determine their performance. Heavier masses correspond to better solutions and attract lighter masses, guiding the search towards an optimal solution. Each mass has a position, an inertial mass, and active and passive gravitational masses. The position of a mass represents a problem solution, and its gravitational and inertial masses are calculated using a fitness function. While GSA has been used in various optimization problems, it has only been applied to NARM in one study, where Can and Alatas utilized it for finding intervals of numeric attributes automatically without any prior processing <cit.>.

§.§.§ The Statistical Method
Statistics is a traditional approach for developing theories and testing hypotheses using statistical tests such as Pearson correlation, regression, ANOVA, the t-test, and the chi-square test, among others. Statistical inference involves inferring population properties from a sample to generate estimates and test hypotheses. Some studies have used statistical concepts such as the mean, median, and standard deviation in mining association rules. We identified three studies in this direction, which suggested distribution-based interestingness measures. One such study is Kang et al. (2009) <cit.>, which used bipartition techniques such as mean-based bipartition, median-based bipartition and standard deviation minimization for quantitative attributes in ARM.

§.§.§ Miscellaneous Other Methods
In addition to the established techniques discussed earlier, other alternative approaches have been proposed to tackle the challenge of NARM. These approaches offer unique perspectives and methodologies to address the problem. One such approach is the utilization of mutual information, as presented by Yiping et al. in 2008 <cit.>. Mutual information is a concept from information theory that measures the dependency between two variables. In the context of NARM, mutual information is employed to generate quantitative association rules (QARs), capturing the relationships and dependencies between numerical attributes.
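As a small illustration of this idea (a generic sketch that assumes the numeric attributes are first discretized into bins, not a reconstruction of the cited method), the code below estimates the empirical mutual information between two discretized attributes; attribute pairs with high mutual information are natural candidates for forming quantitative rules. The sample values are assumptions.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def discretize(values, n_bins=3):
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

# Illustrative data: salary tends to grow with age, so MI should be clearly > 0.
ages     = [22, 25, 27, 31, 36, 41, 45, 52, 58, 63]
salaries = [1900, 2100, 2400, 2600, 3200, 3900, 4300, 5200, 6100, 6800]
print(mutual_information(discretize(ages), discretize(salaries)))
```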
Another approach is the use of Variable Mesh Optimization (VMO), proposed by Jaramillo et al. <cit.>. VMO is a population-based metaheuristic algorithm that represents solutions as nodes distributed in a mesh-like structure. Each node in the mesh represents a potential solution to the optimization problem. By leveraging the principles of VMO, the algorithm explores the solution space in a distributed and adaptive manner, facilitating the discovery of association rules. Furthermore, in 2021, Hu et al. <cit.> introduced a cognitive computing-based approach for NARM. Cognitive computing refers to the simulation of human thought processes by computer models. By leveraging cognitive computing techniques, the proposed approach aims to mimic the human thought process during critical situations, allowing for a more comprehensive and nuanced analysis of numerical data for ARM. These alternative approaches demonstrate the diverse range of methodologies and concepts that researchers have explored to tackle the NARM problem. By leveraging mutual information, variable mesh optimization, and cognitive computing, these approaches offer unique perspectives and potential benefits for discovering association rules from numerical data. §.§ RQ2 What are the several algorithms available for each of the existing NARM methods? In response to RQ1, we have provided a comprehensive explanation of the four main methods utilized in NARM in subsection <ref>. This section further delves into a more detailed exploration of the algorithms associated with each of these methods. §.§.§ The Discretization Method Partitioning Based Algorithms * Qunatitative Association Rule Mining (QARM): In 1996, Srikant and Agrawal proposed an algorithm <cit.> to address the use of numeric attributes in ARM, which was traditionally limited to binary attributes. One key issue was determining whether and how to partition a quantitative attribute while minimizing information loss by setting minimum support and confidence thresholds. To overcome this, the algorithm introduces a partial completeness measure. The algorithm converts categorical attributes to integers and partitions numerical attributes into intervals using an equi-depth discretization algorithm. Frequent itemsets are then generated by setting minimum support for each attribute and used to generate association rules. To ensure interesting and non-redundant rules, the algorithm employs an interesting measure called “greater-than-expected-values.” However, setting the user-supplied threshold too high can result in missed rules, while setting it too low can generate irrelevant rules. * Automatic Pattern Analysis and Classification System 2 (APACS2): To address the threshold issue, a novel algorithm named APACS2 was presented by Chan et al. <cit.>. This algorithm employed equal-width discretization to discover intervals of quantitative attributes without the need for user-defined thresholds. The quantitative attribute values were mapped to these intervals to obtain a new set of attributes. Each interval was described by the lower and upper bounds as a_1 = [l_1, u_1]. The APACS2 algorithm used adjusted difference analysis to identify interesting associations between items, which enabled it to generate both positive and negative association rules. * Q2: Buchter and Wirth <cit.> proposed the Q2 algorithm to work with multi-dimensional association rules over ordinal data. 
Q2 aimed to reduce the cost of counting a large number of buckets by only counting the buckets of successful candidates. First, apriori is used to identify all frequent boolean itemsets. Then, only the items in these sets are discretized based on the user's specifications. Q2-gen technique is used to generate a prefix tree that includes only the bucket combinations that need to be counted for the discretized items. The prefix tree is then used to count these bucket combinations in a single pass through the data. Finally, the prefix tree is used to produce all R-interesting rules. Unlike the hash tree used in QARM, Q2 uses a prefix tree to store quantitative itemsets. * Fukuda et al. Work: Fukuda et al. presented a novel algorithm <cit.> that computes two optimized ranges for numeric attributes. To achieve this, the algorithm uses randomized bucketing as a preprocessing step to compute the ranges for sorted data. The focus of the algorithm is on generating optimized rules of the format (A [v_1, v_1]) ∧ C_1 ⇒ C_2, where C_1 and C_2 are binary attributes and A is a numeric attribute. The main task of the algorithm is to generate thousands of equi-depth buckets and combine some of them to generate optimized ranges. The performance of the bucketing algorithm was compared with Naive Sort and Vertical Split Sort, and the algorithm demonstrated superior performance. * Brin's Algorithm: In 1999, Brin et al. proposed an optimized algorithm for mining one and two numeric attributes <cit.>. The focus of the algorithm was on optimizing gain rules, where the gain of a rule R is defined by the difference between the support of (antecedent ∧ consequent) and the support of antecedent, multiplied by the user-specified minimum confidence. To reduce the input size, a bucketing algorithm was employed. For one numeric attribute, the algorithm computes optimized gain rules, while for two numeric attributes, a dynamic programming algorithm was presented to compute approximate association rules. Although the algorithm was successful for one numeric attribute, it was not well-suited for large domain sizes in the case of two numeric attributes. * Numerical Attribute Merging Algorithm: Li et al. <cit.> developed an algorithm that merges adjacent intervals of numeric attributes based on a merging criterion that considers value densities and distances between values. They called this the numerical attribute merging algorithm and used it to find suitable intervals for the QARM algorithm. After discretizing the numeric attributes, this algorithm treats each interval as a boolean attribute, allowing them to work with classical ARM. * Rastogi's Algorithm: In 2002, Rastogi and Shim extended the work done by <cit.>. They presented efficient methods for reducing the search space during the computation of optimized association rules applicable to both categorical and numeric attributes. * Sliding Window Partitioning - Random Forest (SWP-RF) Algorithm: In a related study, Guanghui Fan et al. <cit.> proposed a machine learning-based QARM method called SWP-RF to identify factors that cause network deterioration. This method uses sliding window partitioning (SWP) to discretize continuous attributes into boolean values, followed by random forest (RF) feature importance to measure the association between key performance indicator (KPI) and key quality indicator (KQI). * Numerical Association Rule-Discovery: Song and Ge <cit.> proposed NAR-Discovery, a divide-and-conquer algorithm for mining numerical association rules. 
NAR-Discovery progresses in two phases. In the first phase, attributes are partitioned into a small number of large buckets, and then neighbouring buckets are mapped to an “item,” and apply a classical frequent itemset mining algorithm. In the second phase, only the outermost buckets of each rule are recursively partitioned, and some bounds and filtering are used to end the process. The authors improved performance by one to two orders of magnitude using optimization techniques. They developed a search based on a tree structure to manage rule derivations, and interesting rules were selected using an optimization technique based on temporary tables. NAR-Discovery was compared with QuantMiner <cit.> and claimed to discover all appropriate rules. Clustering Based Algorithms * Miller's Algorithm: Miller et al. <cit.> introduced a distance-based ARM approach for interval data in 1997. To handle the memory requirements, they utilized a B^+ tree data structure. The authors first used a clustering algorithm to identify intervals and then applied a standard ARM algorithm to extract association rules from these intervals. * Association Rule Clustering System (ARCS): In 1997, Lent et al. <cit.> introduced a comprehensive framework called ARCS that focused on rules with two quantitative attributes on the antecedent side and one categorical attribute on the consequent side. ARCS consists of four main components: binner, association rule engine, clustering, and verifier. In the binner phase, quantitative attributes are divided into bins using the equi-width binning method, and these bins are then mapped to integers. The BitOp algorithm is used to enumerate clusters from the grid and locate them within the Bitmap grid by performing bitwise operations, which results in clustered association rules. However, this method is limited to handling low-dimensional data and cannot handle high-dimensional data. * Interval Merger Algorithm: In 1998, Wang and Han <cit.> proposed an algorithm for merging adjacent intervals of numeric attributes by evaluating merging criteria. This algorithm has two phases: initialization and bottom-up merging. They used an M-tree, which is a modified B-tree, to efficiently find the best merge during the merging phase. Additionally, two interestingness measures, J_1 and J_2, were used to evaluate the interestingness of the discovered association rules. The higher the values for both measures, the more interesting the rule was considered to be. * Relative Unsupervised Discretization (RUDE): In 2000, Ludl et al. proposed the RUDE algorithm as a merging approach based on the merging and splitting technique <cit.>. The RUDE algorithm considers the interdependence of attributes and consists of three main steps. The first step is the pre-discretizing phase, where equal-width discretization is applied to the data. In the second step, called structure projection, the structure of each source attribute is projected onto the target attribute. This projection is then used to perform clustering on the target attribute, resulting in the gathering of split points in the split point list. Finally, in the postprocessing step, the split points are merged using predefined merging parameters. The RUDE algorithm was primarily used as a preprocessing step for the apriori algorithm. The association rules extracted from RUDE and apriori were combined to obtain the final results. * Dense Regions Miner (DRMiner): In 2005, Lian et al. 
<cit.> proposed the DRMiner algorithm, which efficiently identifies dense regions and maps them to QARs. To achieve this, the authors developed a three-step approach. First, a k-d tree is built to store valid cells in the space and their corresponding number of points. Second, a dense region cover set is grown inside some leaf nodes from their boundaries, and self-merging of cover sets is done across boundaries. Finally, the cells are traversed in each cover to find dense regions. The authors evaluated the complexity of DRMiner for different steps and used a synthetic data set with varying numbers of attributes and instances for evaluation. * Density-Based Sub-space Miner (DBSMiner): The DBSMiner algorithm, proposed in 2008 by Guo et al. <cit.>, aims to cluster the high-density subspace of quantitative attributes. CBSD (Clustering Based on Sorted Dense Units), a new clustering algorithm, was used to sort all subspaces with densities greater than a certain threshold in descending order. Interestingly, DBSMiner has a unique property when dealing with low-density subspaces: it only needs to verify the neighbouring cell instead of scanning the entire space. The algorithm is capable of uncovering interesting association rules. * Mining Quantitative Association Rule (MQAR): Yang et al. <cit.> proposed the MQAR algorithm in 2010, which utilizes dense regions to generate numerical association rules. The algorithm clusters dense subspaces using the DGFP tree (dense grid frequent pattern tree) in four main steps. Firstly, the data space is partitioned into non-overlapping rectangular units by partitioning each quantitative attribute into intervals. Then, a DGFP tree is created to store dense cells in the space with a density greater than the minimal density criterion by mapping all database transactions into a high-dimensional space S and sorting units by density. The third step is to mine the DGFP tree to obtain dense subspaces, which provide information about database transactions. Finally, the dense subspaces in S are identified based on the dense subspaces, and associated cells are found to build clusters. Association rules that are not redundant are then constructed using the clustering result. * Quantitative Association Rule Mining Method with Clustering Partition (QARC_Apriori): The QARC_Apriori algorithm, proposed in 2014, aimed to analyze correlations in satellite telemetry data <cit.>. The algorithm involved three main steps: First, it performed dimensionality reduction to eliminate redundant attributes. Second, it discretized numeric attributes using the K-means clustering algorithm. Finally, it used the apriori algorithm for mining QARs, with frequent itemset mining and rule generation. Since satellite telemetry data has a vast amount of data, numerical attributes, and high dimensions with various attributes such as voltage, current, pressure, and temperature, the authors used the grey relational analysis method to reduce the dimensionality. * Graph Clustering and Quantitative Association Rules (GCQAR): Medjadba et al. <cit.> proposed GCQAR, a method for discovering significant patterns in geochemical data by combining graph clustering and QARM. Identifying hidden patterns related to mineralization in geochemical data is a challenging task. The proposed method tackles this by first applying graph clustering to partition the input data into highly cohesive, sparsely connected subgraphs. This step helps to separate the relevant geochemical data from the complex background. 
Then, QARs are used to measure the interrelation between pairs of vertices in each subgraph. For each cluster, a set of QARs is generated by randomly selecting antecedent and consequent rules and evaluating them based on support and confidence. Fuzzy Based Algorithms * Fuzzy-Automatic Pattern Analysis and Classification System (F-APACS): Chan extended the APACS2 algorithm for QARM by proposing the F-APACS algorithm <cit.>, which is based on fuzzy set theory and is designed for mining association rules with numeric attributes. Instead of finding intervals for quantitative attributes as done in other methods, F-APACS uses linguistic terms to represent discovered patterns and exceptions. Similar to APACS2, F-APACS also employs the adjusted difference analysis technique, which eliminates the need for a user-supplied threshold and can discover both positive and negative association rules. To capture the uncertainty associated with the fuzzy association rules, F-APACS uses a weight of evidence measure to represent confidence. * Kuok's Approach: Kuok et al. <cit.>, proposed the method for mining fuzzy association rules of the form, “If X is A then Y is B.” Here X, Y are attributes and A, B are fuzzy sets. This approach is important because it provides a better way of handling numeric attributes compared to existing methods. The study showed that the use of fuzzy sets helps to understand the correlation between two attributes through the significance factor and certainty factor. * Fuzzy Transaction Data mining Algorithm (FTDA): Hong et al. <cit.> used the fuzzy concept with the apriori algorithm to discover fuzzy association rules from a quantitative data set. To overcome the limitation of the apriori algorithm in handling quantitative data, the authors introduced the FTDA (Fuzzy Transaction Data mining Algorithm), which first transformed quantitative data into linguistic terms using membership functions. Next, the scalar cardinalities of all linguistic terms were calculated, and the apriori algorithm was modified to find association rules as fuzzy sets. However, a drawback of this method is that experts need to provide the best fuzzy sets of quantitative attributes manually. * Gyenesei's Approach: Gyenesei <cit.> addressed the limitation of expert dependency in selecting fuzzy sets for quantitative attributes by introducing a fuzzy normalization process. To obtain unbiased membership functions, the author proposed using fuzzy covariance and fuzzy correlation values. Interest measures were defined in terms of fuzzy support, fuzzy confidence, and fuzzy correlation. The approach was evaluated using two methods: with normalization and without normalization. The non-normalized method produced the most interesting rules, while the number of rules generated by the normalized approach was comparable to the discrete method. The fuzzy normalization process helped to reduce anomalies that may arise from the arbitrary selection of fuzzy sets. * Generalized Fuzzy Quantitative Association Rule Mining Algorithm: Lee <cit.> proposed a novel algorithm for generalized fuzzy QARM, incorporating fuzzy concept hierarchies for categorical attributes and fuzzy generalization hierarchies of linguistic terms for quantitative attributes. Unlike other methods, this approach calculates the weighted support and weighted confidence by taking into account the importance weights of attributes. To eliminate redundant rules, the R-interest measure is used. 
The algorithm converts each transaction into an augmented transaction and applies apriori <cit.> to generate frequent itemsets with the aid of weighted support and weighted confidence measures. It then extracts QARs by removing rules not meeting the R-interest measure's criteria. * Optimized Fuzzy Association Rule Mining (OFARM): Zheng et al. <cit.> proposed a novel algorithm, OFARM (optimized fuzzy association rule mining), in 2014 to optimize the partition points of fuzzy sets with multiple objective functions. The frequent itemsets are generated using a two-level iteration process, and the certainty factor with confidence is used to evaluate fuzzy association rules. Hybrid Based Algorithms * Equal-Depth Partition with Fuzzy Terms (EDPFT): Zhang <cit.> proposed an enhanced version of the equi-depth partition (EDP) algorithm that integrated fuzzy terms, called EDPFT. This algorithm was designed to identify association rules that contain intervals, crisp values, and fuzzy terms on both the left-hand and the right-hand sides. Unlike FTDA, which relies on user-supplied fuzzy sets, EDPFT utilizes equi-depth partitioning to obtain the intervals of numeric attributes. Although the author did not evaluate the algorithm using any data set, this approach shows potential in dealing with both crisp and fuzzy values in ARM. * Mohamadlou et al. Algorithm: Mohamadlou et al. <cit.> introduced a fuzzy clustering-based algorithm for mining fuzzy association rules. The algorithm utilizes C-means clustering to cluster all the transactions, followed by obtaining the fuzzy partition for each attribute. It then converts the quantitative transactions into `fuzzy discrete transactions' by mapping the quantitative data into fuzzy partitions. The algorithm mines fuzzy association rules from the `fuzzy discrete transactions' using an ARM algorithm. * Fuzzy Inference Based on Quantitative Association Rule (FI-QAR): Wang et al. <cit.> proposed a three-phase algorithm called FI-QAR, which integrates clustering and fuzzy techniques. In the first phase, the density-based fuzzy adaptive clustering (DFAC) <cit.> algorithm was applied to discretize numeric attributes into discrete intervals. The intervals were then combined with the TS fuzzy model to generate a nominal vector matrix, which was used to modify the apriori algorithm and reduce the scanning overhead of a large database. The second phase involved mining QARs using an improved apriori algorithm. Finally, the third phase pruned the association rules. The proposed approach offers a way to mine QARs from large databases effectively. * Fuzzy Class Association Rule Support Vector Machine (FCARSVM): Kianmehr et al. <cit.> proposed FCARSVM to obtain fuzzy class association rules. The authors extracted Fuzzy Class Association Rules (FCAR) using a fuzzy C-means clustering algorithm in the first phase, and FCARs were weighted based on the scoring metric strategy in the second phase. §.§.§ The Optimization Method Evolution and DE-Based Algorithms * GENetic Association Rules (GENAR): Mata et al. <cit.> introduced GENAR as a genetic algorithm-based solution for NARM. GENAR is designed to identify numerical association rules with an unknown number of numeric attributes in the antecedent and a single attribute in the consequent. By utilizing genetic algorithms, GENAR offers an effective approach to discovering association rules involving numerical attributes. * Genetic Association Rules (GAR): The extended version of GENAR, called GAR, was proposed by Mata et al. <cit.>.
GAR utilizes the five fundamental phases of a genetic algorithm, namely initialization, evaluation, reproduction, crossover, and mutation, to discover intervals for numerical attributes. A key contribution of GAR is the introduction of a fitness function to determine the optimal amplitude for each numerical attribute's interval. The genes in GAR represent the upper and lower limits of the attribute intervals and are initially created randomly. Through crossover and mutation operations, a new generation of genes is generated, and the fitness function is used to evaluate the quality of the intervals. GAR provides an effective approach for identifying appropriate intervals for numerical attributes in ARM. * Genetic Association Rules Plus (GAR Plus): Alvarez et al. <cit.> made enhancements to the GAR algorithm and introduced GAR Plus. This improved version enables the automatic extraction of intervals for numerical attributes through an evolutionary process, eliminating the need for pre-discretization. GAR Plus enhances the fitness function of GAR by incorporating additional parameters such as support, confidence, interval amplitude, and the number of attributes with a modifier. By considering these parameters, GAR Plus provides a more comprehensive evaluation of the fitness of candidate intervals, resulting in improved performance and accuracy compared to the original GAR algorithm. * Alatas and Akin Algorithm: Alatas and Akin <cit.> have made significant contributions to the field of NARM. In one of their studies, Alatas extended the GAR algorithm to discover both positive and negative association rules. They compared the performance of their proposed algorithm with the original GAR algorithm and observed that the amplitude of the intervals generated by their approach was lower than that of GAR. This indicates that the extended algorithm by Alatas and Akin was able to identify more specific and precise intervals for numerical attributes, resulting in improved rule discovery. * QuantMiner: QuantMiner <cit.> is a system for discovering QARs that employs a genetic algorithm. The system operates with a predefined set of rule templates, which can be either user-selected or computed by the system itself. These templates define the format of the QARs. By utilizing the genetic algorithm, QuantMiner searches for the optimal intervals for the numerical attributes specified in the rule templates. This approach allows the system to efficiently explore the search space and identify association rules that meet the desired criteria. * Expending Association Rule Mining with Genetic Algorithm (EARMGA): Yan et al. <cit.> presented an encoding method for discovering association rules using a genetic algorithm. Their approach, named ARMGA, initially designed for boolean attributes, was extended to handle generalized association rules incorporating both categorical and quantitative attributes. The authors introduced a fitness function based on relative confidence, eliminating the need for a user-defined minimum support threshold. To handle quantitative attributes, they discretized them into intervals and integrated four genetic operators into the algorithm. The resulting enhanced version, EARMGA, successfully accommodated quantitative attributes and utilized the k-FP tree data structure for efficient rule mining. * Real-Coded Genetic Algorithm (RCGA): Martinez et al. <cit.> introduced a real-coded genetic algorithm (RCGA) for NARM. 
The RCGA is a variation of the binary-coded CHC algorithm <cit.>, known for its elitist selection mechanism that favors the best individual for the next generation. In the context of NARM, the RCGA is utilized to search for optimal intervals. By employing real-coded representations and incorporating the elitist selection feature, the RCGA aims to efficiently explore the search space and discover high-quality numerical association rules. * Quantitative Association Rules by Genetic Algorithm (QARGA): Martínez et al. <cit.> improved the RCGA by proposing QARGA to extract QARs from real-world multidimensional time series. The QARGA method discovered significant relationships between ozone concentrations in the atmosphere and other climatological time series, including temperature, humidity, wind direction, and speed. * Niching Genetic Algorithm for Quantitative Association Rules (NICGAR): NICGAR was proposed by Martin et al. <cit.> to prevent the generation of similar rules by reducing the set of quantitative rules, which includes positive and negative rules. The algorithm consists of three components: an external population, a punishment mechanism, and a restarting process to manage niches and avoid the same solutions. The article also proposes a new similarity measure to find the similarity between rules. * QAR-CIP-NSGA-II: Martin et al. <cit.> presented a novel multi-objective evolutionary algorithm called QAR-CIP-NSGA-II, which extends NSGA-II to simultaneously learn the intervals of attributes and conditions for each rule in a QAR system. QAR-CIP-NSGA-II aims to discover a set of high-quality QARs that balance interpretability and accuracy by maximizing comprehensibility, interestingness, and performance objectives. The algorithm incorporates an external population and a restarting method to enhance population diversity and store discovered nondominated rules. The comprehensibility of a rule is measured by the number of attributes involved in the rule, while the product of the certainty factor and support determines the accuracy. The interestingness measure, lift, is used to determine how significant the rule is. * Multi-Objective Genetic algorithm Association Rule mining (MOGAR): Minaei-Bidgoli et al. <cit.> proposed the MOGAR algorithm for discovering association rules from numerical data. The algorithm maintains a population of candidate association rules, representing potential solutions, and applies genetic operators such as selection, crossover, and mutation to evolve the population over successive generations. The fitness of each candidate rule is evaluated based on multiple objectives, such as confidence, interestingness, and comprehensibility. MOGAR employs a Pareto dominance concept to identify non-dominated solutions. MOGAR has the ability to handle complex datasets with multiple conflicting objectives, providing a more comprehensive view of associations in the data. * Multi-Objective Positive Negative Association Rule Mining Algorithm (MOPNAR): Martin et al. <cit.> proposed MOPNAR, a multi-objective algorithm that aims to achieve the same objectives as QAR-CIP-NSGA-II, including mining a reduced set of positive and negative QARs. The authors also claimed to achieve a low computational cost and good scalability, even with an increased problem size. In addition, MOPNAR was compared with other existing evolutionary algorithms such as GAR, EARMGA, GENAR, and MODENAR. * Multi-Objective Quantitative Association Rule Mining (MOQAR): Martínez et al.
<cit.> improved the multi-objective evolutionary algorithm (MOEA) non-dominated sorting genetic algorithm-II (NSGA-II) <cit.> by integrating it with their proposed QARGA approach. The authors used principal component analysis (PCA) to select the best subset of quality measures for the fitness function. Additionally, different distance criteria were introduced to replace the crowding distance of solutions to obtain secondary rankings in Pareto fronts. The primary ranking was achieved through the non-dominated sorting of the solutions. * Multi-Objective Evolutionary Algorithm for Quantitative Association Rule Mining (MOEA-QAR): The MOEA-QAR algorithm <cit.> combines a genetic algorithm with clustering to mine interesting association rules. The dataset is first clustered using K-means, and each cluster is used as input to a separate GA to extract rules for that cluster. The fitness function of each chromosome is defined by confidence, interestingness, and cosine2. The algorithm can be applied to the entire dataset or just to each cluster, and experiments show that more rules are retrieved per cluster than for the whole dataset. Notably, users do not need to specify minimum support or confidence thresholds. * Association Rule Mining with Differential Evolution (ARM-DE): In 2018, Fister et al. <cit.> proposed a novel approach to ARM with numerical and categorical attributes based on differential evolution. Their algorithm consists of three stages: domain analysis, solution representation, and fitness function definition. In domain analysis, attribute domains are determined for numerical and categorical attributes. For numerical attributes, the minimum and maximum bounds are defined, while for categorical attributes, a set of values is enumerated. Each solution is represented mathematically using a real-valued vector. The fitness function is then calculated based on confidence and support, and optimization is achieved by maximizing the fitness function value. * Rare-PEAR: The Rare-PEARs algorithm proposed by Almasi et al. <cit.> aims to discover various interesting and rare association rules by giving a chance to each rule with a different length and appearance. The algorithm decomposes the process of ARM into N-1 sub-problems, where each sub-problem is handled by an independent sub-process during Rare-PEARs execution. N is the number of attributes, and each sub-process starts with a different initial population and explores the search space of its corresponding sub-problem to find rules with semi-optimal intervals for each attribute. This approach allows for a more comprehensive exploration of the search space, discovering more diverse and rare association rules. * Genetic Network Programming (GNP): Taboada et al. <cit.> proposed Genetic Network Programming (GNP) as a graph-based approach to ARM with numerical attributes. GNP consists of three node types: a start node, a judgement node, and a processing node. The judgement nodes act as conditional branch decision functions, while the processing nodes act as action functions. Evolution is carried out using crossover and mutation operators, and the significance of important rules is measured using the chi-square test. Rules are stored in a pool, which is updated every generation, and the lower chi-squared value rule is exchanged with a higher chi-squared value rule. This approach effectively extracts important rules from the database. * Grammar-Guided Genetic Programming Association Rule Mining (G3PARM): Luna et al. 
<cit.> applied Grammar-Guided Genetic Programming (G3P) to the task of finding QARs, building on their previous work in 2010, where they introduced G3PARM for ARM. The focus of the approach proposed in <cit.> was to reduce gaps in numerical intervals and emphasize the distribution of instances. To achieve this, the authors developed a self-adaptive algorithm that dynamically adjusts the number of parameters used in the evolutionary process and utilizes context-free grammar to represent solutions. The algorithm aims to identify the best rules according to a given fitness function, which are then stored in a pool and updated in each generation. * Multi-Objective Differential Evolution algorithm for Numeric Association Rules (MODENAR): Alatas et al. <cit.> proposed a multi-objective differential evolution algorithm to discover accurate association rules from numeric attributes. The algorithm was designed to optimize four objectives: amplitude, comprehensibility, support, and confidence, based on Pareto principles. The support and confidence of the discovered rules were required to be high. Comprehensibility was defined as the number of attributes involved in a rule, and shorter rules were preferred. The amplitudes of attribute intervals were aimed to satisfy fewer rules; hence amplitude was minimized while support, confidence, and comprehensibility were maximized. Swarm Intelligence-Based Algorithms * Rough Particle Swarm Optimization Algorithm (RPSOA): The RPSO algorithm was introduced as the first PSO-based algorithm for NARM with rough particles <cit.>. This algorithm aims to determine numeric attribute intervals and then discover association rules that conform to these intervals, where the fitness function is responsible for determining the amplitude of the intervals. Rough values of each attribute are defined by upper and lower bounds and are useful in representing an interval for an attribute. Each rough particle has decision variables representing items and intervals. It consists of three parts: the first describes the antecedent or consequent of a rule, the second represents the lower bound, and the third represents the upper bound of the interval. An item is considered an antecedent if its value is between 0 and 0.33, a consequent if it is between 0.33 and 0.66, and if it is between 0.66 and 1.0, the item would not be included in the rule. Once the RPSO algorithm completes its execution, attribute bounds refinement is performed for the covered rule. This refinement step aims to improve the quality and accuracy of the discovered association rules by further optimizing the attribute bounds. * Chaotically ENcoded Particle Swarm Optimization Algorithm (CENPSOA): The CENPSOA algorithm, proposed by Alatas and Akin <cit.>, introduced the use of chaos variables and particles in PSO for the first time. Unlike previous PSO-based methods, CENPSOA employs chaotic numbers to encode particle information. Specifically, each chaotic number mid_rad represents an interval with a lower bound of mid-rad and an upper bound of mid+rad. In CENPSOA, a particle is represented as a string of chaotic parameters consisting of a midpoint and radius pair. Each decision variable consists of three parts: the first part represents the antecedent or consequent, the second part describes the midpoint and the third part represents the radius. This algorithm works similarly to RPSOA but is different only with the encoding of particles. 
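To make the interval-and-role particle encoding used by RPSOA and CENPSOA more concrete, the following minimal Python sketch decodes a particle into a candidate quantitative rule. It is not code from the cited papers; the attribute names, domains, and particle values are hypothetical, and only the 0.33/0.66 role thresholds follow the description above.

```python
# Minimal sketch of decoding an RPSOA-style particle into a quantitative rule.
# Each decision variable has three parts: a role value in [0, 1] and the lower
# and upper bounds of the attribute's interval. All concrete values are invented.

def decode_particle(particle, attribute_ranges):
    """particle: list of (role, lower, upper) triples, one per attribute."""
    antecedent, consequent = {}, {}
    for (name, (lo_dom, hi_dom)), (role, lo, hi) in zip(attribute_ranges.items(), particle):
        # keep the interval valid and inside the attribute's domain
        lo, hi = max(lo_dom, min(lo, hi)), min(hi_dom, max(lo, hi))
        if role < 0.33:        # attribute goes to the antecedent
            antecedent[name] = (lo, hi)
        elif role < 0.66:      # attribute goes to the consequent
            consequent[name] = (lo, hi)
        # role >= 0.66: attribute is left out of the rule
    return antecedent, consequent

ranges = {"age": (18, 90), "income": (10_000, 120_000), "spend": (0, 5_000)}
particle = [(0.10, 25, 40), (0.50, 30_000, 60_000), (0.80, 100, 900)]
print(decode_particle(particle, ranges))
# ({'age': (25, 40)}, {'income': (30000, 60000)})
```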
* Parallel PSO for Quantitative Association Rule Mining (PPQAR): Yan et al.<cit.> parallelized the PSO algorithm for ARM to increase its scalability and efficiency in dealing with large datasets in real-world applications. To evaluate each particle's quality, the suggested technique used four optimization objectives: support, confidence, comprehensibility, and interest. The parallel PSO method employs two techniques to handle distinct application scenarios: particle-oriented and data-oriented. The particle-oriented technique is well-suited for small datasets with a large number of particles, treating each particle as a separate computing unit and computing the fitness function in parallel. On the other hand, the data-oriented approach is suitable for large datasets, dividing the entire dataset into partitions and treating each partition as a computing unit. Unlike the particle-oriented method, the data-oriented method updates particle locations, velocities, and local best sets in parallel. Both methods were compared with the benchmark serial algorithm. * Multi-Objective Particle swarm optimization algorithm for Association Rules mining (MOPAR): The MOPAR algorithm, proposed by Beiranvand et al. <cit.>, is a multi-objective particle swarm optimization (MOPSO) technique based on Pareto optimality. It aims to extract numerical association rules in a single step using three objectives: confidence, comprehensibility, and interestingness. Like RPSOA, the particle in MOPAR is represented by lower and upper bounds of intervals for each attribute. To address the problem of numerical ARM, MOPAR provides a redefinition of lbest and gbest particles and a selection procedure. The algorithm was compared with other multi-objective ARM algorithms, including MODENAR, MOGAR, RPSOA, and GAR. * PSO with the Cauchy Distribution (PARCD): A method proposed by Tahyudin et al. <cit.> extends the MOPAR algorithm by combining PSO with the Cauchy distribution. In traditional PSO, the velocity of a particle approaches 0 after many iterations, leading to premature searching and suboptimal results. The proposed approach addresses this issue by integrating the Cauchy distribution in the velocity equation, allowing particles to continue exploring the search space. This method uses multiple objectives, including support, confidence, comprehensibility, interestingness, and amplitude functions, to extract numerical association rules in a single step. To evaluate the method's performance, it was compared with MOPAR, MODENAR, MOGAR, and RPSOA on various datasets, and it was found that the proposed method, called PARCD, outperformed MOPAR. * Wolf Search Algorithm (WSA): Agbehadji <cit.> introduced a wolf search algorithm for NARM inspired by the hunting behaviour of wolves. The algorithm is based on three stages of wolf-preying behaviour: actively seeking prey, passively seeking prey, and escaping from predators. The algorithm generates association rules if the wolf is actively seeking prey, and no rules are generated if the wolf is passively seeking prey or escaping. The fitness function includes support, confidence, the number of attributes, and the penalization of interval frequency. The algorithm represents rules using the wolf's best position and fitness value, and each wolf's position contains decision variables for items and intervals. While this study introduces the algorithm, it has not been evaluated on datasets, and the algorithm's accuracy and efficiency will be determined in future work. 
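The multi-objective swarm methods just described (MOPAR, PARCD, and related algorithms) all rest on Pareto dominance over objective vectors such as (confidence, comprehensibility, interestingness). The sketch below shows a generic dominance test and a simple non-dominated filter; it is an illustrative outline rather than an implementation of any cited algorithm, and the example objective vectors are invented.

```python
# Generic Pareto-dominance test and non-dominated filtering (all objectives maximized).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(solutions):
    """Keep only solutions not dominated by any other (simple O(n^2) filter)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# hypothetical (confidence, comprehensibility, interestingness) vectors for three rules
candidates = [(0.80, 0.40, 0.30), (0.70, 0.60, 0.35), (0.60, 0.30, 0.20)]
print(non_dominated(candidates))   # the third vector is dominated by the first
```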
* Multi-objective Particle Swarm Optimization (MOPSO): The MOPSO algorithm, originally proposed by Coello <cit.> in 2004, utilizes Pareto dominance and an archive controller. In 2019, Kuo et al. <cit.> developed a MOPSO algorithm for NARM consisting of three stages: initialization, adaptive archive grid, and PSO searching. Particle representation and initialization are the same as in the RPSOA algorithm. The adaptive archive grid is a hypercube-shaped space designed to obtain non-dominated solutions by comparing all particle solutions using Pareto optimality. It contains two components: the archive controller and the grid. The external archive retains non-dominated solutions, and new solutions are added if existing ones do not dominate them or if the external archive is empty, the new solution is saved in the external archive; otherwise, it is discarded. The adaptive grid approach is used when the external population reaches its maximum capacity. The objective function space is partitioned into regions. The grid is recalculated if the external population's individual falls outside the grid's bounds, and each individual within it must be relocated. After the archive grid stage, PSO searching occurs. The algorithm also utilizes three objectives: confidence, comprehensibility, and interestingness to generate rules. * Ant Colony Optimization for Continous attributes (ACO_R): The ACO_R algorithm, introduced by Moslehi and Eftekhari <cit.>, is an ant colony optimization technique designed to discover association rules for numeric attributes without relying on minimum support and confidence thresholds. Unlike the ACO algorithm, which uses a discrete probability distribution, ACO_R employs a probability density function. It employs a solution archive size of k to describe the pheromone distribution over the search space, instead of a pheromone table. The algorithm works by having the ants move across the archive, selecting a row based on its associated weight (ω). Then a new solution is created by sampling the Gaussian function g for each dimension's values in the selected solution. Each numeric attribute corresponds to one dimension of the solution archive, which is divided into three sections that make up a numeric association rule: the first part represents the rule's antecedent or consequence; the second part represents its value; and the third part represents its standard deviation, which is used to form numeric attribute intervals. The algorithm uses Gaussian functions to determine the attribute intervals that correspond to interesting rules, with the function controlling the intervals' frequency and length. The objective function has four components. The first section, which can be viewed as the rule's support, measures the importance of the association rule. The second section is the confidence value of the rule. The third section is the number of attributes, while the last section penalizes the amplitude of the intervals that comply with the itemset and rules. The pheromone update technique introduces a set of new solutions, each generated by one ant, and eliminates the same number of bad solutions from the archive after ranking them to track the solutions. This ensures that the top-ranked solutions are always at the archive's top, and that the best solution in each execution of ACO_R is a rule. 
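The core sampling step of ACO_R described above can be sketched as follows: an archived solution is selected with probability proportional to its rank-based weight, and each dimension of a new candidate is drawn from a Gaussian centred on that solution. This is a rough illustration under simplifying assumptions, not the authors' code; in particular, the fixed sigma and the archive values are made up, whereas the actual algorithm derives each dimension's standard deviation from the spread of the archived solutions.

```python
# Rough sketch of ACO_R-style sampling from a solution archive.
import random

def sample_from_archive(archive, weights, sigma=0.1):
    """archive: list of k solution vectors; weights: rank-based selection weights."""
    guide = random.choices(archive, weights=weights, k=1)[0]   # pick one archived solution
    # sample each dimension around the chosen solution with a Gaussian kernel
    # (the real algorithm would compute sigma per dimension from the archive spread)
    return [random.gauss(mu, sigma) for mu in guide]

archive = [[0.2, 0.5, 0.9], [0.3, 0.4, 0.8], [0.6, 0.1, 0.7]]   # k = 3 archived solutions
weights = [3.0, 2.0, 1.0]                                        # best-ranked solution first
print(sample_from_archive(archive, weights))
```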
* Multi-Objective Cuckoo search Algorithm for Numerical Association Rule Mining (MOCANAR): MOCANAR <cit.> is a multi-objective cuckoo search algorithm that uses Pareto principles to derive high-quality association rules from numeric attributes. The algorithm mimics the brooding parasitic behavior of cuckoo species and represents ARM using a 2D array. The columns of the array represent the attributes in the dataset, and the first row among three rows represents the attribute's location. The second row consists of the lower bound of the attribute, and the third row represents the upper bound of the attribute. A value of 0 in the first row indicates that the related attribute is not present in the rule, 1 shows that the attribute belongs to the antecedent part of the rule, and 2 shows that the attribute belongs to the consequent part of the rule. MOCANAR considers four objectives: support, confidence, interest, and comprehensibility. The algorithm was evaluated on three datasets and produced a small number of high-quality rules incrementally for each iteration of the method. * Multi-Objective Bat Algorithm for Numerical Association Rule Mining (MOB-ARM): Heraguemi et al. <cit.> proposed a multi-objective bat algorithm for NARM inspired by microbats' behaviour. The algorithm uses four quality measures, namely support, confidence, comprehensibility, and interestingness, and two global objective functions to extract interesting rules. The first objective function combines support and confidence, while the second objective function considers comprehensibility and interestingness. The algorithm comprises three main steps: initialization, searching for the non-dominance solution for the Pareto point, and searching for the best solution for each bat at the Pareto point. The rule is encoded using the Michigan approach. The bats are initialized with random frequency and velocity, and the proposed algorithm is also compared with other algorithms, including MODENAR, MOGAR, and MOPAR. * Discrete Crow Search Algorithm for Quantitative Association Rule Mining (DCSA-QAR): In 2021, a new algorithm called DCSA-QAR was proposed for mining numerical association rules <cit.>. This approach utilizes a novel discretization algorithm called Confidence-based Unsupervised Discretization Algorithm (CUDA) that employs the confidence measure to discretize numerical attributes. The CSA is then transformed from continuous to discrete using crow position encoding, and new operators are used to ensure that any position update within the search space is valid. Each crow in the flock is represented by its current position and memory positions, with each particle composed of two vectors for control and parametric attributes. The control attributes can have one of three values: 0 indicates that the attribute is not part of the rule, 1 indicates that it belongs to the antecedent, and -1 indicates that it is part of the consequent. The fitness function is optimized by maximizing the measures of support, confidence, and gain of the rules. DCSA-QAR was compared with several mono and multi-objective algorithms, including NICGAR, MOPNAR, MODENAR, and MOEA-Ghosh. Physics Based Algorithms * Gravitational Search Algorithm for NARM (GSA-NARM): GSA is a physics-inspired metaheuristic that leverages Newton's law of gravity. In the context of NARM, the GSA algorithm, as described by Can et al. <cit.>, aims to discover attribute intervals simultaneously without needing a minimum support or confidence threshold. 
In GSA-NARM, agents are treated as objects, and their positions represent potential solutions. The objective function determines the amplitude of the intervals being explored. The algorithm identifies the position of the agent with the heaviest mass as the global solution, analogous to the gravitational force exerted by massive objects in Newton's law. During the optimization process, the fitness function is evaluated for each agent, and the gravitational constant, denoted as G, is updated based on the performance of the best and worst agents in the population. The mass M of each agent is computed, and the velocity and position are updated accordingly, mimicking the motion of celestial objects influenced by gravitational forces. The GSA-NARM algorithm continues iterating until the stopping criteria are met, such as reaching a maximum number of iterations or achieving a desired fitness value. The algorithm then returns the association rule with the best fitness value obtained during the optimization process. GSA-NARM has demonstrated promising results when compared to other state-of-the-art methods for NARM, showcasing its effectiveness in tackling the NARM problem. Algorithm for Hybrid Based * Hybrid Genetic PSO-Quantitative Association Rule Mining (HGP-QAR): Moleshi et al. <cit.> introduced a hybrid approach called HGP-QAR, which combines the strengths of multi-objective GA and multi-objective PSO methods. By leveraging the advantages of both techniques, HGP-QAR aims to improve the efficiency of NARM. The hybridization of GA and PSO allows for the exploration of the search space from different perspectives. In HGP-QAR, individuals are represented as chromosomes for GA and particles for PSO. The individuals are sorted based on a fitness function that considers three metrics: confidence, interestingness, and comprehensibility. During the optimization process, the upper half of individuals follow the stages of GA, including selection, crossover, and mutation, while the lower half follows the stages of PSO, updating their velocity and positions based on the personal best (pbest) and global best (gbest) positions. This combination of GA and PSO allows for a more efficient search and exploration of the solution space. The outcomes obtained from GA and PSO are then combined to generate the next generation and form new rules. This process is repeated until the termination criteria are met, such as reaching a maximum number of iterations or achieving satisfactory results. Various experimental results show that the hybrid GA-PSO approach, HGP-QAR, outperforms other algorithms like MOPAR and PARCD in terms of efficiency, demonstrating its effectiveness in NARM. * Multi-objective Hybrid Differential Evolution Sine Cosine Numerical Association Rule Mining Algorithm (MOHDESCNAR): DE has been known to suffer from premature convergence and stagnation issues in multi-modal search spaces. A recent approach called MOHDESCNAR <cit.> has been proposed to overcome these problems. This algorithm reduces the number of numerical association rules by adjusting the intervals of related numeric attributes. It employs hybrid sine and cosine operators with DE, which can overcome stagnation issues. The proposed algorithm balances exploration and exploitation by using global DE exploration and local SCA exploitation during iterations to prevent premature convergence and stagnation problems. 
This study used three methods: using only the sine operator (MOHDESNAR), only the cosine operator(MOHDECNAR), and both the sine and cosine operators (MOHDESCNAR). * Quantitative Association Rule miner with Chaotically Encoded Hybrid Differential Evolution and Sine Cosine Algorithm (QARCEHDESCA): Altay and Alatas proposed the MOHDESCNAR algorithm in 2021, which used a combination of DE and the sine and cosine algorithms. In 2022, the authors introduced a new hybrid algorithm called QARCEHDESCA <cit.>, which employs chaos number-based encoding and HDESCA (Hybrid differential evolution sine cosine algorithm). The QARCEHDESCA algorithm dynamically discovers the ranges of quantitative attributes and association rules. It randomly initializes candidate search agents to find quantitative associations. The initial set of search agents is removed from all-dominating search agents. The remaining nondominated search agents are sent to SCA-based new operators and DE crossover. The nearest neighbour distance function is used to remove rules close to each other when the count of nondominated rules exceeds the defined threshold. For QARCEHDESCA, the best search agent and one random agent are chosen for sine and cosine operators. After that, DE's crossover operator is applied to nondominated search agents. If the trial agents dominate the target search agent, it is added to the population; otherwise, the search agent with the highest weighted sum fitness is chosen for subsequent iterations. When the maximum number of iterations is reached, QARCEHDESCA returns nondominated QARs. The fitness function of the algorithm aims to maximize support, confidence, and comprehensibility while minimizing attribute amplitudes. Each search agent represents a numerical association rule with two components: inclusion/exclusion and a chaotic number representing the center point and radius. QARCEHDESCA is compared with RPSOA and other intelligent optimized algorithms. §.§.§ The Statistical Method * Aumann and Lindell's Work: Aumann and Lindell <cit.> introduced a new definition of QARs based on the distribution of values of quantitative attributes and presented an algorithm to mine them. To consider the distribution of continuous data, they used conventional statistical measures. * Webb's Work: Aumann and Lindell's approach has the disadvantage of being impractical for generating frequent itemsets in dense data. To address this limitation, Webb proposed an efficient admissible unordered search algorithm for discovering impact rules, which capture meaningful interactions between data selectors and numeric variables in dense data <cit.>. Impact rules were introduced as a new name for QARs. The proposed OPUS_IR algorithm uses the OPUS framework and does not need to retain all frequent itemsets in memory during frequent itemset generation, unlike Aumann and Lindell's method <cit.>. It also does not require a minimum cover to be specified for the search. The OPUS_IR algorithm was compared with the frequent itemset approach in terms of performance. * Kang et al. Work: The authors of the study <cit.> introduced a new approach to bipartition quantitative attributes called standard deviation minimization. This technique minimizes the standard deviation of two partitions obtained by dividing the attribute into two parts, and it outperforms existing bipartition techniques. 
The authors also redefined the mean-based and median-based bipartition techniques, and their experimental results confirmed the effectiveness of the proposed framework. §.§.§ Miscellaneous Other Methods * Mutual Information and Clique (MIC) Framework: Yiping et al. <cit.> proposed a novel approach for mining QARs using an information-theoretic framework called MIC. This framework avoids the generation of excessive itemsets by investigating the relationship between attributes. The approach comprises three phases: 1) discretization, which partitions numeric attributes into intervals; 2) MI graph construction, which computes the normalized mutual information of attributes and represents their strong relationships using an MI (mutual information) graph; and 3) clique computation and QAR generation, which computes frequent itemsets using cliques and generates QARs. The experiments demonstrate the effectiveness of the MIC framework in reducing the number of generated itemsets and improving the efficiency of QAR mining. * Generalized One-sided Quantitative Association Rule mining (GOQR) and Non-redundant Generalized One-sided Quantitative Association Rule mining (NGOQR): Zhiyang et al. <cit.> proposed a cognitive computing-based method for NARM, consisting of two algorithms: GOQR and NGOQR. These algorithms consider the order relation of attribute values when mining rules. The first phase of the GOQR algorithm generates frequent itemsets, while the second phase extracts generalized one-sided QARs. To enhance efficiency, the rules are reduced using a generalized one-sided concept lattice. For non-redundant rule extraction, the NGOQR algorithm first executes the minimal generator of a target itemset algorithm and then continues with rule mining. * Quantitative Miner with the VMO algorithm (QM_VMO): The Quantitative Miner with the VMO algorithm (QM_VMO) <cit.> utilizes the Variable Mesh Optimization algorithm <cit.>, a population-based meta-heuristic. The algorithm represents the population P as a mesh of n nodes, P = {n_1, n_2, ..., n_n}, where each node corresponds to a possible solution and consists of an m-dimensional vector n_i = (v^i_1, v^i_2,..., v^i_m). The algorithm primarily operates through expansion and contraction processes. QM_VMO is executed in three stages: (i) defining a rule template, (ii) generating the rule population, and (iii) optimizing the numerical attributes of the rule by optimizing intervals. Compared with QuantMiner, QM_VMO is found to be less sensitive to changes in the dataset. §.§ RQ3 What are the advantages and limitations of the existing NARM methods? There are strengths and limitations associated with each method for NARM. These advantages and limitations of each approach are summarized in Tables <ref>, <ref>, and <ref>. Discretization methods are advantageous in terms of simplicity, interpretability, and flexibility. They allow for the handling of both categorical and numerical attributes, and the resulting discrete intervals can be easily understood and applied. However, these methods require the specification of a user-defined threshold, which can be subjective and may affect the quality of the discovered rules. Discretization can also lead to information loss and may not capture the true underlying patterns in the data. Optimization methods excel in their ability to discover relationships and patterns in high-dimensional data without the need for user-defined thresholds or discretization steps.
They can handle both categorical and numerical attributes and are often more robust to noise and missing data. However, these methods can suffer from issues such as convergence problems, finding only local optima, high computational complexity, and the need for a large amount of computational resources. Statistical methods are advantageous in their ability to handle missing data and noise. They are also well-suited for analyzing categorical data and can provide statistical significance measures for the discovered rules. However, these methods are typically designed for categorical data and may not be suitable for numerical attributes. They often assume linear relationships and may not capture more complex patterns present in the data. Overall, each approach has its strengths and limitations, and the choice of method depends on the specific requirements and characteristics of the dataset being analyzed.

Table: Advantages and Limitations of Discretization Method (columns: Approach, Reference, Advantages, Limitations)
* Partitioning <cit.>: Advantages: Simple and easy to implement. Limitations: Minimum support and minimum confidence must be adjusted.
* Partitioning <cit.>: Advantages: Discovers both positive and negative rules; avoids a user-specified threshold. Limitations: Requires adjusted difference analysis.
* Clustering <cit.>: Advantages: The ARCS system scales better than linearly with data size. Limitations: The algorithm is sensitive to noise and is not suited to high-dimensional data.
* Clustering <cit.>: Advantages: Scalable to very large databases. Limitations: Generates only non-overlapping intervals; only generates rules whose consequent is a categorical attribute.
* Clustering <cit.>: Advantages: Reflects all the possible interdependencies between attributes in the data sets. Limitations: Requires a user-specified threshold.
* Clustering <cit.>: Advantages: Efficiently identifies a small set of subspaces for finding dense regions. Limitations: Searching within each cover is limited; many thresholds need to be specified; performance is poor on more than 10 dimensions; the dimensionality-curse problem is unsolved; the algorithm does not perform well on data sets with uniform density.
* Clustering <cit.>: Advantages: Effective and scales up linearly with an increased number of attributes. Limitations: A minimum threshold is needed.
* Clustering <cit.>: Advantages: No need to scan the database many times; does not generate many candidate units; the histogram H' saves the calculation time of the support of each grid. Limitations: As the number of transactions increases, the run time also increases.
* Fuzzy <cit.>: Advantages: Prunes less interesting rules.
* Fuzzy <cit.>: Advantages: Accuracy increases as the number of transactions increases. Limitations: Membership functions must be known in advance.
* Fuzzy <cit.>: Advantages: Optimizes the fuzzy sets; frequent itemsets are created through a two-level iteration process; flexible membership function.
* Fuzzy <cit.>: Advantages: Provides better clustering than other methods.

Table: Advantages and Limitations of Optimization Method (columns: Approach, Reference, Advantages, Limitations)
* Evolution and DE <cit.>: Advantages: Finds association rules from numeric data sets without discretization.
* Evolution and DE <cit.>: Advantages: Finds the amplitude of the intervals via the fitness function. Limitations: Only frequent itemsets are generated.
* Evolution and DE <cit.>: Advantages: Finds association rules from numeric and categorical attributes without discretization.
* Evolution and DE <cit.>: Advantages: Discovers both positive and negative rules.
* Evolution and DE <cit.>: Advantages: High-performance association rule mining; system automation; no need for a user-specified minimum support threshold.
* Evolution and DE <cit.>: Advantages: Low run time; discovers diverse rules, both positive and negative.
* Evolution and DE <cit.>: Advantages: Low computational cost and good scalability.
* Evolution and DE <cit.>: Advantages: No need to determine the minimum support and minimum confidence.
* Evolution and DE <cit.>: Advantages: Capable of dealing with numerical and categorical attributes. Limitations: The algorithm is unable to shrink the lower and upper borders of the numerical attributes.
* Evolution and DE <cit.>: Advantages: Association rules are mined without generating frequent itemsets; the algorithm is easy to implement and independent of minimum support and minimum confidence thresholds. Limitations: DE suffers from stagnation and premature convergence problems, and its local exploitation capability is weak.
* Swarm-Intelligence <cit.>: Advantages: Efficient and scalable for processing huge datasets. Limitations: PSO can get trapped in local optima.
* Swarm-Intelligence <cit.>: Advantages: Prevents the generation of a huge number of useless rules; no requirement for minimum support and minimum confidence thresholds. Limitations: Low support values for association rules.
* Swarm-Intelligence <cit.>: Advantages: Increases the global optimal value of the expanded search space.
* Swarm-Intelligence <cit.>: Advantages: No need for minimum support and minimum confidence thresholds. Limitations: Variable correlations are not differentiated by ant algorithms.
* Swarm-Intelligence <cit.>: Advantages: Provides better support and confidence. Limitations: A higher number of extracted rules decreases the interpretability of the results.
* Swarm-Intelligence <cit.>: Advantages: Reduces computation time.
* Physics-based <cit.>: Advantages: The confidence and support values of the automatically mined rules are very high; no prior requirement for minimum support and confidence thresholds; the problem of attribute interactions has been solved. Limitations: Not very efficient in searching.
* Hybrid <cit.>: Advantages: Efficient with respect to the mean number of rules, mean confidence, and mean size metrics. Limitations: Does not provide a higher mean support value.

§.§ RQ4 Which objectives are considered by the several existing multi-objective optimization NARM algorithms? Optimization problems are prevalent and important in scientific research. They can be categorized into two types based on the number of objective functions: single-objective and multi-objective optimization problems. In NARM, the most commonly used parameters are support and confidence, making many NARM algorithms single-objective optimization methods in which a single solution is selected based on the user's requirements. On the other hand, multi-objective optimization problems involve computing multiple objective functions simultaneously, which can conflict with each other. A solution that works well for one function may be ineffective for another. This makes finding a single solution that satisfies all objectives difficult; instead, a set of Pareto-optimal solutions is obtained that trades off between the competing objectives. Table <ref> lists the objectives considered in multi-objective optimization NARM studies, and Table <ref> provides the names of the algorithms that utilize these objectives. A detailed explanation of all these objectives is given via Eqs. <ref>–<ref>. Support: The number of records containing both the X and Y itemsets determines the rule's support count, and |D| is the total number of records in the dataset:

Support(X ⇒ Y) = |X ∪ Y| / |D|

Confidence: The confidence metric assesses the quality of a rule by relating the number of records in which the rule appears to the number of records containing its antecedent. Equation <ref> is used to compute the confidence of the rule X ⇒ Y. Note that support and confidence alone do not ensure that significant rules will be generated.

Confidence(X ⇒ Y) = |X ∪ Y| / |X|

Interestingness: The interestingness of a rule measures how surprising the rule is to users, rather than simply enumerating all possible rules.
The first component of Eq. (<ref>) relates to the probability of generating the rule given its antecedent part, the second relates to the probability of generating the rule given its consequent part, and the last component relates the rule's support to the overall dataset.

Interestingness = (Support(X ∪ Y) / Support(X)) · (Support(X ∪ Y) / Support(Y)) · (1 - Support(X ∪ Y) / |D|)

Comprehensibility: Comprehensibility measures the number of attributes included in the antecedent and consequent parts of the rule <cit.>. If the generated rules contain more attributes, the rules will be more difficult to comprehend. A rule is more comprehensible if the number of conditions in its antecedent part is smaller than that in its consequent part. The following expression measures the comprehensibility of an association rule:

Comprehensibility = log(1 + |Y|) / log(1 + |X ∪ Y|)

where |Y| and |X ∪ Y| represent the number of attributes in the consequent part and in both parts, respectively. Amplitude: The intervals of each attribute that comply with interesting rules should have small amplitudes. If two rules cover the same number of rows and attributes, the one with smaller intervals provides more information. Amplitude is a minimization function, whereas support, confidence, and comprehensibility are maximization functions <cit.>.

Amplitude of the Intervals = 1 - (1/m) ∑_{i=1}^{m} (u_i - l_i) / (max(A_i) - min(A_i))

where m is the number of attributes in the rule, [l_i, u_i] is the interval chosen for attribute A_i, and max(A_i) and min(A_i) are the bounds of its domain. Performance: Performance is the product of support and the certainty factor (CF). It enables mining accurate rules with a suitable trade-off between local and general rules. This measure takes values between 0 and 1, and the user may find a rule with a performance value close to 1 more useful. Accuracy: Accuracy represents the veracity of the rule <cit.>.

Accuracy(X ⇒ Y) = Support(X ⇒ Y) + Support(¬X ⇒ ¬Y)

Leverage: Leverage is the difference between the frequency with which the antecedent and the consequent are observed together and the frequency with which they would be expected to be observed together given their individual supports <cit.>. It represents the strength of the rule.

Leverage(X ⇒ Y) = Support(X ∪ Y) - Support(X) · Support(Y)

Gain: Gain is the difference between the confidence of the rule and the support of its consequent part <cit.>.

Gain(X ⇒ Y) = Confidence(X ⇒ Y) - Support(Y)

Cosine: The cosine measure considers both the pattern's interest and its significance <cit.>.

Cosine(X ⇒ Y) = Support(X ∪ Y) / √(Support(X) · Support(Y))

§.§ RQ5 What are the metrics to evaluate the NARM algorithms? This RQ aims to identify the commonly used evaluation metrics in NARM algorithms. As shown in Figure <ref>, the number of rules is the most commonly used metric, followed by support, confidence, and run time. Only a few algorithms use other metrics, such as Yule's Q measure (3%), leverage (3%), accuracy (3%), gain (2%), length of rule (2%), and the number of attributes per rule (3%). Interestingly, all methods use the number of rules, run time, support, and confidence as evaluation metrics, while only the optimization method employs all metrics (see Figure <ref>). Some papers on discretization methods use the number of frequent itemsets as a metric <cit.>. The discretization method primarily uses the run time metric, measured against different parameters such as the number of records or buckets <cit.>, minimum support and minimum confidence <cit.>, the number of buckets <cit.>, and the number of sparse points, dense regions, and attributes <cit.>.
On the other hand, the statistical method primarily uses run time <cit.> and a number of rules <cit.> as evaluation metrics. However, other methods also use minimum support, and confidence except for run time and number of rules <cit.>. Mean of interest of missing QARs and variance of interest of missing QARs, the maximum interest of missing QARs were also used for performance evaluation in <cit.>. Of the 34 papers (52%) that employ the number of rules as an evaluation metric, 24 papers (36%) belong to the optimization method. However, 24 publications (36%) in all have evaluated the NARM algorithms regarding the execution time, among which 15 (23%) articles are from the discretization method. §.§ RQ6 Which datasets are used for experiments by NARM methods? Different NARM methods may use different datasets for their experiments, depending on the method type and application domain. Table <ref> presents the datasets that were most commonly used by different NARM methods, excluding those that were used in only one or two articles. In total, we considered twenty-two datasets, including both real-world and synthetic ones. Fifteen of these datasets were sourced from the Bilkent University Function Approximation Repository (BUFA) <cit.>, while seven were from the University of California Irvine machine learning repository (UCI) <cit.>. Figure <ref> shows the datasets used by NARM methods. The Quake dataset was used more frequently than any other dataset, followed by Basketball, Bodyfat, Bolts, and Stock Price. Synthetic datasets were also commonly used. Table <ref> lists the datasets that were used specifically for the discretization method, which were mostly different from those used by other methods. In total, we considered seventeen datasets, including both real-world and synthetic ones. Most articles on the discretization method used various real-world datasets. As shown in Figure <ref>, most of these datasets were used in only one article. We also observed that the optimization method articles tended to use datasets from the BUFA repository, while the discretization method articles tended to use datasets from the UCI repository. §.§ RQ7 What are the challenges and potential future perspectives for the area of NARM? To address this research question, a manual identification of the existing research challenges in NARM was conducted. Additionally, the focus was placed on identifying future directions for NARM research. §.§.§ Research Challenges After a comprehensive analysis of various NARM methods in both static and dynamic settings, we have identified several issues that NARM needs to address. * Handling Skewed Data: NARM faces challenges when dealing with skewed data, where the data distribution is uneven. Finding associations between numerical variables in such datasets can be difficult and lead to biased results, as well as a high number of irrelevant rules. Moreover, skewed data can have a negative impact on the accuracy and reliability of the analysis, potentially resulting in biased conclusions. Calculation of support and confidence measures can be particularly affected, leading to inaccurate values and erroneous assessments of rule strength. Furthermore, processing skewed data can also impact the speed and efficiency of the algorithms, as they may need to handle a large number of extraneous rules. * Handling a Large Number of Rules: The main objective of mining numerical association rules is to discover relationships between numerical variables in large datasets. 
However, this often results in a vast number of association rules, which can make the process computationally expensive, time-consuming, and difficult to sift through to identify the most relevant or interesting rules. To address this challenge, several techniques have been developed to simplify the process and make it more manageable. Some of these techniques include data sampling, the use of efficient algorithms, parallel and distributed computing, dimensionality reduction, and pruning methods. By implementing these techniques, it becomes easier to extract useful association rules from large datasets, reduce the size of the dataset, simplify the mining process, and speed up computations. * Quality of Association Rules: Extracting high-quality rules is also a challenge in NARM due to the potential for redundancy, irrelevance, and conflicts in the rules. The large and complex nature of the datasets used in NARM can lead to a large number of rules, making it difficult to identify the most relevant ones. Additionally, skewed data can impact the reliability of support and confidence measures, further affecting the quality of the rules. The rules generated by NARM algorithms may also be challenging to interpret and understand. To address these issues, data pre-processing, the use of alternative metrics, the selection of appropriate algorithms, and the application of ensemble methods can be helpful in improving the quality of association rules. * Complex Relationship: Numerical data often contain intricate relationships, such as non-linear or multi-dimensional relationships, which can be difficult to represent and analyze using traditional ARM algorithms. This may result in inaccurate or incomplete rules, which can impact the reliability and accuracy of the analysis. To address this challenge, advanced algorithms such as decision trees, artificial neural networks, or support vector machines can be utilized in NARM. Ensemble approaches like gradient boosting or random forests can also be helpful in addressing complex relationships by combining the output of multiple algorithms to produce more accurate results. However, these techniques may increase computational complexity and require more data and computing resources to be effective. * Handling Outliers: Outliers are extreme values that differ significantly from the majority of values in the dataset and can impact the accuracy and reliability of the results of ARM. Outliers may indicate genuine data variances, or they could result from measurement errors or data input issues. Several methods can address this problem, including outlier detection, data cleaning, data transformation, and robust algorithms. These methods can help remove or mitigate the effect of outliers, ensuring that the mining process yields more accurate and reliable results. §.§.§ Future directions * Handling Big Data: Despite conducting a thorough SLR, we were unable to identify any studies that focus on retrieving numerical association rules from big data. However, the rise of big data will undoubtedly have a significant impact on the future of NARM. Developing more efficient algorithms that can handle vast amounts of numerical data will be essential as big data continues to become increasingly common. This will likely lead to the development of new algorithms specifically designed for big data that are optimized for scalability, speed, and accuracy. 
Additionally, advanced data cleanings and preprocessing techniques, such as outlier detection, imputation of missing values, and feature selection, will become increasingly important to ensure the quality of results. * Explainable AI: Improving the interpretability and explainability of NARM results is a critical research direction. Explainable AI <cit.> can enhance transparency and comprehension, which is essential for non-experts to validate the findings and ensure their alignment with the intended objectives. By revealing the underlying reasoning behind the results, Explainable AI can assist users in making better decisions and recognizing any inherent biases or limitations in the outcomes. Therefore, developing NARM techniques that provide transparent and comprehensible results is a vital area of research. * Hybrid Approach: A promising future direction in NARM is to leverage the strengths of various methods and techniques through hybrid approaches. Some studies, such as <cit.>, have attempted to combine different approaches to improve the results of NARM. Combining NARM with deep learning, integrating rule-based and distance-based approaches, and combining unsupervised and supervised learning are all hybrid approaches that show potential in this field. By integrating these techniques, the limitations of individual methods can be addressed, resulting in a more accurate and thorough analysis of the relationships between variables in numerical data. Hybrid approaches can lead to valuable insights and more reliable results. * Handling Streaming Data: To keep up with the increasing demand for real-time analysis, developing NARM algorithms that can handle streaming data and update association rules in near real-time is crucial. In applications where timely and accurate decisions are critical, streaming data enables real-time analysis of numerical data, which allows organizations to make informed decisions based on up-to-date information. The ability to analyze a larger volume of numerical data in real time will lead to more comprehensive and accurate results. Moreover, streaming data enables dynamic updates to the results of NARM as new data becomes available, providing a more accurate and comprehensive view over time. Therefore, developing NARM algorithms that can handle streaming data is an important future direction. * Incorporating Machine Learning Techniques: The integration of machine learning techniques, such as deep learning, into NARM, has the potential to revolutionize the field. With the ability to automatically detect patterns and relationships in the data, which may not be immediately apparent to human analysts, machine learning algorithms can significantly enhance the accuracy of the results. Moreover, this approach can reduce the time and effort required to identify such patterns in the data. The utilization of machine learning can also expand the scope of NARM applications across various industries, as these algorithms can handle complex data more efficiently, including high-dimensional data or data with non-linear relationships. * Privacy and Security: The importance of privacy and security in NARM is increasing, and it is imperative to protect and use data ethically. However, the existing studies in this SLR did not address these issues. To ensure data protection, sensitive information can be removed or masked using anonymization techniques while preserving the necessary data for ARM. 
Furthermore, to reduce the risk of unauthorized access, the data can be partitioned into smaller subsets, and access control methods can be developed to control who has access to the data and the association rules generated from it. Incorporating these privacy and security measures will safeguard the data and ensure its ethical use. §.§ RQ8 How to automate discretization of numerical attributes for NARM in a useful (natural) manner? Developing novel methods and techniques for NARM is a continuous area of research, and the discretization method serves as the foundation for NARM <cit.>. However, selecting the best partitions for discretizing complex real-world datasets still lacks a benchmark method. None of the discretization methods that we figured out with the review, see Sects. <ref> in <ref>, explicitly addresses human perception of partitions. Therefore, we proposed a novel discretization technique in our research <cit.>, which utilizes two measures for order-preserving partitioning of numerical factors: the Least Squared Ordinate-Directed Impact Measure (LSQM) and the Least Absolute-Difference Ordinate-Directed Impact Measure (LADM). These proposed measures offer a straightforward method for finding partitions of numerical attributes that reflect best the impact of one independent numerical attribute on a dependent numerical attribute. We thoroughly experimented with these measures and compared the outcomes with human perceptions of partitioning a numerical attribute <cit.>. To develop an automated measure for discretizing numerical attributes, understanding perceptual conception is crucial. To achieve this, we investigated the impact of data points' features on human perception when partitioning numerical attributes <cit.>. These efforts have contributed to the development of more accurate and efficient methods for NARM. § THREATS TO VALIDITY This section outlines potential threats to the validity of this SLR that might bias the outcomes of our in-depth investigation. The first threat pertains to defining the search string. We make no claims regarding the perfection of the search string used in the process. While we included all relevant search terms related to NARM, it is possible that the search terms may not have captured all relevant NARM-related work. To mitigate this risk, we included synonyms for “numerical association rule mining” and abbreviations such as “NARM” and “QARM” in the search term. The second threat pertains to the selection of digital libraries to search for articles. Although we searched five digital libraries for computer science, it is possible that additional sources may have produced different outcomes. To minimize bias, we manually searched Google Scholar and also looked through the list of references for the selected primary studies to identify significant publications. We are confident that the majority of published research on NARM is covered in this study. The third threat is the inclusion and exclusion of articles. To determine whether a paper should be included or excluded, we first reviewed the title, abstract, and keywords according to the inclusion and exclusion criteria. Then, we manually checked for references to ensure we did not miss any relevant papers. Additionally, we evaluated the selected studies using a quality assessment procedure. The fourth threat is about the time frame. Since the search process was conducted in early 2022, only articles published between 1996 and the beginning of 2022 were included. 
It is possible that we may have missed some articles published after the specified time frame. The final threat concerns article selection. The authors of this article may have been biased in their choice and categorization of publications that were included. Two authors chose articles based on their personal experiences. Although the final selection was made by a single author, all studies were verified by other authors to minimize bias. § DISCUSSION AND FINDING In this SLR, we analyzed a total of 68 studies on NARM published between 1996 and 2022. Our analysis revealed several significant findings and trends. Figure <ref> illustrates the distribution of selected articles by publication year, with a notable concentration in 2014 and strong NARM trends in 2019 and 2021. The distribution of primary studies by the method is presented in Figure <ref>. The majority of articles focus on the discretization method, followed by the evolution-based optimization method (Figure <ref>). Statistical and other techniques make up a smaller portion. We further detail the partition of articles by the method as a percentage in Figure <ref>. Table <ref> provides the number of articles published in different journals and conferences. Overall, 60% of the selected articles were published in journals and 40% in conferences. Figure <ref> illustrates the percentage of articles published in journals and conferences. ScienceDirect published 34% journal articles (Figure <ref>) however, IEEE published 41% conference articles (Figure <ref>), which is the highest number and Springer published 25% overall articles (Figure <ref>). We identified several methods for solving NARM, including the discretization method, optimization methods using evolutionary and bio-inspired approaches, the statistical method, and other methods. The discretization method was the most widely used, accounting for 39% of the total articles (Figure <ref>). Evolution-based and SI-based optimization methods covered 32% and 20% of the total articles, respectively (Figure <ref>). The optimization and discretization methods had a greater impact compared to the statistical and other methods. Each method also employs various approaches to address NARM problems. For example, the discretization method includes the clustering approach, which encompasses density-based and grid-based techniques. The partition approach, which involves converting continuous numerical data into discrete values by grouping them into intervals or bins, was found to be simple and easy to implement. However, the choice of interval or bin size can affect result accuracy, and it may not be suitable for datasets with a large number of variables. The optimization method encompasses a range of approaches, such as genetic algorithms, grammar-guided genetic programming, differential evolution, particle swarm optimization, gravity-based algorithms, swarm-based algorithms, Cauchy distribution, and hybrid-based methods. The statistical method is limited and utilizes various distribution scales, including mean, median, variance, and standard deviation. Some studies did not fit into any specific method and were categorized as miscellaneous other methods based on the information-theoretic approach, cognitive computing, and variable mesh. Many algorithms <cit.> based on the discretization method use apriori algorithm for generating association rules. However, the evolution and SI-based algorithms do not use the apriori algorithm. 
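To make the interplay between discretization and rule evaluation discussed above more concrete, the following sketch discretizes two numerical attributes into equal-width intervals and then computes the support and confidence of a single candidate rule. It is a minimal illustration only: the attribute names, the number of intervals, and the candidate rule are hypothetical and do not reproduce any of the surveyed algorithms, which search over many candidate intervals and rules rather than evaluating one.

```python
# Toy sketch of discretization-based numeric rule evaluation (illustrative only).
records = [
    {"age": 23, "income": 1800}, {"age": 35, "income": 3200},
    {"age": 41, "income": 4100}, {"age": 29, "income": 2500},
    {"age": 52, "income": 4800}, {"age": 38, "income": 3900},
]

def equal_width_bins(values, k):
    """Split the observed range of `values` into k equal-width intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [(lo + i * width, lo + (i + 1) * width) for i in range(k)]

def in_interval(x, interval):
    lo, hi = interval
    return lo <= x <= hi

age_bins = equal_width_bins([r["age"] for r in records], 3)
income_bins = equal_width_bins([r["income"] for r in records], 3)

# Candidate rule: age in the middle age interval => income in the middle income interval.
n_total = len(records)
n_antecedent = sum(in_interval(r["age"], age_bins[1]) for r in records)
n_rule = sum(
    in_interval(r["age"], age_bins[1]) and in_interval(r["income"], income_bins[1])
    for r in records
)

support = n_rule / n_total                                  # frequency of the whole rule
confidence = n_rule / n_antecedent if n_antecedent else 0.0  # P(consequent | antecedent)
print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```

In practice, the choice of interval boundaries is exactly what the discretization methods reviewed earlier try to optimize, since it directly determines which candidate rules reach the support and confidence thresholds.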
Additionally, certain algorithms under the discretization method have employed new measures, including density measure <cit.>, R-measure <cit.>, Certainty Factor <cit.>, and adjusted difference measure <cit.>. Figure <ref> demonstrates the visual presentation of NARM methods and their algorithms. In Table <ref>, we have summarized the advantages and limitations of each method. Our analysis found that most studies based on the discretization method used synthetic and real-world data to evaluate the effectiveness of NARM algorithms. However, evolutionary and SI-based algorithms mostly used common datasets, such as Quake, Basketball, Bolt, Bodyfat. Furthermore, the Iris dataset was the only one commonly used by discretization, optimization, and statistical methods-based algorithms. It is crucial to note that the choice of the dataset may impact the performance of the methods, and further studies are needed to evaluate the algorithms' practical applicability on real-world datasets. In our extensive review of the literature on NARM, we analyzed various metrics used to evaluate the performance and effectiveness of different algorithms and models. Our study focused on important metrics such as generated number of rules, run time, and value of support and confidence. Support measures the frequency of a specific item set or rules in the dataset, frequently used in conjunction with confidence, which quantifies how likely a certain outcome is given an antecedent. However, it is crucial to carefully interpret and evaluate the reliability and validity of the metrics used, as they can lead to spurious or irrelevant associations if not used properly. Our SLR sheds light on the most commonly used metrics in NARM, including those used by multi-objective algorithms. Multi-objective NARM algorithms consider different objectives simultaneously to generate a set of Pareto-optimal solutions that balance competing objectives. The choice of objective for multi-objective NARM algorithms depends on the research question and data characteristics. Some common objectives include maximizing support or confidence while minimizing the number of rules generated. As a rapidly evolving research area, NARM presents numerous potential future directions for research. These include exploring new scalable optimization algorithms, addressing Big Data challenges, incorporating explainable AI into the mining process, integrating machine learning techniques, addressing security concerns, and using hybrid approaches. By pursuing these directions, researchers can advance the state of the art in NARM and develop more effective and practical solutions for real-world applications. § CONCLUSION This article addressed a significant research gap in the field of NARM and provided readers with a comprehensive understanding of the state-of-the-art methodologies and developments in the domain. Moreover, this study serves as a foundation for future research and offers comprehensive insights for researchers working on NARM-related problems. To achieve this, a comprehensive SLR is conducted based on the guidelines set forth by Kitchenham and Charter. We conducted a detailed examination of a wide range of methods, algorithms, metrics, and datasets sourced from 1,140 scholarly articles spanning the period from the introduction of NARM in 1996 to 2022. Eventually, through a rigorous selection process, including several inclusion, exclusion and quality assessment criteria, 68 articles were selected for this SLR. 
By providing an exhaustive understanding of the existing NARM methods, highlighting their strengths and limitations, and identifying research challenges and future directions, we aim to stimulate innovative thinking and encourage the exploration of novel approaches in NARM. These perspectives include exploring new scalable optimization algorithms, analyzing NARM methods on big data, incorporating explainable AI and machine learning techniques into the mining process, addressing security concerns, and using hybrid approaches. Subsequently, based on the findings of this SLR, a novel discretization measure that explicitly addresses the human perception of partitions is presented to aid NARM. The ultimate goal of this review is to inspire and guide researchers in developing more effective and practical solutions for real-world NARM applications. This work has been partially conducted in the project “ICT programme”, which was supported by the European Union through the European Social Fund. § DECLARATIONS Conflict of interest: The authors declare that they have no conflict of interest.
http://arxiv.org/abs/2307.02503v1
20230704212651
Natural Language Generation and Understanding of Big Code for AI-Assisted Programming: A Review
[ "Man Fai Wong", "Shangxin Guo", "Ching Nam Hang", "Siu Wai Ho", "Chee Wei Tan" ]
cs.SE
[ "cs.SE", "cs.AI", "cs.CL" ]
This paper provides a comprehensive review of the literature concerning the utilization of Natural Language Processing (NLP) techniques, with a particular focus on transformer-based large language models (LLMs) trained using Big Code, within the domain of AI-assisted programming tasks. LLMs, augmented with software naturalness, have played a crucial role in facilitating AI-assisted programming applications, including code generation, code completion, code translation, code refinement, code summarization, defect detection, and clone detection. Notable examples of such applications include GitHub Copilot, powered by OpenAI's Codex, and DeepMind AlphaCode. This paper presents an overview of the major LLMs and their applications in downstream tasks related to AI-assisted programming. Furthermore, it explores the challenges and opportunities of incorporating NLP techniques with software naturalness in these applications, with a discussion on extending AI-assisted programming capabilities to Apple's Xcode for mobile software development, empowering developers with advanced coding assistance and streamlining the software development process. § INTRODUCTION The advent of Big Code has become increasingly relevant in today's software development landscape as the size and complexity of software systems continue to grow <cit.>. Big Code refers to the vast collection of online software artifacts such as source code repositories, bug databases, and code snippets. It represents a wealth of knowledge and experience that researchers can draw upon to improve the quality and efficiency of their own projects. The goal of Big Code is to build tools and techniques that can assist software engineers in analyzing, understanding, and making predictions about large codebases in a scalable and efficient manner. Big Code also has the potential to revolutionize artificial intelligence (AI) development by utilizing Big Code data. The development of statistical programming systems involves the utilization of advanced programming languages, powerful machine learning techniques such as large language models (LLMs), and natural language processing (NLP) techniques based on the software naturalness hypothesis <cit.>. This hypothesis posits that computer programs written in diverse programming languages can be comprehended and manipulated similarly to NLP's treatment of human natural languages. By employing this combination of tools, probabilistic models of extensive codebases can be constructed. These systems query a probabilistic model and calculate the most probable predictions to solve a specific challenge <cit.>, which are then presented to the developer. In other words, the programming language is regarded as the natural language for the NLP techniques in this study. There are several crucial areas of fundamental research focused on advancing probabilistic models of “Big Code” using statistical and machine learning methodologies. By considering source code as a series of tokens and leveraging the inherent patterns and structures within vast code repositories, NLP techniques can be developed to enhance AI-assisted programming tasks, including code generation, code completion, code refinement, code summarization, defect detection, and clone detection.
AI-assisted programming can enable software engineers to work more efficiently and effectively <cit.>, especially in situations where complex algorithms are being used that involve large amounts of code (i.e., Big Code regime). It also strikes a balance between productivity and ensuring safety, security, and reliability within the programming development environment <cit.>. In fact, this can even lead to the development of AI-based predictive analysis that allows human developers to more easily interact with code using natural language commands and queries as part of the software development process <cit.>. AI-based predictive analysis <cit.> can also more accurately anticipate potential issues throughout the software development life cycle and flag critical incidents <cit.> before they occur <cit.>. Several recent reviews have explored specific topics related to LLMs, such as fairness and bias <cit.>, interpretability <cit.>, explainability <cit.>, and privacy preservation <cit.>. However, this review focuses primarily on language models with software naturalness. In Table <ref>, a detailed comparison of other reviews that have examined related topics is provided. This review also delves into the analysis of the publicly available Big Code dataset, which is designed to assist programming with AI. This review addresses the process of using language models for assessing software naturalness and examines the concept of evaluating language models using entropy. Additionally, the latest developments in AI-assisted programming using transformer-based LLMs trained on Big Code are explored, and both the generation and comprehension aspects are discussed. The review concludes with the open challenges and opportunities in AI-assisted programming. This review paper highlights the unique contributions of this review in comparison to existing reviews. Reviews have emphasized the significance of AI-assisted programming, leading to significant advancements in this critical field of study. However, the essential components of AI-assisted programming have been presented separately, resulting in a fragmented understanding of the topic. Despite this, these independent studies have created an opportunity to view AI-assisted programming from a more comprehensive perspective. In light of this, our survey aims to provide a more structured approach to framing programming that extends beyond the examination of individual research topics. By doing so, this review paper hopes to offer a more comprehensive understanding of this field, highlighting the interdependencies between different areas of research. The remainder of this review article is structured as follows. Section <ref> provides an overview of the background knowledge in Big Code and software naturalness, covering topics such as the available dataset, tokenization process, existing language models, and the measurement of language models using entropy. Section <ref> explores recent applications of LLMs trained with Big Code in AI-assisted programming tasks. Section <ref> discusses the potential challenges and opportunities associated with LLMs in this context. Finally, Section <ref> concludes the study and outlines possible directions for future work in this field. § BACKGROUND §.§ Main Big Code Dataset Researchers have successively released a large amount of Big Code to train LLMs. Most datasets used to train LLMs can be applied into different tasks such as code generation and code summarization. 
LLMs use unsupervised learning and require large amounts of high-quality and diverse data to achieve high accuracy and generalization in their predictions. Access to large-scale, high-quality, diverse, and representative datasets is essential for developing high-performing LLMs on software naturalness. The datasets found in the literature are described in Table <ref>. §.§ Tokenization Figure <ref> illustrates the pipeline of language models on software naturalness. Similar to other neural networks and raw text, language models cannot process source code directly, so the first step of the standard pipeline is to convert the code inputs into numbers of which the model can make sense. To do this, a tokenizer can be used to split the input into code syntax keyword, variables, or symbols (similar to punctuation) that are called tokens. Each token is mapped to an integer in the next step. These tokens typically correspond to words, punctuation marks, or other meaningful elements of the text. Tokenization is an important step in many NLP tasks, as it allows machine learning algorithms to process and analyze text in a more efficient and meaningful way. Some popular tokenizers are available to be used directly such as Byte-Pair Encoding (BPE) <cit.> and RoBERTa <cit.>. In the tokenization process, each token is assigned a unique identifier or index which can be used to represent the token in a numerical format that can be understood by machine learning models. Different tokenization strategies may be used depending on the specific task at hand, such as splitting text into words, phrases, or even individual characters. One common challenge in tokenization is dealing with ambiguity or variability in the text. For example, words may have different meanings depending on the context in which they appear, or may be misspelled or abbreviated in unpredictable ways. There are various techniques that can be used to address these challenges, such as using contextual information or statistical models to help disambiguate the text. §.§ Language Models on Software Naturalness In this section, some of the leading transformer-based language models are presented. Figure <ref> displays the timeline of the evolution of LLMs since 2018. Table <ref> provides a summary of transformer-based language models used in AI-assisted programming. Transformer-based models are a type of neural network architecture used in NLP and other machine learning tasks. The transformer maintains a similar architecture as the encoder–decoder architecture shown in Figure <ref>, but the models use a self-attention mechanism to weigh the importance of different parts of the input sequence, allowing them to capture dependencies between all parts of the sequence, as shown in Figure <ref>. They can be parallelized more easily than previous models, resulting in faster training and lower inference times. The transformer model is one of the most well-known transformer-based models and has been used in various NLP tasks. Recently, large transformer-based models such as GPT-4 <cit.> and LLaMA <cit.> have achieved state-of-the-art performance in many benchmarks. The transformer's ability to capture long-range dependencies is heavily reliant on dot-product attention with softmax normalization, leading to a quadratic space and time complexity in relation to sequence length, which can be a hindrance for longer inputs. This study focuses on transformer-based models for AI-assisted programming tasks. 
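As a concrete illustration of the scaled dot-product attention and its quadratic cost just mentioned, the following is a minimal single-head sketch in NumPy. The shapes, the random inputs, and the optional causal mask (the decoder-style constraint discussed next) are illustrative rather than a faithful reproduction of any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)     # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=False):
    """Single-head attention; Q, K, V all have shape (L, d)."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)               # (L, L) matrix: source of the quadratic cost in L
    if causal:
        # Decoder-style mask: position i may only attend to positions <= i.
        mask = np.triu(np.ones((L, L), dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    weights = softmax(scores, axis=-1)          # attention weights over the sequence
    return weights @ V                          # (L, d) contextualized outputs

rng = np.random.default_rng(0)
L, d = 6, 8                                     # toy sequence length and head width
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V, causal=True)
print(out.shape)                                # (6, 8)
```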
Encoder–decoder models <cit.> refer to sequence-to-sequence models, utilizing both components of the transformer architecture <cit.>. The encoder's attention layers can access all words in the input sentence at each stage, while the decoder's attention layers can only access the words preceding a given word in the input. Sequence-to-sequence models such as BART <cit.>, T5 (Text-to-Text Transfer Transformer) <cit.>, and TreeGen <cit.> are well-suited for tasks that involve generating new text based on an input, such as code generation, code refinement, defect detection, and clone detection, for AI-assisted programming tasks. Encoder-only models, also known as autoencoders, use only an encoder network to transform input data into a compressed representation. They are commonly used in unsupervised learning tasks such as dimensionality reduction and anomaly detection in NLP tasks. In the past, code embedding approaches could be utilized to obtain the representation from the input data such as Neural Network Language Model <cit.>, Code2Vec <cit.>, ELMo <cit.>, TextRank <cit.>, and GGNN <cit.>. For AI-assisted programming tasks, they are used for understanding tasks to learn useful representations with the BERT <cit.> and RoBERTa <cit.> of data in an unsupervised manner, which can be used as features for downstream tasks such as code translation and code summarization. Decoder-only models, also known as autoregressive models, are a type of neural network architecture used in natural language processing tasks such as GPT-2 <cit.>, GPT-3 <cit.>, GPT-J <cit.>, Reformer <cit.>, and GPT-Neo <cit.>, which use the decoder to predict the next token output given all previous tokens. They rely solely on a decoder network to generate output text, predicting the probability distribution of the next token given the previously generated tokens. Although they are simpler and more efficient than encoder–decoder models, they may not be as effective in tasks requiring a deeper understanding of the input–output sequence relationship. Nevertheless, they are still widely used in various natural language processing tasks for AI-assisted programming, such as code generation and code completion, and have demonstrated impressive performance in several benchmarks. §.§ Measurement of Language Models with Entropy Language models on software naturalness are trained on large code corpora and used to predict the next token in the code given its context. Mathematically, assuming a set of program tokens 𝕋 and a set of program sequences 𝕊, the set of possible systems is S ⊂𝕊. A language model is a probability distribution p(.) over systems s ∈ S: ∀ s ∈ S [0 < p(s) <1] ∑_s∈ S p(s) = 1. An estimated language model known as a pre-trained language model <cit.> is created by computing a maximum-likelihood estimation (MLE) of the parameter of a suitably chosen parametric distribution p(·) given a corpus C of programs C ⊆ S. This process is described in Section <ref>. The tokenization of the code is defined by the programming language to estimate the probability distribution of code tokens given the preceding context. It uses this information to make predictions or decisions in the software engineering tasks. The models are trained to predict the probability distribution of words in a sequence, based on the previous words in that sequence <cit.>. 
The language model is typically constructed using N-gram models, which have a long history in statistical language modeling and are widely used for estimating the probability distribution of words or characters in a text sequence <cit.>. This was the standard method before the development of word vectors and distributed representations of language using Recurrent Neural Networks (RNN) <cit.>. Given a system s with a sequence of tokens {W_1,W_2,… W_n}, N-gram models can estimate the likelihood of tokens following other tokens. As a result, the model can estimate the probability of s by multiplying a series of conditional probabilities: p(s) = p(W_1)p(W_2|W_1)p(W_3|W_1 W_2)… p(W_n|W_1… W_n-1). An N-gram model captures the co-occurrence patterns of words or characters in the text. Mathematically, an N-gram model can be represented as a set of N-grams, each represented as a tuple of n items and their associated probabilities. The probability of an N-gram can be estimated by the MLE based on the frequency of occurrence of the N-gram in a given training corpus. This also assumes a Markov property, i.e., token occurrences are influenced only by a limited prefix of length n-1. Thus, for example, in a 3-gram (n=3) model: p(W_i|W_1 … W_i-1) ≅ p(W_i | W_i-2 W_i-1). The probability of a word W_i given its preceding word W_i-1 can be estimated: p(W_i | W_i-1) = count(W_i-1, W_i) / count(W_i-1), where count(W_i-1, W_i) is the number of times the bigram (W_i-1, W_i) appears in the training corpus, and count(W_i-1) is the number of times the word W_i-1 appears in the training corpus. These models have achieved great success and have been a driving force behind many advancements in NLP. The performance of the technique depends on the quality of the language model and the ability of the model to accurately reflect the patterns and structures of the target data. Therefore, much research effort has been devoted to improving the quality of language models for these tasks, including developing better training algorithms, larger training corpora, and better evaluation metrics. A representative corpus of repetitive and highly predictable programs is utilized to capture regularities within the corpus in order to evaluate the naturalness of software language models. By estimating the language model from this representative corpus, it can predict the contents of new programs with high confidence, thereby minimizing the surprise associated with the new program. In NLP, this idea is often measured using perplexity or cross-entropy (its log-transformed version). Given a program p = {w_1,w_2,…,w_n} of length n and a language model Θ, let p_Θ denote the probability the model assigns to the program; the cross-entropy H_Θ(p) can then be measured: H_Θ(p) = - 1/n log p_Θ (w_1,w_2,…, w_n) and a formulation can be derived from Equation (<ref>): H_Θ(p) = - 1/n ∑^n_i=1 log p_Θ (w_i|w_1,w_2,…,w_i-1). The entropy rate of a language model is utilized to assess the naturalness of the generated text <cit.>. It can be computed by taking the negative logarithm of the probability of each generated token. An effective model should have low entropy for the majority of programs, assigning higher probabilities (i.e., values closer to 1) to most words in the program, thereby resulting in lower absolute log values. In practice, this involves using techniques such as maximum likelihood estimation or neural networks to estimate the parameters.
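The following sketch illustrates the bigram case of the estimates above on a toy corpus, together with the per-token cross-entropy H_Θ(p). It is deliberately simplified: whitespace splitting stands in for a real code tokenizer, and add-one smoothing is used only so that unseen tokens do not receive zero probability; neither choice reflects the actual pipelines of the works cited here.

```python
from collections import Counter
import math

# Toy corpus of "programs", tokenized by whitespace for simplicity.
corpus = [
    "def add ( a , b ) : return a + b",
    "def sub ( a , b ) : return a - b",
    "def mul ( a , b ) : return a * b",
]

unigrams, bigrams = Counter(), Counter()
for program in corpus:
    tokens = ["<s>"] + program.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def p_bigram(w, prev, alpha=1.0):
    """MLE estimate of p(w | prev), with add-alpha smoothing as a simplification."""
    vocab = len(unigrams)
    return (bigrams[(prev, w)] + alpha) / (unigrams[prev] + alpha * vocab)

def cross_entropy(program):
    """H(p) = -(1/n) * sum_i log2 p(w_i | w_{i-1}), in bits per token."""
    tokens = ["<s>"] + program.split()
    log_p = sum(math.log2(p_bigram(w, prev)) for prev, w in zip(tokens, tokens[1:]))
    return -log_p / (len(tokens) - 1)

print(cross_entropy("def add ( a , b ) : return a + b"))   # low entropy: seen verbatim
print(cross_entropy("while x : x = x - 1"))                # higher entropy: unfamiliar code
```

A program that closely resembles the training corpus receives a lower cross-entropy (less surprise) than an unfamiliar one, which is exactly the sense in which corpus-based models quantify software naturalness.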
The final model can then be used to make predictions by calculating the probability of a given sequence of words. Estimating entropy from empirical data has been an interesting area in information theory for AI-assisted programming <cit.>. For example, a method for estimating entropy with a confidence interval was proposed in <cit.>. Another method for estimating the entropy and redundancy of a language was provided in <cit.>. A model weighting principle based on the minimum description length principle was applied in <cit.> to develop a direct estimator of the entropy rate. The estimator can be used to estimate a Bayesian confidence interval for the entropy rate using Monte Carlo techniques. Techniques for estimating the entropy rate have been reviewed in <cit.>. Analytical results of estimators for entropy and mutual information can be found in <cit.>. § AI-ASSISTED PROGRAMMING TASKS There are two main categories of AI-assisted programming tasks related to software naturalness: generation and understanding. The former includes code generation, code completion, code translation, code refinement, and code summarization. The latter is concerned with understanding code and includes defect detection and clone detection. Researchers have made significant efforts to enhance the quality of language models for these tasks by improving pre-training schemes, increasing the size of training corpora, developing better fine-tuning datasets, and using improved evaluation metrics. The frameworks and tools developed for these specific tasks are discussed in this section, and a summary of all the frameworks reviewed is presented in Table <ref>. §.§ Code Generation Program synthesis, also known as source code generation, is the process of automatically generating source code in a programming language based on user-specified constraints <cit.>. This study focuses on text-to-code generation for code generation, while code-to-code generation is referred to as code translation and is discussed in its own subsection below. The history of code generation dates back to the use of theorem provers to construct a proof of user-provided specifications and extract corresponding logical programs <cit.>. With the increasing popularity of deep learning methods, neural methods, including Long Short–Term Memory (LSTM) <cit.> and Recursive–Reverse–Recursive Neural Network <cit.>, have been adopted to generate output programs with specific inductive biases given sufficient program samples. More recently, transformer-based LLMs such as GPT-3 <cit.> and T5 <cit.> have shown impressive performance in code generation tasks by leveraging contextual representations learned from large amounts of code, as well as public code sources and natural language data, to improve program synthesis. These approaches incorporate systematic pre-training and fine-tuning tasks to develop a deep understanding of code structure and meaning, making them well-suited for software development tasks. To evaluate models for code generation, several metrics are available: pass@k <cit.>, which measures the percentage of problems solved using k generated programs per problem; BLEU-4 <cit.> and exact-match accuracy on program synthesis benchmarks such as APPS <cit.> and MBPP <cit.>; and CodeBLEU <cit.>, which considers both syntactic and semantic matches based on code structure in addition to N-gram matches.
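As an illustration of the pass@k metric mentioned above, the sketch below implements the unbiased estimator commonly used in the code-generation literature, in which n candidate programs are sampled per problem and c of them pass the unit tests; the per-problem (n, c) values shown are made up for the example.

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased per-problem estimate of pass@k.

    n: number of samples generated, c: number of samples passing the tests, k <= n.
    """
    if n - c < k:
        # Fewer than k failing samples: any set of k samples contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical (n, c) outcomes for four benchmark problems.
results = [(20, 3), (20, 0), (20, 12), (20, 1)]
for k in (1, 5, 10):
    score = sum(pass_at_k(n, c, k) for n, c in results) / len(results)
    print(f"pass@{k} = {score:.3f}")
```

Averaging the per-problem estimates, as in the loop above, gives the benchmark-level pass@k score reported in evaluations.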
§.§ Code Completion Code completion, also known as autocompletion, is a software development feature that suggests possible code completions as a programmer types <cit.>. Its goal is to save time and reduce errors by providing suggestions for method names, variable names, and even entire code snippets <cit.>. Previous research on code completion started with statistical language models <cit.>. Later, LSTM-based deep learning approaches were applied to the task, aiming to learn the semantic information of source code without considering its syntactic structure <cit.>. To address the limitations of LSTM-based language models, transformer architecture was introduced for code completion. Normally, the language models for code completion are trained using a causal language model that predicts the unknown token after a sequence of known tokens. Recent work on code completion using LLMs <cit.> has shown impressive performance on benchmarks, such as CodeXGLUE <cit.>, compared to existing statistical language models and deep learning approaches. §.§ Code Translation Code translation is the process of converting code from one programming language to another, with the goal of migrating legacy software. While theoretically possible, building a code translator is challenging due to differences in syntax and platform APIs between programming languages. Most current translation tools are rule-based, requiring handcrafted rewrite rules applied to an abstract syntax tree (AST) derived from the input source code. However, creating such tools demands significant expertise in both the source and target languages. Recent studies have explored using statistical machine translation <cit.> as well as deep learning approaches <cit.> for programming language translation. Quality evaluation for generated functions often uses the BLEU score, while the exact match is used to compare generated output with reference ground truth. §.§ Code Refinement Code refinement, which can be referred to as automated program repair (APR), is the process of automatically fixing bugs or vulnerabilities by converting a buggy function into a correct one. Deep learning models have a strong learning capability that enables them to learn various patterns for transforming buggy programs into patched ones from large code corpora. Many studies <cit.> have demonstrated the superior performance of deep learning-based techniques over traditional template-based <cit.>, heuristic-based <cit.>, and constraint-based <cit.> APR techniques. LLM is used to generate plausible patches or modifications to a given incorrect code. The model can be trained on a large corpus of correct code to learn the patterns and structures of correct code. When LLMs are given a faulty code, the model can then generate suggestions for how to correct it as one of the downstream tasks. The LLMs for code refinement can be evaluated by CodeXGLUE <cit.> or HumanEval <cit.> as the abstracted codes or the classical APR benchmarks such as Defects4J <cit.> and QuixBugs <cit.> as real-world codes, but the understanding and generation of concrete variable and function names is still mandatory and challenging <cit.>. §.§ Code Summarization Code summarization is a technique used to generate English descriptions of code snippets at the function level, which can then be used to generate documentation. Typically, this involves taking the source code as input and producing a natural language summary as output. 
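To make the causal language-modelling setup used for code completion above concrete, the following sketch greedily extends a code prefix with the Hugging Face transformers API. The generic gpt2 checkpoint is used here only because it is readily available; in practice a code-specific model such as those surveyed above would be substituted, and the prefix and decoding settings are illustrative.

```python
# Sketch of next-token code completion with a causal language model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "def fibonacci(n):\n    if n < 2:\n        return"
inputs = tokenizer(prefix, return_tensors="pt")

# Greedy decoding: repeatedly append the most probable next token.
output_ids = model.generate(
    **inputs,
    max_new_tokens=16,
    do_sample=False,                        # deterministic, greedy completion
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```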
In AI-assisted programming tools, code summarization can be used to analyze code and identify optimization opportunities, such as using a binary Euclid algorithm instead of a traditional modular arithmetic-based algorithm, which can significantly improve software performance. In recent years, there has been promising research into the automatic generation of natural language descriptions of programs, with studies such as <cit.> making notable progress in this area. The rise of deep learning, coupled with the abundance of data from open-source repositories, has made automatic code summarization an area of interest for researchers. Many of the neural approaches <cit.> use a sequence-to-sequence approach to generate source code summaries, with some models converting the source code into various types of representations, such as token-based <cit.>, tree-based <cit.>, and graph-based <cit.>, before passing it through language models. §.§ Defect Detection As software systems increase in complexity, it becomes more challenging to identify errors. Defect detection aims to enhance software reliability by predicting whether a piece of code is susceptible to bugs, thereby revealing previously unknown errors. Rule-based approaches have been defined in existing defect detection frameworks by inferring likely programming rules from sources such as code and version histories. Statistical language models based on N-gram language models have also been widely used in this area <cit.>. More recently, many deep learning-based solutions <cit.> have been proposed to bridge the gap by suggesting different feature sets from which the detection framework can learn, attempting to imitate how a practitioner looks for vulnerabilities. However, LLMs, such as CodeBERT <cit.>, have recently emerged as a promising technique in this field due to their ability to understand code structure. These models can be trained on a large corpus of error-free code and then used, as a binary classification task, to identify patterns and structures in source code that deviate from those learned from the error-free code <cit.>. To evaluate the model predictions, accuracy, precision, recall, and F1 scores can be used. §.§ Clone Detection Clone detection involves identifying identical or similar code fragments, known as clones, within or across software systems. The goal of clone detection is to measure the similarity between two code snippets and determine if they have the same functionality. Clones can be classified into four types <cit.>, with types 1–3 being syntactic clones that differ in minor ways, while type 4 clones, known as semantic clones, are difficult to detect since they have different syntax but the same semantics and, thus, require manual validation. With the increasing amount of source code, large-scale and automatic clone detection has become essential. Several tools have been developed to perform clone detection <cit.>, using techniques such as comparison of the AST, tokens, or source code text (a minimal token-based sketch is given below). Notable clone detection datasets include BigCloneBench <cit.>, which contains Java code snippets. § CHALLENGES AND OPPORTUNITIES §.§ Computational Expense Training an LLM with millions of parameters can be computationally expensive. This is because training involves processing vast amounts of code data and optimizing the model's parameters to generate accurate predictions <cit.>. Overall, computational expense can be driven by the need for training data and for computing resources such as memory, GPUs, or even electricity.
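Returning briefly to the token-based clone detection sketch promised above: the following minimal example lexes two snippets into token sets and flags them as a likely near-clone when their Jaccard overlap exceeds a threshold. The crude regular-expression lexer and the 0.7 threshold are arbitrary illustrative choices; production detectors rely on ASTs, learned representations, or benchmark-calibrated thresholds.

```python
import re

def code_tokens(src):
    """Crude lexer: identifiers, numbers, and single non-space characters."""
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", src)

def jaccard_similarity(a, b):
    ta, tb = set(code_tokens(a)), set(code_tokens(b))
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

snippet1 = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
snippet2 = "def total(values):\n    s = 0\n    for v in values:\n        s += v\n    return s"

sim = jaccard_similarity(snippet1, snippet2)
print(f"similarity = {sim:.2f}")
if sim > 0.7:   # threshold chosen arbitrarily for illustration
    print("flagged as a likely (near-)clone")
```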
At the same time, the quality of the training data used to train a language model is also crucial, as poor quality data or bias in the data can lead to incorrect predictions. LLMs require massive computational resources to train, fine-tune, and run, which can be a hindrance for organizations with limited hardware resources <cit.>. To reduce the computational expense of training LLMs, researchers and developers can employ various techniques, such as training on subsets of the data <cit.>, optimizing the hyperparameters <cit.>, and leveraging transfer learning to reuse the knowledge learned from previous tasks. These techniques can help to speed up the training process and reduce the amount of required computing resources. Instead of training the LLMs continuously, some works focus on using prompt-learning <cit.> and human feedback <cit.> to improve performance of the LLMs. In prompt-based learning, the prompt serves as a guide or prompt to the language model, providing it with relevant context and guidance to generate an output that is appropriate for a particular task. The prompt can be a simple sentence or a full paragraph, depending on the complexity of the task and the amount of information needed to guide the LLMs. One of the main advantages of prompt-based learning is its flexibility and ease of use. It allows users to quickly fine-tune pre-trained language models for specific tasks without requiring a large amount of task-specific data. Additionally, prompt-based learning can be used in a semi-supervised or unsupervised manner, where the prompt provides a small amount of supervision to the language model, further reducing the necessary amount of task-specific data. §.§ Quality Measurement Leveraging LLMs in AI-assisted programming tasks has enormous potential to improve software development efficiency and reduce the time and effort required to write code manually. However, several challenges need to be addressed to ensure the performance and effectiveness of LLMs. One of the primary concerns is the quality of the generated code or documentation <cit.>, which can be impacted by the accuracy and robustness of the LLMs. While automated code generation can save time, it can also lead to poor-quality code that is difficult to maintain and may contain bugs or security vulnerabilities <cit.>. Therefore, it is critical to ensure that the generated code meets the desired specifications and adheres to coding standards and best practices <cit.>. Another significant challenge is integrating the generated code into existing software systems seamlessly <cit.>, ensuring that it can be maintained and updated easily over time. To address these challenges and improve the reliability and quality of LLMs in programming tasks, researchers and developers are exploring various approaches and techniques. These include incorporating advanced machine learning and optimization algorithms <cit.> and developing new tools and frameworks for integrating generated code into existing software systems. Some researchers have attempted to use Variational Autoencoders <cit.> or Generative Adversarial Networks <cit.> to generate synthetic data that can be used for training LLMs, but they must ensure that the performance of these generative models is robust and reliable to ensure the quality of the synthetic data. Meanwhile, it is possible to adopt active learning <cit.> to improve the performance of LLMs while requiring fewer labeled training instances. 
This approach works by allowing the model to choose the data from which it learns <cit.>, which enables it to compute the statistically optimal way to select training data while avoiding poor-quality data, such as buggy codes, that can negatively impact model performance. One of the significant benefits of incorporating active learning into the training process is that it can help reduce the time and effort required to label large amounts of data manually, making it a cost-effective solution for many applications <cit.>. By selecting the most informative data points for labeling, active learning can improve the accuracy and robustness of machine learning models, even when working with limited labeled data. The integration of active learning with LLMs remains an open question in this field of study. While active learning has shown promise in improving the performance of machine learning models, including LLMs, the application of this technique to LLMs has not yet been fully explored. §.§ Software Security Software security is a critical concern in the development of the use of LLMs <cit.>. While LLMs have shown significant promise in a wide range of code-related tasks, they also introduce unique security challenges that must be addressed to ensure safety and security. One of the primary security concerns when using LLMs is the potential for these models to introduce vulnerabilities into the code <cit.>. For example, poorly designed LLMs may generate code that is prone to buffer overflow or SQL injection attacks. Another critical concern is the possibility of LLMs being manipulated or exploited to generate malicious code that can be used for cyberattacks. For instance, an attacker may use a poisoned dataset to manipulate an LLM, resulting in the generation of malicious code that can be used to exploit vulnerabilities in the software system. Also, users without programming knowledge can generate programs with a Trojan horse phishing attack. When using LLMs for AI-assisted programming tasks, it is essential to address software security to ensure that the generated codes or documents are secure and free from vulnerabilities, as well as to ensure the integrity of the training data used to train the LLMs. Code validation and testing involve thorough validation and testing of the generated code before integrating it with real-world systems to identify and fix any security issues. Data sanitization and validation ensure that the training data are free from malicious code or sources of bias. §.§ Software Piracy Software piracy refers to the unauthorized copying, distribution, or use of copyrighted software without the permission of the software's owner <cit.>. This can take many forms, including making copies of software for personal or commercial use, distributing software through unauthorized channels, or using software beyond the terms of the licensing agreement. As the field of natural language generation and statistical machine learning for Big Code and AI-assisted programming continues to grow, concerns over software piracy have arisen. The use of open source code repositories for training AI models has led to lawsuits, with companies such as Microsoft and OpenAI accused of software piracy. The issue at hand is whether the use of open source code for training LLMs violates copyright laws. While the legal implications of this issue are still being debated, it is important to consider the ethical implications as well. 
The use of copyrighted code without permission raises questions about fairness and equity in the development of AI-assisted programming tools <cit.>. Also, the use of user data to train these models raises concerns over privacy and data protection. As the field continues to evolve, it will be important for researchers and developers to consider these issues and work towards finding solutions that balance the benefits of AI-assisted programming with the need for ethical and legal compliance. This may include clarifying rules around secondary uses of copyrighted code, as well as developing more transparent and opt-in data policies for training AI models. To address software piracy, one approach is to ensure that the training data used for the development of these models are legally obtained and do not violate any copyrights or intellectual property rights according to the U.S. Copyright Office <cit.>. Organizations can also establish clear policies and guidelines for the ethical and legal use of these technologies. For instance, developers can be required to obtain permission or licenses before using proprietary code or software in their work. Machine learning algorithms can also be trained to identify and prevent the unauthorized distribution of copyrighted material and pirated code or software. §.§ Integration with Existing Tools The opportunity to integrate tools and LLMs enhances and streamlines the software development process. By incorporating LLMs into integrated tools as cloud virtual service providers <cit.>, developers can leverage the power of NLP to automate repetitive tasks, improve code quality and readability, and increase efficiency in software development. This integration can enable developers to experiment prompt engineering with public LLMs under data compliance, data security, data governance and best practices directly from their own development environment. Copilot for Xcode <cit.> serves as a real-world example of an application integrated with LLMs, allowing Apple developers to utilize GitHub Copilot <cit.> for code suggestions and ChatGPT <cit.> for code explanation and mutation using natural language. The connection between Xcode and Copilot is achieved by establishing communication between the Xcode source editor extension and the Copilot server, presenting suggestions in a user interface not handled by Xcode. To obtain additional information beyond the source code and file type provided by Xcode, the app utilizes the Accessibility API, which represents objects in a user interface and exposes information about each object within the application. Furthermore, for in-place code editing, the app employs the use of Apple Scripts, a scripting language in macOS for task automation, to programmatically execute extension commands and emulate menu bar interactions. The details to integrate the Copilot with Xcode are illustrated in Figure <ref>. With these workarounds, Copilot for Xcode successfully enables Xcode to support GitHub Copilot, as shown in Figure <ref>. In addition, it facilitates the integration of an external chat panel that can access and read the user's code. This chat panel serves as a connection point to leverage LLMs for functionalities such as code explanation and mutation using natural language. The chat panel can also be extended with plugins to offer additional features, including support for natural language terminal commands. 
The incorporation of Copilot into Xcode signifies a notable advancement in AI-powered programming for iOS/macOS, expanding the capabilities of language models to widely-used mobile software development tools. § CONCLUSIONS This review paper explores the applications of LLMs in software naturalness to gain a better understanding of software development processes and develop applications that cater to the human aspects of software development. Firstly, it provides a background on Big Code and software naturalness, covering topics such as available datasets, tokenization processes, existing language models, and entropy-based measurements. Secondly, it summarizes recent applications of LLMs trained with Big Code in various tasks, including code generation, code completion, code translation, code refinement, code summarization, defect detection, and clone detection. Lastly, it discusses the potential challenges and opportunities associated with LLMs in the context of AI-assisted programming tasks. Analyzing Big Code repositories and identifying patterns of naturalness can lead to more effective methods for AI-assisted programming. This can ultimately improve the quality and productivity of AI-assisted programming, making it easier for programmers to create high-quality software with fewer errors in less time. In addition to the challenges faced by LLMs for codes mentioned in this review paper, there are significant opportunities for future work in the field. These opportunities include exploring the development of LLMs that prioritize transparency and interpretability, enabling clearer explanations for code suggestions and bug fixing. Emphasizing the design of AI-assisted programming applications that prioritize fairness, transparency, and privacy is crucial, as current research tends to focus primarily on performance and efficiency. By pursuing these avenues, AI-assisted programming applications can be advanced to be more user-centric, ethically responsible, and adaptable, ultimately leading to more efficient and effective § ACKNOWLEDGEMENT This work is supported in part by the Ministry of Education, Singapore, under its Academic Research Fund (No. 022307 and AcRF RG91/22) and Google Faculty Award.
http://arxiv.org/abs/2307.02722v1
20230706021115
Exploring the Angular Momentum - Atomic Gas Content Connection with EAGLE and IllustrisTNG
[ "Jennifer A. Hardwick", "Luca Cortese", "Danail Obreschkow", "Claudia Lagos", "Adam R. H. Stevens", "Barbara Catinella", "Lilian Garrett-Smithson" ]
astro-ph.GA
[ "astro-ph.GA" ]
We use the EAGLE (Evolution and Assembly of GaLaxies and their Environments) and IllustrisTNG (The Next Generation) cosmological simulations to investigate the properties of the baryonic specific angular momentum (j), baryonic mass (M) and atomic gas fraction (f_atm) plane for nearby galaxies. We find an excellent agreement between EAGLE and TNG, with both also matching quite well the results obtained with xGASS (eXtended GALEX Arecibo SDSS Survey) for gas fractions greater than 0.01. This implies that the disagreements previously identified between xGASS and predictions from simple analytical disc stability arguments also holds true for EAGLE and TNG. For lower gas fraction (the regime currently unconstrained by observations), both simulations deviate from the plane but still maintain good agreement with each other. Despite the challenges posed by resolution limits at low gas fractions, our findings suggest a potential disconnect between angular momentum and gas fraction in the gas-poor regime, implying that not all gas-poor galaxies have low specific angular momentum. galaxies: evolution – galaxies: ISM – galaxies: kinematics and dynamics § INTRODUCTION Angular momentum is a key property of galaxies, as it is linked to their formation and evolutionary history. It is now known that the stellar angular momentum scales with mass (the so-called Fall relation; ), and that the scatter in the relation is correlated with morphology and stellar structure <cit.>. However, it is still unclear whether stellar structure is the primary physical driver of the scatter in the Fall relation or simply a proxy of the overall accretion history of galaxies. Indeed, from a theoretical point of view, the growth of angular momentum of discs should be tightly connected to their ability to accrete gas <cit.>, potentially implying a more fundamental role of cold gas in driving the scatter of the relation. This is also consistent with theoretical work focused on the link between gas content and disc stability <cit.>.
The well-motivated theory behind the connection of a galaxy's gas content and specific angular momentum has been empirically tested with modest observational samples, but both the sample size and manner in which those data are analysed can be improved. The advent of new datasets allows more detailed investigations of the role of gas content in the build-up of angular momentum in galaxies. In particular, <cit.> used the extended GALEX Arecibo SDSS Survey (xGASS, ) sample, a deep survey which is representative of the local Universe, to study the stellar Fall relation. We found that the most strongly correlated parameter with the scatter of the Fall relation is gas fraction, not bulge-to-total ratio, particularly for low stellar masses and when isolating the disc component of galaxies. In <cit.> we expanded this work and investigated the connection between a galaxy's baryonic angular momentum, baryonic mass and atomic gas fraction, and found that a tight plane exists between these three parameters. However, this plane deviates from predictions from simple analytical models of disc stability <cit.>, which predict a steeper slope and, most importantly, a stronger dependence on gas fraction. A similar result was also found by <cit.> studying a sample of ∼100 disk galaxies with resolved HI rotation curves, suggesting that more detailed modelling is needed to fully unveil the physical connection between mass, angular momentum and cold gas content. The natural next step is to extend such a comparison between xGASS and theoretical models to cosmological hydrodynamical simulations, which are not limited by simplifying assumptions of the models above. While recent years have seen a dramatic increase in the number of studies focused on the origin and drivers of the scatter of the Fall relation <cit.>, no work to date has explicitly investigated the baryonic specific angular momentum (j_bar) – baryonic mass (M_bar) – atomic gas fraction (f_atm) plane (hereafter, the JMG plane), quantified its slope and scatter, and carried out a detailed comparison with observations. This is critical not only to obtain some insights into the physics driving this relation, but also to investigate how universal this JMG plane really is. Despite the xGASS sample showing a greater diversity of galaxies than surveys of just disk galaxies, the sub-sample for which angular momentum can be estimated from line widths is still biased towards galaxies that have contents above the detection limit of the survey. Additionally, although xGASS is a large sample in the context of similar observational samples, the number of galaxies is quite small when compared to large simulation volumes. Therefore, in this work, we wish to further test the robustness of this JMG plane with even greater statistics and galaxy diversity by using large hydrodynamical cosmological simulations. First, we aim to test how well the observations and simulations agree in this parameter space. Then we can use the simulation data to better explore the physical connection between these three parameters. As these simulations are not limited to observational constraints, they allow us to probe down to low gas fractions and weaker rotational support than is possible with observational surveys. This paper is set out as follows. 
We start by describing the archival observational and simulation data used in this work from xGASS, EAGLE (Evolution and Assembly of GaLaxies and their Environments; ) and IllustrisTNG (Illustris The Next Generation, hereafter, TNG; ) in section <ref>. We then explain how we create mock detection samples for the simulations in section <ref>. The results of comparing EAGLE and TNG to xGASS in the M_bar - j_bar - f_atm parameter space is presented in section <ref>. We then discuss the implications of these results in section <ref> and conclude in section <ref>. § DATA DESCRIPTION §.§ xGASS Our observational dataset comes from xGASS <cit.>, which includes galaxies in the stellar mass range of 10^9 to 10^11.5 across the redshift range 0.01<z<0.05 and was selected from SDSS DR6 <cit.>. Galaxies were observed with the Arecibo telescope until detected in or until a gas-fraction limit of 2–10 per cent was reached. The sample, in particular at high stellar masses, was selected to have a nearly flat stellar mass distribution. Overall, xGASS represents arguably the best representative sample of integrated gas properties in the local Universe. In this work, we use the same sub-sample of 564 xGASS galaxies that were used in and . Briefly, this includes only galaxies detected in HI and for which we could accurately determine kinematics; i.e., an inclination greater than 30 degrees and not affected by confusion within the radio beam. We use the stellar and baryonic specific AM that was calculated in and respectively, which are publicly available.[<xgass.icrar.org>] These were estimated by combining stellar mass surface density profiles (within a 10 R_e aperture) with widths. AM was calculated within a large aperture to ensure convergence for galaxies with high Sérsic indices <cit.>. §.§ EAGLE and TNG In this work, we compare our observational results to simulation data from both the EAGLE <cit.> cosmological hydrodynamical simulation and TNG <cit.> magnetohydrodynamical cosmological simulation. For comparisons to EAGLE we use the "reference" EAGLE model and AM values determined in <cit.>, and for TNG we use the quantities obtained by <cit.>. For consistency, we use the 100 comoving Mpc simulation box for both EAGLE and TNG, as they have a comparable number of dark matter particles (2 × 1504^3 and 1820^3, respectively). EAGLE is simulated with smoothed particle hydrodynamics using the gadget-3 code <cit.>, while TNG has discretised gas elements within a moving Voronoi mesh implemented using the arepo code <cit.>. There are various advantages of simultaneously comparing both works to xGASS. First, while both simulations include subgrid models that calculate feedback (from stars and accreting black holes), gas cooling, star formation and black hole growth, the details of the modelling are dramatically different with, for example, AGN feedback being kinetic in TNG and using thermal energy injection in EAGLE (see for further details). Second, the way the subgrid models are calibrated is different. EAGLE calibrates its subgrid models on two scaling relations; the z = 0.1 galaxy stellar mass function and the size–mass relation of disc galaxies. 
In contrast, the primary scaling relations used to calibrate TNG subgrid models are the cosmic SFR density history, the z = 0 galaxy stellar mass function and the z = 0 stellar–halo mass relation, with additional scaling relations used as secondary constraints (the black hole–bulge mass relation, the gas fraction of haloes within R_500c and the stellar size–mass relation, all at z = 0). Third, the way <cit.> and <cit.> calculated AM is different, as we describe below. Both <cit.> and <cit.> calculated AM and associated quantities for the same selection of galaxies (z = 0 and M_⋆ > 10^9 M_⊙; 10 803 galaxies for EAGLE and 20 876 for TNG). EAGLE and TNG do not model atomic gas directly; instead, they determine it in post-processing. We use the HI gas masses from <cit.> and <cit.> that were calculated using the <cit.> theoretical model. This model determines molecular hydrogen fractions as a function of the total column density of neutral hydrogen, metallicity and the density of the stellar disc, and then uses this to infer atomic hydrogen content (for more details of this model see ). f_atm is then defined to be 1.35 M_HI / M_bar, the same as our definition for xGASS (the factor of 1.35 adds the approximate contribution from Helium). We calculate the baryonic AM as the sum of the AM from stars, HI and H_2. In both <cit.> and <cit.> AM is calculated within apertures to be comparable to observations. From <cit.>, we use the AM calculated within 5R_e (half-mass radius of stars). In <cit.> they calculate AM within what they define as the "BaryMP" radius, which is the radius where the gradient of the cumulative baryonic mass profile converges <cit.>. This BaryMP radius is 6.9 R_e on average. In , for xGASS, we calculated AM within an aperture of 10R_e to ensure all of our galaxies had their AM converged. As R_BaryMP varies for each galaxy to a radius where the baryonic mass converges, the AM values determined for TNG will also likely be converged and comparable with xGASS. Additionally, if the equations of AM are solved analytically with a single Sérsic profile <cit.>, a galaxy with a Sérsic index less than 2 will have its AM converge by 5R_e (). Therefore, for the majority of galaxies, the AM determined within 5R_e or 10R_e will be comparable and we assume that the values determined in <cit.> are appropriate to compare with our xGASS values. As we will show in section <ref>, despite these differences, the agreement between the two simulations and xGASS is striking, highlighting how none of the differences in the way the key parameters investigated here are determined significantly affects our analysis. § MOCK DETECTION SAMPLE The left column of Fig. <ref> shows the gas fraction of all galaxies in EAGLE (top row) and TNG (bottom row). The medians for each simulation are represented by blue and magenta dashed lines (for EAGLE and TNG, respectively). These can be compared to the weighted median of xGASS in black (these are taken from and in this figure are weighted to recover a volume-limited sample and include non-detections). Both EAGLE and TNG agree with xGASS for intermediate stellar masses (∼ 10^10 M_⊙ to 10^11 M_⊙), but EAGLE galaxies at low stellar masses are more gas–poor than xGASS, while high–stellar–mass TNG galaxies are slightly more gas–rich than xGASS (consistent with what was found by ). However, it should be noted that the xGASS sample selection is very different from that of the simulations.
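For concreteness, the two key quantities defined above can be written out as a short sketch. This is a minimal illustration with hypothetical variable and function names, not the actual analysis code behind the EAGLE, TNG or xGASS measurements, and the exact definition of M_bar used here (in particular whether H_2 is included) is an assumption stated in the comments:

import numpy as np

def atomic_gas_fraction(m_star, m_hi, m_h2):
    # f_atm = 1.35 M_HI / M_bar; the factor 1.35 adds the approximate helium contribution.
    # Assumed here: M_bar = M_star + 1.35 M_HI + M_H2 (the papers' exact M_bar may differ in detail).
    m_bar = m_star + 1.35 * m_hi + m_h2
    return 1.35 * m_hi / m_bar

def specific_am(masses, positions, velocities):
    # j = |sum_i m_i (r_i x v_i)| / sum_i m_i over the particles inside the chosen aperture
    j_vec = np.sum(masses[:, None] * np.cross(positions, velocities), axis=0)
    return np.linalg.norm(j_vec) / np.sum(masses)

def j_baryonic(j_components, m_components):
    # mass-weighted combination of the stellar, HI and H2 terms,
    # assuming the component angular momenta are aligned (a simplification)
    j_components, m_components = np.asarray(j_components), np.asarray(m_components)
    return np.sum(j_components * m_components) / np.sum(m_components)

Returning to the comparison in the figure discussed above: part of the remaining difference is a matter of sample selection, as the next paragraphs show.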
The xGASS sample was deliberately selected from SDSS (Sloan Digital Sky Survey, ) to over-sample high stellar masses, which is different to the volume-limited samples of both EAGLE and TNG. This can be seen in the stellar mass distributions shown in the left column of Fig. <ref>, where the full xGASS sample is shown as the hollow black histograms, while EAGLE and TNG are blue and magenta respectively (in this figure the xGASS sample have not had the stellar mass weighting applied). To carry out a more fair comparison between xGASS, EAGLE and TNG, we extract from the simulations a sample that has the same gas fraction limit and stellar mass distribution as xGASS. This can be seen in the right column of Fig. <ref> and <ref>. This reduces our EAGLE sample from 10,803 to 7,037 galaxies and our TNG sample from 20,876 to 14,919 galaxies, (although it should be noted that 26 EAGLE galaxies and 3,201 TNG galaxies had no mass in , so are not shown in the left panel of Fig. <ref>). Once the HI detection cut is applied, unsurprisingly the agreement between xGASS and the simulations improves. In the right column of Fig. <ref> both samples follow approximately the same distribution with differences now reduced to 0.4 dex or less, primarily at low stellar masses. The right column of Fig. <ref>, EAGLE and TNG now overlap with the xGASS sub-sample distribution by construction. Throughout this paper when comparing xGASS to EAGLE/ TNG, we will show both the full sample and the "mock detection sample" ( detection cut and stellar mass weighting). This allows us to distinguish between physical differences and selection effects. § RESULTS §.§ Fall Relation We first compare TNG and EAGLE to xGASS observations in the mass–specific AM relation (Fall relation; ). In Fig. <ref>, we show the stellar Fall relations (Panel a) and baryonic Fall relations (Panel b) for both simulations. The left columns show all of the simulated galaxies without a selection cut, while the right columns are only the galaxies in the simulation above the xGASS detection limit (mock detection sample). Each axis shows the simulation running median as the coloured dashed line, and in the right column we show a comparison to the xGASS observational sample in black. We present both columns, to show the effect the sample selection has in this parameter space. In both the stellar and baryonic cases, the mock detection sample has fewer low AM galaxies than the full samples for both simulations. This results in a tighter distribution of galaxies around the median for the mock detection sample. The median AM for low stellar mass galaxies is also higher for the mock detection sample than for the full sample. In the right column, both simulations show good agreement with the xGASS median for M_⋆ > 10^10.3 M_⊙ (i.e., less than 0.1 dex difference), which then increases to a maximum discrepancy at M_⋆ = 10^9 M_⊙ (0.35 dex for TNG and 0.42 dex for EAGLE). We note that at stellar masses below ∼ 10^10 M_⊙ the resolution of both simulations has an impact on the AM measurements determined <cit.>. Therefore, the disagreement at low stellar masses should not be over-interpreted, and we conclude that the agreement between xGASS and the simulation data are reasonable for the mock detection sample. We also show the baryonic Fall relation in Fig. <ref>. We note a better agreement between xGASS and the simulation data for the baryonic Fall relation than the stellar Fall relation. 
Without any selection cuts (left column) for EAGLE, there is a maximum discrepancy of 0.42 dex at 10^11 M_⊙, and less than 0.2 dex difference for M_bar < 10^10.6 M_⊙. TNG has less than 0.2 dex discrepancy at all baryonic masses. When the mock detection sample is considered (right column), any disparities between either of the simulations and the xGASS observations are effectively eliminated. Specifically, TNG and EAGLE have discrepancies of less than 0.1 dex and less than 0.2 dex at all baryonic masses, respectively. This better agreement for the baryonic Fall relation is especially important for this work, as the remainder of the analysis will focus on the baryonic j. §.§ j_bar–M_bar–f_atm plane In this subsection, we now compare both EAGLE and TNG to the j_bar - M_bar - f_atm plane, which was found for xGASS in . The top row of Fig. <ref> shows j_bar against M_bar in four evenly log-spaced f_atm bins. Each column shows medians (in 0.3 dex M_bar bins) of the EAGLE and TNG galaxies in that atomic gas fraction bin. The median in each bin is shown by a blue (EAGLE) or magenta (TNG) line, and the shaded regions show the range of the 16th to 84th percentile for each bin. For comparison, the black lines show the xGASS JMG plane at fixed gas fractions. We show these lines for the gas fractions at the bin edges of each of the columns (f_atm = 0.01, 0.03, 0.1, 0.3 is solid, dashed, dotted and dot-dashed, respectively). In the bottom row of Fig. <ref> we show the residual in dex of these medians with respect to the midpoint of the xGASS JMG plane lines. In Fig. <ref> we only show the mock detection sample, as, in this projection and gas fraction range, there is very little difference between the full simulation sample and the mock detection sample. The mock detection sample is simply a cut in gas fraction with a stellar mass weighting applied. Therefore, the only difference that can be seen between the two samples, is more galaxies in the first column for the full simulation samples that extend to lower baryonic masses. However, for completeness in appendix <ref> we also show the spread of the galaxies in this parameter space as 2D histograms for both the full sample and the mock detection sample, (EAGLE and TNG are shown as Fig. <ref> and <ref> respectively). Regardless of whether the full sample or mock detection samples are used, for gas fractions greater than 0.01, the slope of the JMG plane is in excellent agreement for both the simulations and xGASS. The two middle panels of Fig. <ref> have medians that overlap with the xGASS plane, and the left and right panels have slopes that are consistent within their scatter (shaded region). There is less than 0.2 dex offset of the simulation medians from the allowed region of the xGASS JMG plane. There are small offsets in normalisation, but these are consistent with the offsets observed in gas fraction and j in Fig. <ref> and <ref>. In many ways, the agreement between EAGLE, TNG and xGASS in this parameter space is remarkable. Firstly, it is interesting that both simulations are in good agreement with each other (maximum difference of 0.23 dex between the two simulations' medians) given that they rely on different codes and subgrid physics prescriptions. Often these differences will result in the simulations having slightly different predictions, such as the fraction – stellar mass relation (Fig. <ref>), where low stellar mass EAGLE galaxies are more gas poor than TNG galaxies. 
However, when gas fraction, baryonic mass, and baryonic specific AM are all considered together, as we have in Fig. <ref>, there is very little difference between the two simulations' predictions. Secondly, it is also intriguing how well these simulations agree with the xGASS JMG plane. In particular, this agreement is strongest when considering j_bar, M_bar and f_atm together, rather than when isolating either the – stellar mass relation (Fig. <ref>) or Fall relation (Fig. <ref>) separately. It should be emphasised that, despite EAGLE and TNG having their subgrid models calibrated against many observational scaling relations <cit.>, this is likely not the cause of the tight agreement seen between the simulations and observations in Fig. <ref>. First, although both simulations are calibrated to reproduce the observed stellar mass–size relation, which is closely linked to the Fall relation, Fig. <ref> and <ref> show that the simulated Fall relations have larger discrepancies between the simulations and observations than is seen for the JMG plane in Fig. <ref>. Therefore, the simulations calibration to reproduce the stellar mass–size relation is not the sole cause of the tight agreement between the JMG plane and EAGLE/TNG. Second, neither of these simulations are calibrated to reproduce cold gas content, as the majority of the subgrid models calibrate for stellar content. Therefore, the tight agreement seen between simulations and observations in the j_bar, M_bar and f_atm plane is not due only to the simulations calibration, and is instead a prediction of the models. We also chose to fit a JMG plane directly to the simulation data using the hyper-fit <cit.> Bayesian hyperplane fitting tool, as we did for xGASS data in . As the full simulation data are heavily skewed in their baryonic mass and AM distribution, we choose to only fit a JMG plane to the mock detection sample, (we explore this skewness more in section <ref>). The best fitting parameters of a JMG plane with the form log_10(j_bar) = αlog_10(M_bar) + βlog_10(f_atm) + γ are shown in table <ref>. These values can be compared to those found for in the bottom row of the table. We see that the TNG JMG plane parameters are within errors of the xGASS JMG plane (except for the β parameter), while the EAGLE simulation parameters deviate by more than 3σ. Although, it should be noted that the errors provided on these parameters are uncertainties from the MCMC (Markov Chain Monte Carlo) chain and do not incorporate any uncertainties on individual galaxy values, so will likely be an underestimate of the true error. The projection of both of these fits are shown in Fig. <ref> and <ref>, where it is clear that the best fit is not always an accurate representation of the data distribution, in particular at low gas fraction, so that median values provide a more fair comparison. As we will see in the next section, this is also because at low gas fractions the JMG plane may no longer be able to properly describe the distribution of galaxies in both EAGLE and TNG. We can also compare the standard deviation of galaxies from the JMG plane in the j direction, which is given in the right column of table <ref>. The spread of galaxies is the smallest in xGASS, with TNG being marginally larger and EAGLE being ∼50% larger. In an attempt to better understand these differences, in the next subsection, we look into the scatter around the JMG plane in more depth. 
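As a concrete reference for the scatter discussion that follows, the vertical (j-direction) offsets from a plane of this form can be computed as in the sketch below; the coefficients are placeholders, not the fitted values of table <ref>:

import numpy as np

# log_10(j_bar) = alpha * log_10(M_bar) + beta * log_10(f_atm) + gamma
alpha, beta, gamma = 0.3, 0.4, 0.0   # placeholder values only

def plane_log_j(m_bar, f_atm):
    return alpha * np.log10(m_bar) + beta * np.log10(f_atm) + gamma

def vertical_offset_dex(j_bar, m_bar, f_atm):
    # offset in dex in the j direction; positive values lie above the plane
    return np.log10(j_bar) - plane_log_j(m_bar, f_atm)

A tool such as hyper-fit instead determines alpha, beta and gamma (together with a scatter term) by assuming the data to be normally distributed around the plane, as described in the appendix; the simple vertical offsets above are what the offset histograms in the next subsection show.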
§.§ Scatter from the JMG plane In addition to studying the shape and slope of the JMG plane, it is also important to investigate the scatter around it. The offsets of galaxies from the xGASS JMG plane in the j direction are shown in the left and middle panels of Fig. <ref>. The only difference between the two panels is that the left panel shows all galaxies within either the EAGLE or TNG simulations (blue and magenta histograms, respectively), whereas the middle panel shows only galaxies that are in the mock detection samples. When all galaxies in the simulations are considered, the offsets are strongly positively skewed. This is due to the simulations containing galaxies that have extremely low gas fractions, while still maintaining a moderate baryonic AM. In fact, the majority of the galaxies in this long extended tail have gas fractions that are at or below the limit of where they would be considered accurate. One way to determine if a gas fraction is "accurate" is to count the number of gas particles each contributing at least 5 per cent of their mass to neutral gas. The blue hollow histogram shows the result of excluding all galaxies with less than 10 gas particles that reach this criterion for EAGLE. This illustrates that only a small fraction of galaxies in this extended tail have a sufficient number of gas particles to be reliable. Therefore, when considering the tail of the entire sample, the precise position of a galaxy with respect to the plane should be regarded as an approximation. We will further explore this result in section <ref>. Although this extended tail is dominated by galaxies with uncertain gas masses, there is still a small number of galaxies with reliable gas masses and high offsets. This could indicate that despite gas fraction being a strong predictor of a galaxy's baryonic AM for galaxies with a gas fraction greater than ∼0.01, this relationship breaks down for galaxies with lower gas fractions. In practice, this is unsurprising as, by construction, the dominant mass component will set the baryonic AM of a galaxy. In the gas-poor regime, the gas is no longer the dominant mass component, therefore the JMG plane will be inaccurate. However, the interesting point to note is that not all galaxies in EAGLE and TNG will be slow rotators once they have depleted their gaseous reservoir. We will explore this more in section <ref>, but to summarise, this has two implications; first, the JMG plane is only valid for galaxies that have a significant component and second, the physical connection between baryonic AM and gas fraction is not universal for all galaxies. In summary, the xGASS JMG plane is only applicable to galaxies with a gas fraction greater than ∼0.01. This limit is close to the limit applied to create our mock detection sample. In the middle panel of Fig. <ref>, we show that once galaxies with low gas fractions are removed from the simulation samples, then the long positive skewed tail is removed. The offsets now have a distribution that is much closer to a normal distribution but it is not centred on zero. This can be seen by the values printed in the top left corner, which show the mean (μ) and standard deviation (σ) of a Gaussian fit to these offsets, (in blue and magenta for EAGLE and TNG respectively). This shows that both simulations on average lie ∼0.1 dex below the JMG plane. The standard deviation of these offsets also implies that EAGLE has a larger spread around the JMG plane (σ = 0.23) than TNG (σ = 0.18). In the right column of Fig. 
<ref>, instead of showing both simulations' offsets from the best-fitting JMG plane to xGASS, we now compare the offset of the simulations from the best fitting JMG plane to their own data. Although this does not significantly affect the standard deviation of these offsets, they are now centred on zero. We also compare the offsets of the xGASS galaxies from the xGASS JMG plane in the middle and right panels of Fig. <ref> with the black hollow histograms. The xGASS observational vertical scatter is σ =0.15. This is similar to the spread found for TNG (σ = 0.17) but is smaller than the EAGLE spread (σ = 0.23). When we calculated the xGASS JMG planes' σ, we did not attempt to calculate an intrinsic scatter (which takes into account the observational errors of the values), meaning this should instead reflect a combination of measurement errors as well as intrinsic variations of galaxies from the JMG plane. This was due to us not being confident in the exact values of our errors. For the simulations, it is also hard to determine the exact errors on each galaxy's AM, as there is error associated with splitting the gas particles into phases (as we will elaborate on in section <ref>), the uncertainty in the assumptions used to determine AM and particle shot noise. Therefore, it is unclear if the difference in scatter seen between the observations and simulations is statistically significant. §.§ JMG plane offsets and their relationship to star formation rates The next science question that simulations and their increased statistics allow us to explore is; "Is there any residual dependence on SFR that the JMG plane does not encapsulate?" To address that question we look at the distribution of galaxies with respect to the JMG plane and their ΔMS (offset from the star-forming main sequence). We attempted to do this analysis with xGASS in but were limited in our statistics, so couldn't draw any strong conclusions. Given that these simulations have a factor of 20 higher statistics, this should not be an issue with these simulation data. In Fig. <ref> we investigate whether variations above and below the star-forming main sequence influence a galaxy's position relative to the JMG plane. In this figure, we show EAGLE (top row) and TNG (bottom row) offsets for the mock detection sample (same as those in the right panel of Fig. <ref>). This shows offsets from either the EAGLE or TNG JMG planes (for the top and bottom rows, respectively). Overlaid on these background histograms are subsamples of galaxies that are either 0.5 dex above (blue) or below (red) the star-forming main sequence. The main sequence is re-defined for each simulation's mock detection sample using the curved main sequence from <cit.>.[This result qualitatively does not change irrespective of whether a linear or curved main sequence is used.] In both panels, the mock detection sample, above the MS sub-sample and below the MS sub-sample have a similar distribution around the JMG plane. This result qualitatively does not change if we consider offsets from the JMG plane in the f_atm direction. This implies that variations in SFR around the main sequence do not seem to be mirrored by variations around the JMG plane, and vice-versa, and that structure and SFR are not directly influencing each other. § DISCUSSION Our analysis has shown that in both EAGLE and TNG, galaxies with gas fractions greater than 0.01 lie on a M_bar - j_bar - f_atm plane that is remarkably similar to the empirical one found in . 
Therefore, the tension we found in between the <cit.> gravitational stability model (hereafter model) is also true for EAGLE and TNG. The model has the same qualitative trends as our simulation data but the exact exponents of the model differ from what we find for EAGLE and TNG. As we already discussed in , the model makes several simplifying assumptions for the sake of an analytical argument. These assumptions result in the model predicting a single profile shape given a fixed q:= j_barσ/G M_bar. By using the increased statistics of the simulation data, and its ability to accurately resolve cold gas profiles, we find that the profiles have additional baryonic mass dependence which is not encapsulated in this model. A similar result was found in with halo mass. An example of TNG profiles within a small q-interval is shown in Fig. <ref>. This shows that different baryonic masses exhibit differently shaped profiles. The discrepancy between and the cosmological simulations might be due to this non-trivial scale-dependence. The agreement between xGASS, TNG and EAGLE suggested that we can use the simulations to gain a deeper understanding of the properties and shape of the JMG plane and its implications for galaxy evolution. As we have shown in Fig. <ref>, galaxies in the M_bar–j_bar–f_atm parameter space are well described by a plane (in log space) in both simulations for gas fractions above 0.01. However, below this threshold there is an indication that galaxies may deviate from the JMG plane, as illustrated by the long tail in the distribution shown in the left panel of Fig. <ref>. Galaxies with very low gas fractions have higher baryonic specific AM than predicted by the JMG plane. Although, it should be noted that these galaxies have gas fractions that are at the limit of what would be considered reliable (see results section). Since the simulations do not directly model gas phases, the mass of a galaxy is determined in post-processing, with each gas particle assigned a percentage for neutral and then atomic gas. For galaxies to have such low gas fractions, they have either a small number of gas particles contributing to the atomic mass and/or each gas particle contributes a very small percentage of its mass to atomic gas. In the very gas–poor regime, potential errors in the gas phase splitting and particle shot noise add significant statistical and systematic uncertainties to the gas fraction. To address this, in Fig. <ref> we also show the blue hollow histogram for a sub-sample of galaxies in EAGLE, where uncertain masses are removed. In this figure, we define this sub-sample as galaxies that have at least 10 gas particles, each with at least 5% of their mass in neutral gas. This approach provides a more conservative representation of the distribution of EAGLE galaxies around the plane, now showing only a faint indication of a tail with large offsets.[We investigated variations in the percentage of neutral mass required for gas particles to be considered accurate (10% and 25%) as well as the number of particles requiring this percentage (20 and 30) and obtained qualitatively similar results.] Despite the majority of galaxies within this long extended tail having uncertain masses, there are still 38 galaxies with offsets greater than 1 dex and reliable gas fractions. This provides speculative evidence that gas fraction may not accurately predict a galaxy's baryonic AM in the gas–poor regime. 
It is worth noting that although the gas fractions of most gas-poor galaxies are uncertain (and consequently, their exact offsets from the JMG plane are uncertain), the galaxies as a whole are still resolved, making it unlikely for them to become gas–rich if a higher resolution simulation was conducted. In addition, as their j_bar and M_bar are dominated by stars, these values can be considered accurate. We find that gas–poor galaxies maintain a similar j_bar – M_bar distribution, once they fall below a gas fraction of 0.01. Therefore, excluding all of these gas–poor galaxies from our analysis limits the conclusions that we can draw and is somewhat unnecessary. To address this, in Fig. <ref>, we adopt a slightly different reliability measure; we assume that all galaxies with a total mass greater than the mass of one gas particle to be accurate (for EAGLE M_gas particle = 1.81 × 10^6M_⊙ and for TNG M_gas particle = 1.4 × 10^6M_⊙). Galaxies with masses below this threshold are assigned an upper limit of MHI = M_gas particle. This alternative criterion obtains a similar result to setting a limit based on the number of particles, despite not explicitly checking for an adequate number of gas particles, and allows us to easily apply the same condition to both EAGLE and TNG. Fig. <ref> shows galaxies with f_atm > 0.01, with the left panel showing all galaxies with M_HI > M_gas particle and the right panel the galaxies with masses set to the upper limit. This figure shows that the distribution of f_atm < 0.01 galaxies is preferentially above the JMG plane. M_HI > M_gas particle galaxies are, on average, 0.12 and 0.24 dex above the JMG plane, while galaxies set to the upper limits are 0.53 and 0.62 dex above the plane (for EAGLE and TNG, respectively). In other words, galaxies with low gas fractions possess higher AM at a fixed mass and gas fraction than expected for that gas fraction. We expect that the true distribution of gas–poor galaxies around the plane would be somewhere in between the distributions seen in Fig. <ref> and Fig. <ref>. Once upcoming cosmological simulations that model gas phases directly become available, we will be able to determine the exact relationship between AM and gas content in the gas–poor regime. This result is noteworthy, as we see that a considerable number of galaxies have moderate angular momentum values despite having little-to-no gas. If all gas-poor galaxies were slow rotators, the JMG plane would still hold in the gas-poor regime, which is not the case. Therefore, despite gas being a strong indicator of a galaxy's angular momentum in the gas-normal regime, it breaks down in the gas-poor regime. A similar result was found in where we showed that the scatter of the stellar Fall relation was strongly correlated with gas fraction at low masses (M_⋆ < 10^10.25 M_⊙), even more so than bulge-to-total ratio. However, when we looked at the high mass regime (M_⋆ > 10^10.25 M_⊙) then gas fraction became less dominant, with bulge-to-total ratio having a slightly stronger correlation with scatter than gas fraction. It is not surprising that both EAGLE and TNG indicate the presence of a substantial population of passive galaxies with minimal gas but significant angular momentum. Seminal studies of the Virgo cluster already showed that a large fraction of passive galaxies are structurally more similar to discs than ellipticals <cit.>. 
More recently, the advent of integral field spectroscopic (IFS) surveys has firmly established that the vast majority of passive galaxies show stellar angular momenta not too dissimilar from those observed in star-forming galaxies <cit.>. This clearly highlights how star formation quenching and major structural transformation are two separate (and not always associated) processes in the evolution of galaxies. The fact that the JMG plane is not universal and is valid only for a sub-sample of the local galaxy population does not reduce its importance for galaxy evolution studies. As both EAGLE and TNG implement their star formation and feedback processes differently, the fact that both simulations agree so well with each other could imply that the JMG plane is primarily set by gravitational processes and how cold gas settles in galaxies and reaches equilibrium, rather than any major connection with the way HI is used for star formation. The idea that the shape of the JMG plane is disconnected from the gas–star formation cycle in galaxies (i.e., both quenching and star-forming stages) is further supported by the lack of any correlation between a galaxy's offset from the j_bar - M_bar - f_atm plane and its position with respect to the star-forming main sequence. In other words, galaxies that have higher SFR with respect to the main sequence are not preferentially above or below the JMG plane, and vice versa. The trends shown in Fig. <ref> are qualitatively the same if offsets are calculated in the f_atm direction. This implies that the physical processes causing an increase (or decrease) in SFR are not driven by processes that cause an increase (or decrease) in atomic gas fraction with respect to the j_bar - M_bar - f_atm plane. This is interesting given that ΔMS is strongly correlated with offsets from the M_⋆ - f_atm relation <cit.>, and we see a similar correlation for the offsets from the M_bar - f_atm relation for both EAGLE and TNG. We can speculate on potential scenarios that could cause the excess cold gas (with respect to the JMG plane) to not be correlated with SFR, such as a large ring of stable HI in the outskirts of a galaxy, which could result in an increase in gas fraction without triggering a starburst event. However, more work is needed to determine the process (or processes) driving the scatter of this JMG plane, and why it is disconnected from galaxies' star formation rates. § CONCLUSIONS This study presents a comprehensive comparison between the j_bar - M_bar - f_atm plane for xGASS data presented in , and cosmological simulation data from EAGLE and TNG. We compared all the galaxies in each simulation volume, and mock detection samples, to determine how sample selection could be affecting our results. We summarise our main conclusions as follows: * The j_bar - M_bar - f_atm plane found for the xGASS sample is consistent in both orientation and scatter with the EAGLE and TNG mock detection samples and full simulation samples, for f_atm > 0.01. * There is moderate evidence that for gas fractions below f_atm ∼ 0.01, the simulations deviate from the empirical JMG plane, asymptoting towards a constant j_bar - M_bar distribution that no longer depends on f_atm. * The scatter in this JMG plane is independent of ΔMS (for f_atm > 0.01), suggesting that the processes causing deviations from the star-forming main sequence do not affect the processes causing deviations from the JMG plane.
It would be interesting for future works to investigate tracking these simulated galaxies through different redshift snapshots to see if this gives insights into the drivers of scatter in this JMG plane and the factors contributing to its deviation at low gas fractions. § ACKNOWLEDGEMENTS JAH and LC acknowledge support from the Australian Research Council (FT180100066). Parts of this research were conducted by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. DO is a recipient of an Australian Research Council Future Fellowship (FT190100083), funded by the Australian Government. ARHS is funded through the Jim Buckee Fellowship at UWA. We acknowledge the Virgo Consortium for making their simulation data available. The EAGLE simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruyeres-le-Chatel. § DATA AVAILABILITY All of the xGASS data used in this work are publicly available at <www.xgass.icrar.org>. The EAGLE simulations are publicly available; see <cit.> for how to access EAGLE data. For access to the TNG data used here, please contact ARHS. Otherwise, the public-facing TNG database has similar – although, not identically calculated – galaxy properties available at <www.tng-project.org/data/> mnras § DETAILED INVESTIGATION OF THE SIMULATION PLANES For conciseness, in Fig. <ref> we present only the comparison between the xGASS JMG plane and simulations for the binned medians of the mock detection sample. For completeness, in this section, we also show the full EAGLE and TNG samples, as well as showing the 2D histogram distribution of the galaxies. This is shown in Fig. <ref> and <ref> for EAGLE and TNG respectively. We chose to only show the mock detection sample in Fig. <ref> because, in the gas fraction range that is greater than 0.03, the binned medians of the full sample and the mock detection sample are almost identical, which can be seen when comparing the top and bottom rows of Fig. <ref> and <ref>. The only differences are seen for 0.01 ≤ f_atm≤ 0.03, where there is a smaller range in M_bar for the mock detection sample. However, the qualitative agreement with the JMG plane is similar for both the mock and full samples. Fig. <ref> and <ref> also highlight that the EAGLE simulation has a much larger spread in j_bar values (at fixed baryonic mass and atomic gas fraction) than TNG. This is most evident when comparing the full samples of both simulations, but is also seen when comparing the mock detection samples. This large spread in the data for the full simulation samples, and in particular, the non-Gaussianity of the scatter, which, when combined with the asymptotic behaviour of galaxies approaching a fixed j_bar at extremely low gas fractions, meant that we could not fit a JMG plane directly to the full simulation data. hyper-fit <cit.> is a Bayesian fitting tool that assumes data to be normally distributed around the JMG plane. Although the code will give a mathematically correct solution even when the data is not normally distributed, this solution is not physically meaningful. Therefore, we do not show the fits to the full simulation data in this work. As the mock detection sample is closer to a Gaussian distribution, we fit a JMG plane to these samples. The parameters of this fit are given in table <ref> and shown in the bottom rows of Fig. 
<ref> and <ref> as blue and magenta hashed regions (for EAGLE and TNG respectively).
http://arxiv.org/abs/2307.00902v1
20230703095530
Multi-messenger Observations of Tidal Disruption Events
[ "Simeon Reusch" ]
astro-ph.HE
[ "astro-ph.HE" ]
§ TIDAL DISRUPTION EVENTS Tidal Disruption Events (TDEs) were already predicted in the 1970s <cit.>. The underlying idea could come from a curious child's mind: What happens if a star orbiting the supermassive black hole (SMBH) at the center of a galaxy falls into the black hole? At first glance, the answer is straightforward: When the orbiting star gets close to the black hole, the resulting tidal forces from the black hole grow larger than the star's self-gravity and destroy it <cit.>. Roughly half of the star's mass gets accreted around the black hole, and the bright electromagnetic flare resulting from this accretion can last for months. However, the exact mechanisms that cause this emission are still broadly debated. To understand them, we need observational data. The first few TDEs were discovered in the 1990s and early 2000s at X-ray wavelengths. For a long time, none were detected in optical searches. Only in the last decade did that suddenly change <cit.>, thanks to the advent of optical all-sky surveys such as Pan-STARRS <cit.>, ASASSN <cit.>, Gaia <cit.> or the Zwicky Transient Facility (ZTF) <cit.>. The rising number of optically discovered TDEs can be seen in Fig. <ref>. The large field of view and high cadence of these survey telescopes make them the ideal discovery tools for TDEs. As of December 2022, roughly 100 TDEs have been discovered in total, with more than 30 detected during the 2.6 years of ZTF Phase I alone <cit.>. The observed continuum emission of TDEs can be approximated fairly well by a blackbody. However, the temperature distribution seems to be bimodal, with one population peaking in optical/ultraviolet (UV) wavelengths, and a second X-ray bright population best described by blackbodies with higher temperatures (see <cit.> for a review). As the newly formed accretion disk from a TDE will most likely be very hot, it is not clear where the optical/UV emission comes from. Also, the blackbody radii inferred from this emission exceed the newly formed accretion disk's size by 1-2 orders of magnitude. So far, multiple models have been proposed to explain the optical/UV light, such as semi-relativistic outflows or winds, as well as shock acceleration stemming from the tidal stream intersecting itself. If the debris resulting from the tidal disruption is rapidly circularized, the bimodal distribution (X-ray vs. optical/UV) might be reconciled by a unified TDE model, as proposed in <cit.>. In this model, the most prominent wavelength detected depends on the viewing angle, as shown in figure <ref>: When one looks into the funnel perpendicular to the disk, one can see the X-rays from the inner disk. From a side-on view, the X-rays will be obscured and one can only see the optical and UV emission stemming from X-rays reprocessed in the outer disk or in outflows. Intermediate viewing angles will produce a mixture of both signals. Additionally, around 1% of TDEs launch relativistic jets, such as the recently discovered AT2022cmc <cit.>. Such jets (labeled as (1) in Fig. <ref>), as well as the possible winds or outflows (2), or a potentially present disk corona (3), have been proposed as production sites for high-energy neutrinos, making TDEs promising candidates for multi-messenger emission.
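To make the disruption condition above concrete, the radius at which the tidal force exceeds the star's self-gravity can be estimated with the standard order-of-magnitude expression r_T ≈ R_* (M_BH / M_*)^(1/3), which is not spelled out in the text but follows from equating the two forces. A short numerical sketch for a Sun-like star and an assumed 10^6 M_⊙ black hole:

G, c = 6.674e-11, 2.998e8            # SI units
m_sun, r_sun = 1.989e30, 6.957e8     # solar mass [kg] and solar radius [m]

m_bh = 1e6 * m_sun                   # assumed black hole mass
r_tidal = r_sun * (m_bh / m_sun) ** (1.0 / 3.0)
r_schw = 2.0 * G * m_bh / c**2       # Schwarzschild radius

print(f"tidal radius         ~ {r_tidal:.1e} m")   # ~7e10 m, roughly half an au
print(f"Schwarzschild radius ~ {r_schw:.1e} m")    # ~3e9 m

Since for such black hole masses the disruption happens well outside the horizon, an accretion flow with the jets, winds or outflows, and corona listed above can form, and these are the candidate neutrino production sites considered in the following.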
In all of these scenarios, protons are efficiently accelerated to high energies and subsequently interact with other protons or a photon target field (pp or pγ interaction) <cit.>. § OUR SEARCH FOR SOURCES OF HIGH-ENERGY NEUTRINOS Detecting neutrinos is notoriously hard, as they only interact via the weak nuclear force. In most cases, they are detected indirectly using large reservoirs of matter. In the case of the IceCube detector, the medium of choice is (surprise) ice. Charged-current interactions of muon neutrinos with the ice produce muons. These muons emit Cherenkov radiation while travelling through the ice, as their speed exceeds the speed of light in this medium. This light is then detected by photomultiplier tubes within the ice. Using the light intensity and timing information, one can reconstruct the direction of origin of the neutrino. The reconstructions of these muon tracks typically result in 90% rectangular uncertainty areas ranging from a few to a few tens of square degrees. Other neutrino interactions produce spherical light patterns in the detector, which do not allow for good angular reconstructions. Recently, the nearby galaxy NGC1068 made the headlines because of a correlation with the full set of astrophysical neutrinos detected by IceCube with an excess of 79 neutrinos with TeV energies at a 4.2 σ level <cit.>. It has also been possible to identify counterparts to single high-energy neutrino alerts. So far, though, only a handful of high-energy neutrinos have been tied to likely sources. The most prominent one was IC170922A, for which the flaring blazar TXS 0506+056 was identified as the likely origin <cit.>. To find more candidate counterparts, ZTF has been operating a systematic optical follow-up program since 2019, targeting the 90% sky localizations of selected high-energy neutrino alerts from IceCube. ZTF is an optical survey telescope located at Mt. Palomar in California, with an average depth of 20.5 mag. It has a very large field of view of 47 sq. deg, dwarfing all other optical survey telescopes, including the future Rubin Observatory. This makes it a great choice for follow-up, as usually one pointing per band is enough to cover the respective uncertainty areas <cit.>. We receive the alerts from IceCube via the low-latency Gamma-ray Coordination Network (GCN). When either the probability of the neutrino of not being atmospheric background is larger than 50% or its uncertainty area is smaller than 10 sq. deg, we follow up the alert, provided the sky location is accessible to ZTF and the uncertainty area is smaller than 40 sq. deg. We react as fast as possible, with 300 second observations (with a typical depth of 21.5 mag) in the first night, followed by shallower 30 second observations during subsequent nights to monitor the time evolution of our candidates. So far, 30% of all alerts have led to observations, with a total number of 32 observational follow-up campaigns as of December 2022. The transients we get from observing are then filtered with <cit.>, our dedicated follow-up pipeline. This pipeline makes heavy use of , a broker and analysis framework designed to facilitate the live and archival processing of large numbers of astronomical transients <cit.>. Considering the vast number of transient alerts generated by ZTF each night, automated filtering is strictly necessary. 
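The alert-level selection just described amounts to a simple boolean decision. The sketch below encodes the criteria stated above; the function and variable names are hypothetical, and the real program naturally involves additional human judgement:

def follow_up_neutrino_alert(signalness, area_sq_deg, observable_by_ztf):
    # signalness: probability that the neutrino is astrophysical rather than atmospheric background
    # area_sq_deg: 90% rectangular localization area reported by IceCube
    interesting = (signalness > 0.5) or (area_sq_deg < 10.0)
    feasible = observable_by_ztf and (area_sq_deg < 40.0)
    return interesting and feasible

# example: a well-localized alert is followed up even if its signalness is modest
print(follow_up_neutrino_alert(signalness=0.3, area_sq_deg=6.0, observable_by_ztf=True))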
We cut down the large number of alerts by requiring that candidates a) lie within the 90% localization region of IceCube, b) are detected at least once after the neutrino has been detected, c) have a sufficiently high probability of being real (not a subtraction artefact) and finally d) are likely not a solar system object or star. To increase the quality of the lightcurves and to be sensitive to pre-neutrino activity, we usually obtain forced photometry for candidate counterparts with <cit.>, our forced photometry pipeline. The follow-up program is designed to be sensitive to the following high-energy neutrino production environments: Supernovae with evidence of interaction with their circumstellar medium, supernovae with relativistic jets, gamma-ray bursts, active galactic nucleus (AGN) activity that is correlated to optical flares, and TDEs. After observing and automated filtering, we vet the remaining candidates and trigger subsequent observations, most importantly spectroscopy, to classify them. For more details on the program, see <cit.>. The remains of these proceedings will focus on the two (candidate) TDEs we found in temporal and spatial coincidence with high-energy neutrinos during the operation so far (AT2019dsg and AT2019fdr), as well as a third candidate that emerged during a subsequent analysis (AT2019aalc). § THREE CANDIDATE TIDAL DISRUPTION EVENTS ASSOCIATED WITH HIGH-ENERGY NEUTRINOS §.§ First association: AT2019dsg On October 1, 2019, we triggered follow up to IceCube neutrino IC191001A. Among the candidates selected by our pipeline was the several months old TDE AT2019dsg. As is the case for all ZTF detected TDEs, observations by NASA's Neil Gehrels Swift Observatory <cit.> had already been carried out with the Ultra-Violet/Optical Telescope (UVOT) <cit.>, revealing bright emission in the UV tracing the brightness evolution of the optical wavelengths in the lightcurve. The blackbody temperature inferred from the optical/UV measurements was 10^4.6  K, somewhat hot for a TDE <cit.>. Additionally, observations had been carried out with the X-ray Telescope (XRT) <cit.> aboard Swift, showing soft X-ray emission fading to non-detection within 70 days after discovery. This could either be explained by cooling of the accretion disk or by an obscuration effect. The X-ray detection is not uncommon (9 of the 30 ZTF Phase-I TDEs are detected in X-rays) <cit.>. A more peculiar feature is the radio detection of AT2019dsg. Long-lasting radio emission has been detected by a variety of instruments: The Large Array of the Arcminute MicroKelvin Imager (AMI) <cit.>, MeerKAT <cit.> and the Karl G. Jansky Very Large Array (VLA) <cit.>. These showed temporal evolution over the course of months after the detection of the neutrino. The interpretation of the radio signal has been somewhat disputed, but at the very least it confirms long-lived non-thermal emission <cit.>. Lastly, later analysis of data from the Wide-field Infrared Survey Explorer (WISE) <cit.> showed a prominent infrared (IR) flux increase with respect to epochs prior to the TDE – a feature I will come back to. §.§ Second association: AT2019fdr Half a year later, AT2019fdr was found in coincidence with IceCube neutrino IC200530A. At the time of neutrino detection it was already 10 months old, and its nature somewhat disputed. 
Within the ZTF collaboration, it was initially classified as a superluminous supernova of type II (SLSN II), though a study of peculiar accretion flares in Narrow-line Seyfert Galaxies (NLSy1) argued for a TDE nature of the flare on the basis of AT2019fdr's long-lived and bright U-band and UV emission, the proximity to the core of its host galaxy as well as emission at the blue end of the Balmer line profiles and the flare's overall longevity <cit.>. To this body of evidence we added a late-time X-ray detection and a bright infrared dust echo, which makes a SLSN II interpretation even less likely (see below). We think that AT2019fdr can be classified as TDE, albeit an unusual one. Its peculiarity should not be surprising, as it is located in an AGN, a somewhat different environment than a quiescent host galaxy. AT2019fdr's lightcurve and spectral energy distributions (SED) during three different epochs can be seen in Fig. <ref>. Each the optical/UV and the infrared part of the SED are well approximated by a blackbody. Note that AT2019fdr is one of the most luminous transients ever discovered: From the optical/UV blackbody one can infer a total bolometric energy of at least 10^52  erg (after fitting for extinction). Fig. <ref> also shows a late-time detection by eROSITA <cit.> aboard Spektrum Roentgen-Gamma (SRG) <cit.>. This detection displayed a very soft thermal spectrum with a blackbody temperature of only 56^+32_-26  eV. As X-ray emission is very unusual for SLSNe, we count this as further evidence for AT2019fdr being a TDE. We complemented the mid-infrared WISE measurements with near-infrared observations with the Wide Field Infrared Camera (WIRC) <cit.> mounted on the P200 telescope at Mt. Palomar. These comprise a blackbody, which together with the delayed evolution of the IR signal with respect to the optical flare strongly suggests that we see a dust echo. This is the reprocessing of the flare by a surrounding dust region that gets hot and begins to emit thermally after a delay caused by the light travel time: Light along the line of sight will arrive simultaneous to the optical signal, but light perpendicular to it will arrive later. Finally, light from the back of the system will arrive. A fitted dust echo and a sketch illustrating the dust system can be seen in Fig. <ref>. From this fit and by comparing the optical/UV and IR luminosities, one can infer the distance of the dust shell to the core (0.16  pc) and a covering factor of 1/3. This value is rather large compared to TDEs in quiescent galaxies, but in good agreement with comparable events <cit.>. The discovery of this dust echo, alongside the large energy budget and slow temporal evolution, suggests that AT2019fdr belongs to a class of TDE candidates occurring in AGN; an especially violent environment which readily explains the amounts of dust required for such a luminous dust echo with a high covering factor. It can also be counted as evidence for the TDE interpretation, as the amount of dust intrinsically produced by SNe is fairly limited, which is hard to reconcile with the covering factor observed for AT2019fdr <cit.>. Furthermore, radio measurements obtained with VLA over multiple epochs showed a consistent signal. However, as it did not display signs of time evolution, the radio emission most likely stems from the underlying AGN. The combined chance coincidence for finding AT2019dsg and AT2019fdr in coincidence with high-energy neutrino alerts is 0.03%. 
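As a sanity check on the dust-echo geometry described above, the inferred dust distance should be consistent with a simple light-travel-time argument, R ∼ c Δt (ignoring the detailed shell geometry, which smears the response over up to ∼2R/c). A short sketch using the 0.16 pc quoted above:

c = 2.998e8          # m/s
pc = 3.086e16        # m
day = 86400.0        # s

r_dust = 0.16 * pc   # dust distance inferred for AT2019fdr
delay_days = r_dust / c / day
print(f"light-travel delay ~ {delay_days:.0f} days")   # roughly 190 days

This is of the same order as the months-long delay between the optical flare and the infrared peak, which is what makes the dust-echo interpretation natural.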
Note that a stacking analysis from 2019 has constrained the contribution of TDEs to the observed diffuse IceCube neutrino flux to ≤ 30% <cit.>. If one accounts for TDEs and candidate TDEs like AT2019fdr, at least 7.8% of astrophysical IceCube high-energy neutrinos would come from this broader population <cit.>. §.§ Third association (from dust echo study): AT2019aalc The discovery of two prominent dust echoes prompted a search for other optical flares accompanied by a WISE-detected dust echo in coincidence with high-energy neutrinos. A third candidate counterpart emerged from this search, which was previously missed as its location was not observable by ZTF at the time of neutrino detection: AT2019aalc, probable counterpart to neutrino IC191119A. Much less is known about this event as it flew under the radar for quite a while. It was discarded as probable AGN activity during scanning and was not scrutinized further until it emerged as neutrino counterpart candidate months later. In AT2019aalc's case, the neutrino arrived about 5 months after the optical peak. The lightcurve is consistent with this event also being a TDE, though this classification is much less secure compared to AT2019fdr, as there are no spectra or UV measurements available. The peak g-band luminosity of AT2019aalc is comparable to AT2019dsg, and its dust echo luminosity is the largest of all three events – although its dust echo strength as defined in <cit.> is lower due to the pre-flare variability. Interestingly, it was also detected by SRG/eROSITA on a regular visit, displaying a fairly soft spectrum with a blackbody temperature of 172±10  eV. This can be counted as evidence in favor of a TDE interpretation. There was also an archival radio detection by the VLA Sky-Survey (VLASS) <cit.>. The significance of finding three events in spatial and temporal coincidence with a high-energy neutrino is 3.7σ <cit.>. Note that the soft X-ray emission detected for all transients was not part of the initial selection criteria. § COMPARISON A comparison of the relevant measurements and inferred quantities for the three events is shown in Table <ref>. All three events have in common that the neutrino was emitted significantly later than the optical peak (with a delay of 5-10 months). Furthermore, all were detected in X-rays with fairly low temperature, and all showed prominent dust echoes. One can try to draw conclusions for production scenarios from these facts. The observed delay might either be a purely statistical effect, but could also carry physical meaning. In <cit.> this is explained by assuming that the debris first needs to circularize before the first neutrinos can be produced. In <cit.>, the observed neutrino delay is combined with the dust echo present in all three candidate counterparts and which is peaking around the time of neutrino detection in all three cases (see Fig. <ref>). In one of the models presented, the infrared dust-echo photons serve as the target for accelerated protons. In this scenario the neutrino delay arises naturally from the delayed IR emission. The energies involved would render TDEs interesting candidates for Ultra-High Energy Cosmic Ray (UHECR) emission. This model comes at the cost of requiring very high proton energies. A companion model proposing X-ray target photons, which are also available for all three sources, can explain the neutrino delay too. In this case, the delay arises from the confinement of protons with moderate energy. 
This model explains the observed neutrino energies better, at the cost of describing the observed neutrino delay less well than the IR model. A third model uses the optical/UV emission as target (similar to <cit.>). This has the advantage of yielding the highest neutrino production efficiency of all three models, but fails to describe the observed neutrino time delay. Deciding whether one of these models correctly describes the possible neutrino production mechanisms in TDEs will require more candidate counterparts to test the predictions <cit.>. § SUMMARY AND OUTLOOK So far, three accretion flares have been found in temporal and spatial coincidence with high-energy neutrinos detected by IceCube. One of them was a bona fide TDE (AT2019dsg) and two were candidate TDEs (AT2019fdr and AT2019aalc). In all three cases, the detected neutrino was delayed with respect to the optical peak, and an X-ray signal with a soft spectrum was detected. All three optical flares were accompanied by a large infrared echo pointing to significant amounts of dust surrounding the black hole. How can we be sure that these associations between the neutrinos and the events hold? The answer is easy: Continue the systematic follow-up program, and see what happens. If we can make more associations, the chance coincidences will keep dropping. If nothing new turns up, then that is just the way the universe works. AT2019fdr is interesting nevertheless, as its sheer brightness makes it highly unusual. This event is also remarkably long-lived, as we are still detecting it as of December 2022, over three years after the optical peak. Furthermore, it recently showed signs of optical rebrightening. There is still a lot to be learned about TDEs in AGN.
http://arxiv.org/abs/2307.01573v1
20230704085938
Collider physics with no PDFs
[ "Tuomas Lappi", "Heikki Mäntysaari", "Hannu Paukkunen", "Mirja Tevio" ]
hep-ph
[ "hep-ph", "nucl-th" ]
Collider physics with no PDFs Tuomas Lappi, Heikki Mäntysaari, Hannu Paukkunen, and Mirja Tevio Department of Physics, University of Jyvaskyla, P.O. Box 35, 40014 University of Jyvaskyla, Finland Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki, Finland Measurements of Deep Inelastic Scattering (DIS) provide a powerful tool to probe the fundamental structure of protons and other nuclei. The DIS cross sections can be expressed in terms of structure functions, which are conventionally expressed in terms of parton distribution functions (PDFs) that obey the DGLAP evolution equations. However, it is also possible to formulate the DGLAP evolution directly in terms of measurable DIS structure functions, entirely sidestepping the need for introducing PDFs. We call this the physical-basis approach. In a global analysis one would thereby directly parametrize the (observable) structure functions – not the (unobservable) PDFs. Ideally, with data constraints at fixed Q^2, the initial condition for the evolution would be the same at each perturbative order (unlike for PDFs), and the approach thus provides a cleaner test of the QCD dynamics. We first study a physical basis consisting of the structure functions F_2 and F_L in the fixed-flavour-number scheme to the leading non-zero order in α_s. We show how to express the quark singlet and gluon PDFs in terms of F_2 and F_L directly in momentum space, which then leads to the DGLAP evolution of the structure functions F_2 and F_L. In the second step we expand the physical basis to include six independent structure functions, which allows for a consistent global analysis. The steps towards NLO accuracy and the variable-flavour-number scheme are outlined. At NLO accuracy (when the scheme dependence of PDFs starts to play a part), we can take advantage of the physical basis and express e.g. the Drell-Yan cross sections at the LHC directly in terms of measurable DIS structure functions and thus without the scheme dependence. DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023 § INTRODUCTION As an alternative to the conventional Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution for parton distribution functions (PDFs), one can also formulate the scale evolution directly for observable quantities such as the deep inelastic scattering (DIS) structure functions. The advantage of this approach is that there is then no need to define scheme-dependent renormalized parton distribution functions when going beyond the leading-order accuracy. In this work we will refer to this as the physical basis. The existence of a physical basis follows from the fact that, as we explicitly demonstrate in this work, one can find a one-to-one mapping between the parton distribution functions and the DIS structure functions. Therefore the parton distribution functions satisfying the DGLAP equation can also be expressed in terms of the DIS observables. By introducing the DGLAP evolution in a physical basis one could – in principle – fix the initial condition for the evolution directly by data at fixed virtuality Q^2, and the initial condition would be the same to all orders in perturbation theory. In practice, however, the kinematic coverage of the available data is limited, and global fitting similar to the case of PDFs will be necessary.
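Both the conventional DGLAP equations and their physical-basis counterparts written in the following sections are built from convolutions in the momentum fraction x of the generic form (C ⊗ f)(x) = ∫_x^1 (dz/z) C(z) f(x/z). As a purely illustrative aside – not the numerical setup of this work – such a convolution with a regular (non-distributional) kernel can be evaluated with a few lines of quadrature code; the kernel and input below are toy placeholders:

import numpy as np

def convolve(C, f, x, n=2000):
    """Evaluate (C ⊗ f)(x) = ∫_x^1 dz/z C(z) f(x/z) for a regular kernel C."""
    z = np.linspace(x, 1.0, n)
    return np.trapz(C(z) * f(x / z) / z, z)

# Toy placeholders: a smooth kernel and a made-up gluon-like input shape.
C_toy = lambda z: 4.0 * z * (1.0 - z)
f_toy = lambda y: y**(-0.2) * (1.0 - y)**5

print(convolve(C_toy, f_toy, x=1e-3))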
Conceptually, the physical basis is not a new idea, and it has been discussed for example in Ref. <cit.>. The novelty of our work <cit.> is that, instead of studying a case-specific physical basis, we construct a full dimension-six physical basis of structure functions which corresponds to PDFs with three active parton flavors. We use the first non-zero order in the running strong coupling α_s for F_2 and F_L and solve the evolution equations directly in momentum space. By formulating the final result in momentum space we are able to analytically define the DGLAP evolution in the physical basis in terms of observable structure functions only. § EVOLUTION IN A TWO-OBSERVABLE PHYSICAL BASIS We first consider an illustrative approach to the physical basis where we have only two independent observables. In this case our physical basis consists of the DIS structure functions F_2 and F_L, taking into account only the massless quark singlet, Σ(x, μ_f^2) = ∑_q[q(x, μ_f^2)+q̄(x, μ_f^2)], and the gluon PDF g(x, μ_f^2), where μ_f is the factorization scale. We work in the first non-zero order in α_s, meaning that F_2 ∼ α_s^0 and F_L ∼ α_s^1. The structure functions F_2 and F_L can be written as 1/⟨e^2⟩ F_2(x, Q^2)/x = C_F_2Σ^(0)⊗Σ(x, μ_f^2) , 1/⟨e^2⟩ F_L(x, Q^2)/x = α_s(μ^2_r)/2π C_F_LΣ^(1)⊗Σ(x, μ_f^2) + 2n_f α_s(μ^2_r)/2π C_F_Lg^(1)⊗ g(x, μ_f^2) , where at the first non-zero order the coefficient functions are C_F_2Σ^(0)(z) = δ(1-z) , C_F_LΣ^(1)(z) = 2 C_F z , and C_F_Lg^(1)(z) = 4 T_R z(1-z) . Here we used N_c=C_A=3, C_F=(N_c^2-1)/(2N_c), and T_R=1/2. In this work n_f=3 is the number of massless flavours and ⟨e^2⟩ is the average quark charge, ⟨e^2⟩ ≡ 1/n_f∑_q e_q^2 , where e_q denotes the electric charge of quark q. The aim in this work is to write the Q^2 evolution equations directly for the structure functions F_2 and F_L. This requires us to first invert Eqs. (<ref>) and (<ref>) such that the singlet and gluon PDF can be expressed in terms of F_2 and F_L. This results in Σ(x,μ_f^2) = 1/⟨e^2⟩ F_2(x, Q^2)/x , g(x, μ_f^2) = 1/( _g⊗ +_g⊗ +_g⊗+_g⊗+_g⊗) , where ≡(x, Q^2)/x ≡2π/α_s(μ_r^2)(x, Q^2)/x F'_2,L ≡ x/xF_2,L(x, Q^2) ≡ x^2[2]/x^2(x, Q^2) . The coefficient functions _g, _g, _g, _g, and _g are listed in Ref. <cit.>. When we take Q^2 derivatives of Eqs. (<ref>) and (<ref>), and then use Eqs. (<ref>) and (<ref>), we arrive at the evolution equations for F_2 and F_L, ∂/∂log Q^2[ F_2(x, Q^2)/x] = α_s(Q^2)/2π[ C_F_2Σ^(0)⊗ P_qq⊗ + 2 C_F_2Σ^(0)⊗ P_qg⊗( _g⊗ +_g⊗ +_g⊗+_g⊗+_g⊗) ] , ∂/∂log Q^2[ 2π/α_s(Q^2)F_L(x, Q^2)/x] = (α_s(Q^2)/2π) [ C_F_LΣ^(1)⊗ P_qq + 2n_f C_F_Lg^(1)⊗ P_gq] ⊗ + 2 (α_s(Q^2)/2π) [ C_F_LΣ^(1)⊗ P_qg + C_F_Lg^(1)⊗ P_gg] ⊗( _g⊗ +_g⊗ +_g⊗+_g⊗+_g⊗) , where we have set the renormalization scale to be μ_r^2 = Q^2. Here P_qq, P_qg, P_gg, and P_gq are the LO splitting functions listed in Ref. <cit.>. These equations include double convolutions which can, however, be analytically reduced to a single one, see Ref. <cit.>. § EVOLUTION OF A SIX-OBSERVABLE PHYSICAL BASIS In this section we repeat the same steps as in Sec. <ref>, but now we consider a more complete setup by distinguishing between the light quark flavors. We still include only the light quark flavors and continue to work at the first non-zero order in α_s. We separate the quark distributions u ≠ ū and d ≠ d̄, but keep s = s̄ to limit the number of observables needed in the physical basis. Including the gluon distribution, we have in total six PDFs. In order to express the PDFs in terms of physical observables, we first need to collect a set of six linearly independent DIS structure functions. From neutral current DIS we choose the structure functions , and .
From charged current DIS we choose , , and corresponding to the W^--boson exchange. In the first non-zero order in structure functions , , , and are expressed in terms of PDFs as <cit.> ([ ; ; ; ; ]) = ( [ x x x x 2x; 2(L^2_d-R_d^2) -2(L^2_d-R_d^2) 2(L^2_u-R_u^2) -2(L^2_u-R_u^2) 0; 0 2x 2x 0 2x; 0 -2 2 0 -2; 0 0 0 0 2x ]) ( [ d; d; u; u; s ]) . Here, L_q = T_q^3-2e_qsin^2 θ_W and R_q =-2e_qsin^2 θ_W, where θ_W denotes the Weinberg angle and T_q^3 is the third component of the weak isospin. Now the structure function reads (x, Q^2) = (Q^2)/2πx[^(1)⊗(Q^2) +2^(1)⊗ g(Q^2)] , where the coefficient functions ^(1) =^(1) and ^(1) were defined in Eq. (<ref>). The expressions for PDFs in terms of the structure functions are listed in Ref. <cit.>. As in previous section, we can relate the DGLAP evolution of the structure functions to the DGLAP evolution of the quark and antiquark PDFs. By taking Q^2 derivatives of the structure functions defined in Eq. (<ref>) we arrive at DGLAP evolutions for structure functions , , , , , and , which form a six dimensional physical basis. The obtained Q^2 dependencies of , , , and in the six-observable physical basis are shown in Fig. <ref>. Initial conditions for the physical basis evolution are computed at Q^2 = 2.0 GeV^2 using Eqs. (<ref>) and (<ref>) with the CTEQ ( <cit.>) set of LO PDFs. As expected, within the numerical accuracy, the Q^2 dependencies are found to match with the values obtained by computing the structure functions directly from Eqs. (<ref>) and (<ref>) using DGLAP-evolved PDFs. The discrepancies for around x=10^-8 are presumably due to numerical noise. The overall excellent agreement validates the obtained evolution in the physical basis. § SUMMARY We have shown how the DGLAP evolution can be directly formulated in terms of observable DIS structure functions in the case of three light quarks, first non-zero order in . We first considered a toy model with only the light quark singlet and the gluon PDF, and constructed a physical basis with only structure functions and . Then we proceeded to study a more complete case by considering the quark PDFs u, u, d, d, and s=s together with the gluon PDF. We constructed a corresponding physical basis with six observables , , , , , and . We also confirmed numerically that the results obtained by performing the DGLAP evolution in the physical basis result in the same Q^2 evolution as in the conventional approach with DGLAP-evolved PDFs. In future work we will expand the perturbative order to reach the second non-zero order in . At that order the we can fully exploit the advantage of the physical basis advocated in this work, as it becomes possible to avoid the scheme dependence which otherwise manifests itself at NLO. In the future, we also intend to extend the procedure discussed in this work to cover the heavy quark flavors, and thus obtain a physical basis with more degrees of freedom. § ACKNOWLEDGEMENTS This work was supported under the European Union’s Horizon 2020 research and innovation programme by the European Research Council (ERC, grant agreement No. ERC-2018-ADG-835105 YoctoLHC) and by the STRONG-2020 project (grant agreement No. 824093). 
This work was also supported by the Academy of Finland, the Centre of Excellence in Quark Matter (projects 346324 and 346326), projects 321840 (T.L., M.T.), project 308301 (H.P., M.T.), and projects 338263 and 346567 (H.M.). Views and opinions expressed are, however, those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
http://arxiv.org/abs/2307.03329v1
20230706231349
Facial Landmark Detection Evaluation on MOBIO Database
[ "Na Zhang" ]
cs.CV
[ "cs.CV" ]
Facial Landmark Detection Evaluation on MOBIO Database Na Zhang Na Zhang is with Lane Department of Computer Science and Electrical Engineering at West Virginia University, Morgantown, WV 26506-6109. MOBIO is a bi-modal database that was captured almost exclusively on mobile phones. It aims to improve research into deploying biometric techniques to mobile devices. Research has shown that face and speaker recognition can be performed in a mobile environment. Facial landmark localization aims at finding the coordinates of a set of pre-defined key points for 2D face images. A facial landmark usually has specific semantic meaning, e.g. nose tip or eye centre, which provides rich geometric information for other face analysis tasks such as face recognition, emotion estimation and 3D face reconstruction. Most facial landmark detection methods adopt still face databases, such as 300W, AFW, AFLW, or COFW, for evaluation, but seldom use mobile data. Our work is the first to perform a facial landmark detection evaluation on mobile still data, i.e., face images from the MOBIO database. About 20,600 face images have been extracted from this audio-visual database and manually labeled with 22 landmarks as the groundtruth. Several state-of-the-art facial landmark detection methods are adopted to evaluate their performance on these data. The result shows that the data from the MOBIO database are quite challenging. This database can serve as a new, challenging benchmark for facial landmark detection evaluation. Facial landmark detection, detection performance, deep learning § INTRODUCTION The mobile biometrics database, MOBIO <cit.>, is an audio-visual database captured almost exclusively using mobile phones. It was collected from 152 participants (100 males and 52 females) between August 2008 and July 2010 at six different sites in five different countries, with both native and non-native English speakers. This mobile phone database consists of over 61 hours of audio-visual data with 12 distinct sessions usually separated by several weeks. One special point is that the acquisition device is given to the user, rather than being in a fixed position, which makes this database unique: the device is used in an interactive and uncontrolled manner. The MOBIO database provides a challenging test-bed for face verification, speaker verification, and bi-modal verification. Facial landmark detection, also known as face alignment or facial landmark localization, is a mature field of research. In recent years, facial landmark detection has become a very active area, due to its importance to a variety of image and video-based face analysis systems, such as face recognition <cit.>, facial expression analysis <cit.>, human-computer interaction, video games and 3D face reconstruction <cit.>. Hence, accurate face landmarking and facial feature detection is an important intermediary step for many subsequent face processing operations that range from biometric recognition to the understanding of mental states, and it has an impact on subsequent tasks focused on the face, such as coding, face recognition, expression and/or gesture understanding, gaze detection, animation, face tracking, etc.
Since face alignment is essential to many face applications, the requirement for the efficiency of facial landmark detection becomes higher and higher, especially when more face images and videos captured in the wild appear. Hence, the large visual variations of faces, such as occlusions, large pose variations and extreme lightings, impose great challenges for face alignment in real world applications. For facial landmark detection evaluation, still face images, like 300W, AFW, AFLW, are universally used. However, mobile face data, e.g., the MOBIO database, is seldomly adopted for facial landmark evaluation so far. In this work, we try to perform facial landmark detection on the mobile still face data using up-to-date methods, and check their performance on this type of faces. A total of 20,600 still face images are extracted from MOBIO database and labelled manually with 22 facial feature points as groundtruth. Seven state-of-the-art facial landmark localization methods are chosen to perform facial landmark detection on these face images. And several measure metrics, e.g., Normalized Mean Error (NME), Cumulative Error Distribution (CED), Area-Under-the-Curve (AUC) and failure rate, are calculated or drawn for evaluation. The experimental result shows that these mobile still face images are pretty challenging for existing facial landmark detection technology and could be a new database for facial landmark localization. This evaluation could establish baseline performance for the MOBIO mobile face images. The contributions of our work includes: * generate a still mobile face database with a total of 20,600 images based on video-visual database MOBIO with 22 manually labelled facial landmarks as groundtruth; * adopt seven state-of-the-art facial landmark detection methods to evaluate their performance on these 20,600 face images; * the result shows that the mobile faces in MOBIO is pretty challenging which can be used as a new database for facial landmark detection evaluation in mobile condition. This paper is organized as follows. In section <ref>, we briefly describe facial landmark detection technique. In section <ref>, the still faces based on MOBIO are generated and labeled with 22 facial landmarks. Our approach procedure is given in section <ref>. And experimental results are provided in section <ref>. In section <ref>, some interesting discussion and conclusions are drawn. § FACIAL LANDMARK DETECTION In this section, we talk about some basic information about facial landmark detection, the challenges it faces, several categories of detection methods, types of facial points and measure metrics. §.§ What is Facial Landmark Detection? Facial landmark detection, or facial landmark localization, or face alignment, is to automatically localize a set of pre-defined semantic key points including eyes, nose, mouth and other points on the 2D face images. A facial landmark usually has specific semantic meaning, e.g. nose tip or eye center, which provides rich geometric information for other face analysis tasks such as face recognition <cit.>, emotion estimation <cit.> and 3D face reconstruction <cit.>. 
It is a fundamental problem in computer vision study and an essential initial step for a number of research areas, and plays a key role in many face processing applications, including head pose estimation <cit.>, facial expression analysis and emotion recognition <cit.>, face attribute analysis <cit.>, face alignment in 2D <cit.> and 3D (e.g., frontalization <cit.>, face 3D modeling, video games, multimodal sentiment analysis <cit.>, person identification, and, of course, face recognition (see, e.g., Sun et al. <cit.> and many others). §.§ What is the Challenge? Due to its relevance to many facial analysis tasks, facial landmark detection has attracted increasing interests in the past couple of years. It is a well-researched problem with large amounts of annotated data, and impressive progress has been made too. Current methods could provide reliable results for near-frontal face images <cit.>. Thanks to the successive developments in this area of research during the past decades, facial landmark localization can be performed very accurately in constrained scenarios, even using traditional approaches such as Active Shape Model (ASM) <cit.>, Active Appearance Model (AAM) <cit.> and Constrained Local Model (CLM) <cit.>. As the rapid development of deep learning technology, facial landmark detection gains a pretty good performance in unconstrained environment. Though great strides have been made in this field, facial landmark detection is particularly daunting considering the real-world, unconstrained imaging conditions. In an uncontrolled setting, face is likely to have large out-of-plane tilting, occlusion, illumination and expression variations. Robust facial landmark detection remains a formidable challenge in the presence of partial occlusion and large head pose variations. Images often portray faces in myriads of poses, expressions, occlusions and more, any of which can affect landmark appearances, locations or even presence. Therefore, it is still a challenging problem for localizing landmarks in face images with partial occlusions or large appearance variations due to illumination conditions, poses, and expression changes. §.§ Categories of Methods In general, existing facial landmark detection methods can be divided into two categories: (1) traditional approaches, e.g., ASM <cit.> and AAM <cit.> based methods, which fit a generative model by global facial appearance; (2) cascade regression based methods, which try to estimate the facial landmark positions by a sequence of regression models. In recent year, deep learning based cascade regression models have performed robust facial landmark localization using deep neural networks. ASM and AAM based methods. This kind of methods is traditional approaches, which usually perform accurately in constrained scenarios. They rely on a generative PCA-based shape model. However, these methods require expensive iterative steps and rely on good initialization. The mean shape is often used as the initialization, which may be far from the target position and hence inaccurate. Cascade regression based methods. In cascade regression framework, a set of weak regressors are cascaded to form a strong regressor <cit.>. It tries to obtain the coarse location first, and the following steps are to refine the initial estimate, yielding more accurate results. Cascade regression directly positions facial landmarks on their optimal locations based on image features. 
The shape update is achieved in a discriminative way by constructing a mapping function from robust shape related local features to shape updates. However, these methods need to train individual systems for each group of the landmarks, the computational burden grows proportional to the group numbers and cascade levels. For example, the cascaded Convolutional Neural Network (CNN) method <cit.> needs to train 23 individual CNN networks. However, the capability of cascaded regression is nearly saturated due to its shallow structure. After cascading more than four or five weak regressors, the performance of cascaded regression is hard to improve further <cit.>. More recently, as deep neural networks have been put forward as a more powerful alternative in a wide range of computer vision and pattern recognition tasks, facial landmark localization gains large development too. Different network types have been explored, such as Convolutional Neural Network (CNN), Auto-Encoder Network and Recurrent Neural Network, to perform robust facial landmark localization. In our work, most of methods adopted belong to deep learning based models, such as Wing loss based method WingLoss <cit.>. §.§ Types of Facial Landmarks Existing facial landmark detection methods can figure out different numbers of facial feature points, e.g., 5, 6, 16, 19, 21, 22, 29, 49, 68, etc. Figure <ref> gives several typical facial landmarks. Figure <ref>(a) consists five feature points(i.e., left eye center, right eye center, nose tip, left mouth corner, and right mouth corner). Figure <ref>(b) is a face with six points with one more landmark, mouth center, than Figure <ref>. Besides feature points of eye area, nose, and mouth, Figure <ref>(c) considers five face contour landmarks. Figure <ref>(d) and (e) share similar landmarks, the only difference is Figure <ref>(e) have two extra points on two ears. Figure <ref>(f) (i) provide more points to describe geometric information of face. §.§ Landmark Performance and Evaluation Metrics There are two different metrics to evaluate landmark detection performance, task-oriented performance and ground-truth based localization performance. For task-oriented performance, one can measure the impact of the landmark detection accuracy on the performance scores of a task. For ground-truth based localization performance, a straightforward way is to use manually annotated ground-truths. In practice, ground-truth based localization performance is commonly used in facial landmark detection for thorough analysis. If the ground-truth positions are available, the localization performance can be expressed in terms of the Normalized Mean Error (NME), Cumulative Error Distribution (CED) curve, Area-Under-the-Curve (AUC) and failure rate. Normalized Mean Error (NME) is a primary metric in facial landmark detection evaluation. It is calculated first by the distances between the estimated landmarks and the groundtruths, and then normalized with respect to the inter-ocular distance, i.e. Euclidean distance between two eye centres, the distance of outer or inner corners of the eyes. The mean error has three types: landmark-wise, sample-wise or overall. Landmark-wise face alignment error is first normalized in the following way to make it scale invariant: e_x = x̂-x^GT/D_IOD where x̂-x^GT is the Euclidean distance between the estimated location x̂ and the true location x^GT. D_IOD is the inter-ocular distance (IOD). 
Normalizing landmark localization errors by dividing by the IOD makes the performance measure independent of the actual face size or the camera zoom factor. Sample-wise mean error can be calculated as 1/n∑^n_i=1x̂_i - x_i^GT/D_IOD, where n is the number of facial landmarks involved in the evaluation. The error is normalized by the distance of the outer or inner corners of the eyes. The NME can be averaged over all the landmarks to produce a global precision figure, which is an overall mean error. In recent years, with the rapid progress of face alignment, most of the recent approaches report an error level of e at around 0.05 or smaller, which is close to human performance. Using the NME is very straightforward and intuitive given its single-value form. However, this measure is heavily impacted by the presence of some big failures such as outliers, in particular when the average error level is very low. In other words, the mean error measure is very fragile even if there are just a few images with big errors. Thus, though the mean error is widely used for face alignment evaluation <cit.>, it does not provide a big picture of how the errors occur, e.g., a few big alignment errors vs. many small inaccuracies. Since using the overall mean error as an evaluation criterion is too sensitive to big erroneous samples, the Cumulative Error Distribution (CED) curve and the Area-Under-the-Curve (AUC) are adopted as two better metrics (see Figure <ref>). The CED curve is the cumulative distribution function of the normalized error, as shown by the blue line in Figure <ref>. The x-axis is the error value, and the y-axis is the fraction of test faces. In terms of outlier handling, CED is a better choice. However, it is not intuitive given its curve representation. It is also hard to use in sensitivity analysis. Therefore, the AUC is adopted. It is calculated from the CED curve. The AUC stands for the value of the area under the CED curve. It is defined as: AUC_α=∫^α_0 f(e)de where e is the normalized error, f(e) is the cumulative error distribution function and α is the upper bound that is used to calculate the definite integration. In Figure <ref>, α is 0.06 (red line). Given the definition of the CED function, the value of AUC_α lies in the range [0, α], the area with yellow tilted lines. The value of AUC_α will not be influenced by points with error bigger than α. Landmark detection statistics can be characterized by the exceedance probability of the localization error. A general agreement in the literature is that e < 0.1 is an acceptable error criterion, so that a landmark is considered detected whenever it is found within a proximity of one tenth of the inter-ocular distance from its true position. In our experiment, α is set to 0.08 and 0.1. The failure rate is calculated with a threshold for the normalized mean error. It computes the fraction of test faces whose error value is larger than the threshold. In our experiment, the thresholds are set to 0.1 and 0.08. §.§ Common Databases Used Annotated databases are extremely important in computer vision. Therefore, a number of databases containing faces with different facial expressions, poses, illumination and occlusion variations have been collected in the past.
Most evaluation experiments are conducted on commonly used benchmark datasets, such as the 300 Faces in the Wild (300W) <cit.>, Annotated Facial Landmarks in the Wild (AFLW) <cit.>, the Annotated Faces in-the-wild (AFW) <cit.>, the Labeled Face Parts in-the-wild (LFPW) <cit.>, HELEN <cit.>, the Caltech Occluded Faces in the Wild (COFW) <cit.>. The aforementioned databases, cover large variations including: different subjects, poses, illumination, occlusion, etc. The 300 Faces in the Wild (300W) <cit.> dataset is a commonly used benchmark for facial landmark localization problem. It contains near-frontal face images in the wild and provides 68 semi-automatically annotated points for each face. It is created from existing datasets, including LFPW <cit.>, AFW <cit.>, HELEN <cit.>, XM2VTS <cit.> and IBUG <cit.>. The 300W training set contains 3,148 training images from AFW, LFPW and HELEN. The common subset of 300W contains 554 test images from LFPW and HELEN. The challenging subset of 300W contains 135 test images from IBUG. The fullset of 300W is the union of the common and challenging subset. The 300W test set contains 600 test images which are provided officially by the 300W competition <cit.> and said to have a similar distribution to the IBUG dataset. IBUG subset is extremely challenging due to the large variations in face pose, expression and illumination. As the rapid progresses of the study in facial landmark localization in recent years, several methods have reported close-to-human performance on this dataset of which the images that are acquired from unconstrained environments. Figure <ref> shows annotated (a) indoor and (b) outdoor images of 300W. The Annotated Faces in-the-wild (AFW) <cit.> database is a popular benchmark for facial landmark detection, containing 205 images with 468 faces. A detection bounding box as well as up to 6 visible landmarks are provided for each face. An example of an image taken from the AFW database along with the corresponding annotated landmarks is depicted in Figure <ref> (c). The Annotated Facial Landmarks in-the-wild (AFLW) <cit.> is a very challenging dataset that has been widely used for benchmarking facial landmark localization algorithms. It contains 25,000 images of 24,686 subjects downloaded from Flickr. The images contain a wide range of natural face poses in yaw (from -90 to 90) and occlusions. Facial landmark annotations are available for the whole database. Each annotation consists of 21 landmark points. The AFLW-Full protocol contains 20,000 training and 4,386 test images, and each image has 19 manually annotated facial landmarks. Figure <ref> (f) depicts an annotated image from AFLW. The COFW <cit.> dataset contains in-the-wild face images with heavy occlusions, including 1,345 face images for training and 507 face images for testing. For each face, 29 landmarks and the corresponding occlusion states are annotated in the COFW dataset. An example of an image taken from the COFW database along with the corresponding annotated landmarks is depicted in Figure <ref> (g). The Labeled Face Parts in-the-wild (LFPW) <cit.> database contains 1,432 images downloaded from google.com, fickr.com, and yahoo.com. The images contain large variations including pose, expression, illumination and occlusion. The provided ground truth consists of 29 landmark points. An example of an image taken from the LFPW database along with the corresponding annotated landmarks is depicted in Figure <ref> (d). 
The HELEN <cit.> database consists of 2,330 annotated images collected from the Flickr. The images are of high resolution containing faces of size sometimes greater than 500*500 pixels. The provided annotations are very detailed and contain 194 landmark points. Figure <ref> (e) depicts an annotated image from HELEN. § FACE IMAGES BASED ON MOBIO DATABASE The MOBIO is an audio-video database of human faces and voice captured almost exclusively on mobile phones. This database is originally used to evaluate the performance of face and speaker recognition in the context of a mobile environment <cit.>. So far, facial landmark detection technique has been seldomly evaluated in the context of a mobile environment. So it is meaningful to perform facial landmark detection on the mobile faces. The mobile environment was chosen as it provides a realistic and challenging test-bed for face points detection techniques to operate. For instance, the environment is quite complex and there is limited control over the illumination conditions and the pose of the subject for the video. In our work, we extract still face frames from the MOBIO videos and generate a face images database. This section briefly describes the MOBIO database first, and then introduces the face images extracted from the MOBIO video data, finally talks about how to generate the groundtruth of the faces with 22 feature points. §.§ MOBIO Database MOBIO <cit.> database is an unique diverse bi-modal database (audio + video) that was captured almost exclusively on mobile phones. It consists of over 61 hours of audio-visual data with 12 distinct sessions usually separated by several weeks. There are a total of 192 unique audio-video samples for each of the 152 participants. Female-Male ratio is 1:2. This data was captured at 6 different sites over one and a half years with people speaking English. Capturing the data on mobile phones makes this database unique because the acquisition device is given to the user, rather than being in a fixed position. This means that the microphone and video camera are no longer fixed and are now being used in an interactive and uncontrolled manner. This database was captured almost exclusively using mobile phones and aims to improve research into deploying biometrics techniques to mobile devices. The database was acquired primarily on mobile phones. 12 sessions were captured for each participant. 6 sessions for Phase I and 6 sessions for Phase II. In Phase I, the participants are asked to answer a set of questions which are classified as set responses, read speech from a paper, and free speech. Each session consists of 21 questions: 5 pre-defined set response questions, 1 read speech question and 15 free speech questions. Phase II consists of 11 questions with the question types ranging from short response questions, set speech, and free speech. All videos are recorded using two mobile devices: one mobile phone (NOKIA N93i) and one laptop computer (standard 2008 MacBook). The laptop was only used to capture part of the first session. The first session consists of data captured on both the laptop and the mobile phone. The publicly-available mobile phone database MOBIO (Source download link: https://www.idiap.ch/dataset/mobio) presents several challenges, including: (1) high variability of pose and illumination conditions, even during recordings, (2) high variability in the quality of speech, and (3) variability in the acquisition environments in terms of acoustics as well as illumination and background. 
§.§ Extracted Face Images and Facial Landmark Groundtruth Based on the video data in MOBIO, a few face frames are extracted for each subject. A total of 20,600 still face images with a size of 640*480 are generated. The average number of images for each subject is about 136. Figure <ref> gives several face samples. Since all images are captured in unconstrained conditions, they contain big variations in head pose, illumination and occlusion (e.g., hair, glasses), which makes this a challenging database. Much work has been done to generate the groundtruth of these face images by manually labeling 22 facial feature points. Figure <ref> shows the 22 facial landmarks of the face, including 4 points describing the brows (left brow left corner, left brow right corner, right brow left corner, right brow right corner), 10 points describing the eyes (left eye left corner, left eye top center, left eye right corner, left eye bottom center, left eye center, right eye left corner, right eye top center, right eye right corner, right eye bottom center, right eye center), three points describing the nose (nose tip, nose left, nose right), and five points describing the mouth (mouth left corner, mouth upper lip center, mouth right corner, mouth bottom lip center, and mouth center). In order to label these faces conveniently and efficiently, a labeling tool named 'Face Label App', shown in Figure <ref>, was developed, which runs on the Windows system. The users need to load face images first, and then click the 22 facial feature points on each face in a pre-defined order. The app automatically captures the position (x, y values) of each facial landmark when the mouse is moved onto it and clicked. All facial landmark position information is saved in .txt files. §.§ Preprocess Mobile Still Face Data Some facial landmark detection methods are able to handle faces of any size, like DAC-CSR <cit.>, PA-CNN Model <cit.>, CE-CLM <cit.>, etc. However, some detection methods, such as Tweaked CNN <cit.>, WingLoss <cit.>, and ECT <cit.>, require their input to be square faces with a fixed size (e.g., 256*256). Hence, face detection, cropping and resizing are executed to preprocess the faces for these methods. The MTCNN <cit.> model, which is a very effective face detector, is adopted in our work for face detection. Based on the bounding box of each face, all detected faces are cropped into a square shape and then resized to a fixed size. § OUR APPROACH In this section, we choose several facial landmark detection methods (Tweaked CNN <cit.>, PA-CNN Model <cit.>, WingLoss <cit.>, CE-CLM <cit.>, ECT <cit.>, TCDCN <cit.>, DAC-CSR <cit.>) to perform the face alignment task and analyze their performance on the mobile still faces and other commonly used face databases like 300W, AFW, AFLW, COFW. Normalized mean error (NME), Cumulative Error Distribution curve (CED), Area Under the error Curve (AUC) and failure rate are adopted as our measure metrics for evaluation. §.§ Facial Landmark Detection Methods We choose seven facial landmark detection methods to detect face feature points on the mobile still face images. They are Tweaked CNN <cit.>, WingLoss <cit.>, DAC-CSR <cit.>, PA-CNN Model <cit.>, CE-CLM <cit.>, ECT <cit.>, and TCDCN <cit.>. Most of them are deep learning based methods. Among these deep learning methods, Tweaked CNN <cit.> detects 5 facial points, WingLoss <cit.> and DAC-CSR <cit.> detect 19 points, and the others detect 68 points. Some of these methods can perform facial landmark detection directly on face images of any size.
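For the methods that instead require fixed-size square inputs, the detect-crop-resize preprocessing described above can be sketched as follows; this assumes the facenet-pytorch implementation of MTCNN and a placeholder file name, and is illustrative rather than the exact pipeline used in this work:

from facenet_pytorch import MTCNN
from PIL import Image

detector = MTCNN(keep_all=False)   # keep only the most confident face

def crop_square_face(image_path, out_size=256):
    img = Image.open(image_path).convert("RGB")
    boxes, _ = detector.detect(img)            # boxes given as [x1, y1, x2, y2]
    if boxes is None:
        return None                            # no face detected
    x1, y1, x2, y2 = boxes[0]
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = max(x2 - x1, y2 - y1) / 2.0         # expand the box to a square
    box = tuple(int(round(v)) for v in (cx - half, cy - half, cx + half, cy + half))
    return img.crop(box).resize((out_size, out_size))

face_256 = crop_square_face("mobio_subject_frame.png")   # placeholder file name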
Tweaked CNN <cit.>, WingLoss <cit.> and ECT <cit.> need face cropping with size of 256*256 before facial landmark detection. MTCNN <cit.> is adopted in our work to do face detection and cropping due to its efficiency. §.§ Methods with 5 Points Figure <ref> (a) shows the five facial landmarks (left eye center, right eye center, mouth left corner, mouth right corner, nose tip) that are detected by Tweaked CNN <cit.> models. §.§.§ Tweaked CNN Model Based on the analysis that the features produced at intermediate layers of a convolutional neural network can be trained to regress facial landmark coordinates, face images can be partitioned in an unsupervised manner into subsets containing faces in similar poses (i.e., 3D views) and facial properties (e.g., presence or absence of eye-wear). Therefore, Tweaked CNN (TCNN) <cit.> specializes in regressing the facial landmark coordinates of faces in specific poses and appearances. It is shown to outperform existing landmark detection methods in an extensive battery of tests on the AFW, ALFW, and 300W benchmarks. §.§ Methods with 19 Points Figure <ref> (b) shows the 19 facial landmarks containing 6 points on brow (left brow left corner, left brow center, left brow right corner, right brow left corner, right brow center, right brow right corner), 6 points on eyes (left eye left corner, left eye center, left eye right corner, right eye left corner, right eye center, right eye right corner), 3 points on nose (nose left, nose tip, nose right), 3 points on mouth (mouth left corner, mouth center, mouth right corner), and 1 point on chin (lower chin center) used by WingLoss <cit.> and DAC-CSR <cit.> models. §.§.§ WingLoss Model WingLoss <cit.> method presents a piece-wise loss function, namely Wing loss, for robust facial landmark localization in the wild with Convolutional Neural Networks (CNNs). The loss function pays more attention to small and medium range errors and amplifies the impact of errors from the interval (-w,w) by switching from L_1 loss to a modified logarithm function. The experimental results obtained on the AFLW (AFLW-Full protocol) and 300W datasets demonstrate the merits of the Wing loss function, and prove the superiority of the proposed method over the state-of-the-art approaches. §.§.§ DAC-CSR Model DAC-CSR <cit.>, namely Dynamic Attention-Controlled Cascaded Shape Regression architecture, is for robust facial landmark detection on unconstrained faces. It divides facial landmark detection into three cascaded sub-tasks: face bounding box refinement, general cascaded shape regression and attention-controlled cascaded shape regression. The first two stages refine initial face bounding boxes and output intermediate facial landmarks. Then, an online dynamic model selection method is used to choose appropriate domain-specific cascaded shape regressions for further landmark refinement. The key innovation of the DAC-CSR is the fault-tolerant mechanism, using fuzzy set sample weighting, for attention-controlled domain-specific model training. It uses two challenging face datasets, AFLW and COFW, to evaluate the performance of the DAC-CSR architecture. §.§ Methods with 68 Points Figure <ref> (c) shows the 68 facial landmarks including 17 contour landmarks and 51 inner landmarks. As shown in Figure <ref>, number 1-17 of points denote the contour landmarks and number 18-68 points denote the inner landmarks. §.§.§ PA-CNN Model PA-CNN Model <cit.> is short for Part-Aware Deep Convolutional Neural Network. 
It is an end-to-end regression framework for facial landmark localization. It encodes images into feature maps shared by all landmarks. Then, these features are sent into two independent sub-network modules to regress contour landmarks and inner landmarks, respectively. It incorporates the contour landmark sub-network and the inner landmark sub-network into a unified architecture. Contrary to others, this method does not involve multiple individual models or require auxiliary labels. More importantly, the framework treats landmarks on different facial part differently which helps to learn discriminative features. This method can directly detect landmarks on original images. It does not need face detection, cropping, and resizing. Extensive evaluations are conducted on 300W benchmark dataset. §.§.§ CE-CLM Model Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regression-based approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. CE-CLM <cit.> introduced a member of CLM family, Convolutional Experts Constrained Local Model (CE-CLM), in which it uses a local detector called Convolutional Experts Network (CEN). CEN brings together the advantages of neural architectures and mixtures of experts in an end-to-end framework. It is able to learn a mixture of experts that capture different appearance prototypes without the need of explicit attribute labeling, and is able to deal with varying appearance of landmarks by internally learning an ensemble of detectors, thus modeling landmark appearance prototypes. This is achieved through a Mixture of Expert Layer, which consists of decision neurons connected with non-negative weights to the final decision layer. Convolutional Experts Constrained Local Model (CE-CLM) algorithm consists of two main parts: response map computation using Convolutional Experts Network and shape parameter update. CE-CLM is able to perform well on facial landmark detection and is especially accurate and robust on challenging profile images. §.§.§ ECT Model The three-step framework named ECT (Estimation-Correction-Tuning) <cit.> is an effective and robust approach for facial landmark detection by combining data- and model-driven methods. Firstly, a Fully Convolutional Network (FCN) which is a data-driven method is trained to compute response maps of all facial landmark points, which makes full use of holistic information in a facial image for global estimation of facial landmarks. After that, the maximum points in the response maps are fitted with a pre-trained Point Distribution Model (PDM) to generate the initial facial shape. This model-driven method is able to correct the inaccurate locations of outliers by considering the shape prior information. Finally, a weighted version of Regularized Landmark Mean-Shift (RLMS) is employed to fine-tune the facial shape iteratively. This Estimation-Correction-Tuning process perfectly combines the advantages of the global robustness of data-driven method (FCN), outlier correction capability of model-driven method (PDM) and non-parametric optimization of RLMS. The method is able to produce satisfying detection results on face images with exaggerated expressions, large head poses, and partial occlusions. 
§.§.§ TCDCN Model Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, TCDCN <cit.> investigates the possibility of improving detection robustness through multi-task learning. This tasks-constrained deep model can facilitate learning convergence with task-wise early stopping. It optimizes facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. §.§ Measure Metric In our experiment, we adopt four commonly used measure metrics: Normalized Mean Error (NME), Cumulative Error Distribution Curve (CED), Area Under the error Curve (AUC) and failure rate. The Normalized Mean Error is calculated using the Euclidean distance (L_2 norm) between the estimated points and the groundtruth, normalized by the distance between the two outer eye corners. For each point, the landmark-wise normalized error is calculated by: e_i = ‖x_(i)^e - x_(i)^g‖_2/d_io where e_i is the i-th error value, x_(i)^e is the i-th estimated point, x_(i)^g is the i-th ground truth, and d_io is the IOD, the inter-ocular distance, i.e. the Euclidean distance between the two outer eye corners. For every face, the sample-wise NME is calculated by averaging the normalized errors of all facial points: e = 1/n∑_i=1^n e_i where e is the error value of the face and n is the number of facial landmarks. Since the groundtruth of MOBIO faces contains 22 landmarks, while the facial landmark detection methods we choose detect different numbers of facial feature points (i.e., 5, 19, and 68), in our experiment we use the overlapping facial landmarks between the groundtruth points and the detected points. As shown in Figure <ref>, there are 5 (out of 5), 16 (out of 19), and 15 (out of 68) overlapping points (blue dots) used for evaluation. The overall normalized mean error is computed by: error = 1/m∑_i=1^m e_i where error is the overall normalized mean error, e_i here denotes the error value of the i-th face, and m is the number of faces. The CED is the cumulative distribution function of the normalized errors, which evaluates how the fraction of face images changes as the error threshold changes. It is a better way to handle outliers. In our experiment, we set the error value threshold to 0.08 and 0.1. We partition the error value range [0, 0.08] or [0, 0.1] into 80 or 100 segments with an equal step size of 0.001. For each error value point X, the fraction of face images whose error value is <= X is calculated. The AUC is the area under the CED error curve: AUC_α = ∫_0^α f(e)de where e is the normalized error, f(e) is the cumulative error distribution function, and α is the upper bound used to calculate the definite integration. In our experiment, α is set to 0.08 and 0.1. The failure rate counts the fraction of faces whose error value is greater than the error threshold, again 0.08 and 0.1 in our experiment. § EXPERIMENTAL RESULTS In this section, we first describe the details of the experimental implementation, including details of each method, then give a thorough evaluation of these methods on the generated mobile face images, and finally provide a thorough comparison of these methods on other databases, e.g., 300W, AFW, AFLW, COFW. §.§ Implementation Details Seven facial landmark detection methods are selected. Table <ref> gives the detailed experiment information of these models. Some models <cit.> can deal with original images directly, and others require square face inputs <cit.> with a fixed size. Most models adopt MTCNN as the face detector before facial landmark detection.
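For concreteness, the sketch below shows how the sample-wise NME, the CED/AUC and the failure rate defined in the Measure Metric subsection can be computed; the array shapes, variable names and random toy data are assumptions for illustration only, not the evaluation code actually used in this work:

import numpy as np

def sample_nme(pred, gt, iod):
    """Sample-wise NME: mean point-to-point error per face, divided by the IOD."""
    point_err = np.linalg.norm(pred - gt, axis=-1)     # (m, n) point errors
    return point_err.mean(axis=1) / iod                # (m,) per-face NME

def ced_auc_failure(errors, alpha=0.08, step=0.001):
    """CED values on [0, alpha], AUC_alpha = integral of the CED, failure rate."""
    thresholds = np.arange(0.0, alpha + step, step)
    ced = np.array([(errors <= t).mean() for t in thresholds])
    auc = np.trapz(ced, thresholds)                    # lies in [0, alpha]
    failure = (errors > alpha).mean()                  # fraction of failed faces
    return ced, auc, failure

rng = np.random.default_rng(0)                         # toy data for illustration
gt = rng.uniform(100, 500, size=(100, 15, 2))          # 100 faces, 15 landmarks
pred = gt + rng.normal(0.0, 3.0, size=gt.shape)
iod = np.full(100, 60.0)                               # fake inter-ocular distances
ced, auc, fail = ced_auc_failure(sample_nme(pred, gt, iod), alpha=0.10)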
Different models can detect different numbers of faces. So during testing, only the visible landmarks are involved in the evaluation. For each comparison we use the biggest set of overlapping landmarks. For example, Tweaked CNN <cit.> detects 5 landmarks, and the biggest overlapping set with groundtruth (22 landmarks) is 5. WingLoss <cit.> and DAC-CSR <cit.> detect 19 landmarks, and finally 16 landmarks are used. PA-CNN <cit.>, CE-CLM <cit.>, ECT <cit.>, and TCDCN <cit.> adopt 15 landmarks in 68 for evaluation. §.§ Evaluation on Mobile Still Face Images Seven models are performed facial landmark detection on our generated face data based on MOBIO. Table <ref> provides the normalized mean error, AUC and failure rate when the thresholds are set to 0.08 and 0.1. One can see WingLoss <cit.> gains the lowest mean error and greatest AUC as the threshold is equal to 0.08 and 0.1. TCDCN <cit.> gains the greatest mean error and the lowest AUC as the threshold is equal to 0.08 and 0.1. CE-CLM <cit.> obtains the smallest failure rate when the failure rate is defined by the percentage of test images with more than 8% detection error. ECT <cit.> obtains the smallest failure rate when the failure rate is defined by the percentage of test images with more than 10% detection error. Figure <ref> gives the CED curve of all models. Figure <ref>(a) shows the curves with threshold as 0.08, and (b) as 0.1. One can see the performance in Figure <ref>(a) is similar to those in Figure <ref>(b). Although there is much ongoing research in computer vision approaches for face alignment, varying evaluation protocols, lack descriptions of critical details and the use of different experimental setting or datasets makes it hard to shed light on how to make an assessment of their cons and pros, and what are the important factors influential to performance. We try our best to make a through performance comparison of the selected methods on our data and other databases, such as 300W, AFLW, AFW, and COFW. Tweaked CNN <cit.> computes NME values on AFW and AFLW by normalizing the mean distance between predicted to ground truth landmark locations to a percent of the inter-ocular distance. It detects 5 facial feature points for each dataset. Tweaked CNN also detects 49 and 68 points on 300W and calculates AUC and failure rate with threshold as 0.1. For 49 points, AUC is 0.817 and failure rate is 1.17%. For 68 points, the AUC is 0.771 and failure rate is 1.95%. Both values of AUC are greater than that on our data (39.29%), and both failure rates are less than that on our data(9.46%). WingLoss <cit.> is evaluated on AFLW and 300W via calculating NME. For the AFLW dataset, AFLW-Full protocol is adopted, and the width (or height) of the given face bounding box as the normalization term. 1.65% is gained finally, which is much lower than that on our data. For 300W dataset, the NME uses the inter-pupil distance as the normalization term, and the face images involved in the 300W dataset have been semi-automatically annotated by 68 facial landmarks. The final size of the test set is 689. The test set is further divided into two subsets for evaluation, i.e. the common and challenging subsets. The common subset has 554 face images from the LFPW and HELEN test subsets and the challenging subset constitutes the 135 IBUG face images. 3.27% is gained finally on Common set which is lower than our faces (3.88%) DAC-CSR <cit.> is evaluated on AFLW and COFW. For AFLW, 19 landmarks per image without the two ear landmarks are opted. 
And two protocols (i.e., AFLW-full, AFLW-frontal) are used. AFLW-full uses 4,386 images for test and AFLW-frontal uses 1,165. The performance is measured in terms of the average error, normalized by face size. 2.27% and 1.81% are obtained on AFLW-full and AFLW-frontal, which are lower than that on MOBIO (4.68%). PA-CNN <cit.> evaluates the alignment accuracy on 300w by the mean error, which is measured by the distances between the predicted landmarks and the groundtruth, normalized by the inter-pupil distance. The 300w is divided into three sets, i.e., Common, Challenging and Fullset, and 4.82%, 9.80%, and 5.79% mean errors are gained. In them, the mean error on Common set is lower than ours (5.72%). CE-CLM <cit.> is also evaluated on the typical split Common set of 300w by NME, and gains 3.14% and 2.30% with outline (68) and without outline (49) separately, which are much lower than ours (4.75%). ECT <cit.> evaluated its performance on four databases (300w, AFLW, AFW, and COFW). NME (%), AUC, and/or Failure Rate (%) are calculated. The evaluation on 300W consists of two parts. The first part is conducted on the 300W test set provided officially by the 300W competition. The second part of the evaluation is performed on the fullset of 300W which is widely used in the literature. The error is normalized by the distance of outer corners of the eyes. Failure rate is calculated with the threshold set to 0.08 for the normalized point-to-point error. The AUC and failure rate are 45.98% and 3.17% with 68 points, which are higher than ours (38.23%, 1.08%), and 58.26% and 1.17% with 51 points on the test set of 300W competition which are higher too. And the NME are 4.66%, 7.96%, and 5.31% on the Common subset, Challenging subset, and Fullset of 300W. In them, the mean error on Common set is lower than ours (5.07%). The evaluation on AFLW-PIFA is performed by NME, which are 3.21% and 3.36% with 21 and 34 points. Both are lower than ours(5.07%). ECT <cit.> picked out 6 visible landmarks for evaluation on AFW. For NME, the normalized distance is the square root of the bounding box size provided in the AFW dataset. Finally, 2.62% is obtained, which is lower too (5.07%). TCDCN <cit.> is evaluated on 300W, AFLW, AFW and COFW using NME and failure rate. The mean error is measured by the distances between estimated landmarks and the ground truths, normalizing with respect to the inter-ocular distance. Mean error larger than 10% is reported as a failure. And the NME on Common Subset, Challenging Subset, and Fullset of 300W are 4.80%, 8.60%, and 5.54%. In them, the NME on Common Subset and Fullset are lower than ours (6.58%). Based on the abovementioned comparison, one can see it is a little bit difficult to tell clearly on which database the selected method perform better due to different measure metrics, and settings. However, in most cases, our mobile face images are more challenging than existing still face images. Our face data can be a new database for facial landmark detection evaluation with 22 facial landmarks as grountruth. § DISCUSSION AND CONCLUSION MOBIO is a mobile biometrics database captured almost exclusively using mobile phones, which provides a challenging test-bed both for face verification, speaker verification, and bi-modal verification. In this paper, we generate a mobile still face database with 20,600 images based on the MOBIO database and manually label all faces with 22 facial landmarks as groundtruth. 
Seven state-of-the-art facial landmark detection methods are adopted to evaluate their performance on these 20,600 face images. A thorough analysis of the results and a comparison with other databases are given as well. The result shows that our dataset is a pretty challenging one for facial landmark detection. § ACKNOWLEDGMENTS This work was partly supported by an NSF-CITeR grant and a WV HEPC grant. The authors would like to thank the editors and the anonymous reviewers for the comments and suggestions to improve the manuscript.
http://arxiv.org/abs/2307.01692v1
20230704125524
The vanishing of the primary emission region in PKS 1510-089
[ "F. Aharonian", "F. Ait Benkhali", "J. Aschersleben", "H. Ashkar", "M. Backes", "V. Barbosa Martins", "J. Barnard", "R. Batzofin", "Y. Becherini", "D. Berge", "K. Bernloehr", "B. Bi", "M. de Bony de Lavergne", "M. Boettcher", "C. Boisson", "J. Bolmont", "J. Borowska", "M. Bouyahiaoui", "F. Bradascio", "M. Breuhaus", "R. Brose", "A. M. Brown", "F. Brun", "B. Bruno", "T. Bulik", "C. Burger-Scheidlin", "S. Caroff", "S. Casanova", "R. Cecil", "J. Celic", "M. Cerruti", "T. Chand", "S. Chandra", "A. Chen", "J. Chibueze", "O. Chibueze", "G. Cotter", "J. Damascene Mbarubucyeye", "I. D. Davids", "A. Djannati-Atai", "A. Dmytriiev", "V. Doroshenko", "K. Egberts", "S. Einecke", "J. -P. Ernenwein", "S. Fegan", "G. Fontaine", "M. Fuessling", "S. Funk", "S. Gabici", "S. Ghafourizadeh", "G. Giavitto", "D. Glawion", "J. F. Glicenstein", "P. Goswami", "G. Grolleron", "L. Haerer", "W. Hofmann", "T. L. Holch", "M. Holler", "D. Horns", "M. Jamrozy", "F. Jankowsky", "V. Joshi", "I. Jung-Richardt", "E. Kasai", "K. Katarzyski", "R. Khatoon", "B. Khelifi", "W. Klu/'zniak", "Nu. Komin", "K. Kosack", "D. Kostunin", "R. G. Lang", "S. Le Stum", "F. Leitl", "A. Lemiere", "J. -P. Lenain", "F. Leuschner", "A. Luashvili", "J. Mackey", "V. Marandon", "P. Marchegiani", "G. Marti-Devesa", "R. Marx", "A. Mehta", "M. Meyer", "A. Mitchell", "R. Moderski", "L. Mohrmann", "A. Montanari", "E. Moulin", "M. de Naurois", "J. Niemiec", "A. Priyana Noel", "P. O'Brien", "S. Ohm", "L. Olivera-Nieto", "E. de Ona Wilhelmi", "M. Ostrowski", "S. Panny", "M. Panter", "G. Peron", "D. A. Prokhorov", "G. Puehlhofer", "M. Punch", "A. Quirrenbach", "P. Reichherzer", "A. Reimer", "O. Reimer", "H. Ren", "F. Rieger", "G. Rowell", "B. Rudak", "H. Rueda Ricarte", "E. Ruiz-Velasco", "V. Sahakian", "H. Salzmann", "D. A. Sanchez", "A. Santangelo", "M. Sasaki", "F. Schuessler", "H. M. Schutte", "U. Schwanke", "J. N. S. Shapopi", "H. Sol", "A. Specovius", "S. Spencer", "L. Stawarz", "R. Steenkamp", "S. Steinmassl", "C. Steppa", "I. Sushch", "H. Suzuki", "T. Takahashi", "T. Tanaka", "R. Terrier", "N. Tsuji", "C. van Eldik", "B. van Soelen", "M. Vecchi", "J. Veh", "J. Vink", "T. Wach", "S. J. Wagner", "A. Wierzcholska", "M. Zacharias", "D. Zargaryan", "A. A. Zdziarski", "A. Zech", "S. Zouari", "N. Zywucka", "D. A. H. Buckley", "J. Cooper", "D. Groenewald" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
J. Barnard, M. Böttcher, H.M. Schutte, M. Zacharias, Email: contact.hess@hess-experiment.eu Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany Yerevan State University, 1 Alek Manukyan St, Yerevan 0025, Armenia Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands 0000-0002-2153-1818]H. Ashkar Laboratoire Leprince-Ringuet, École Polytechnique, CNRS, Institut Polytechnique de Paris, F-91128 Palaiseau, France 0000-0002-9326-6400]M. Backes University of Namibia, Department of Physics, Private Bag 13301, Windhoek 10005, Namibia Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 0000-0002-5085-8828]V. Barbosa Martins DESY, D-15738 Zeuthen, Germany Department of Physics, University of the Free State, PO Box 339, Bloemfontein 9300, South Africa 0000-0002-5797-3386]R. Batzofin Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Strasse 24/25, D 14476 Potsdam, Germany 0000-0002-2115-2930]Y. Becherini Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Department of Physics and Electrical Engineering, Linnaeus University, 351 95 Växjö, Sweden 0000-0002-2918-1824]D. Berge DESY, D-15738 Zeuthen, Germany Institut für Physik, Humboldt-Universität zu Berlin, Newtonstr. 15, D 12489 Berlin, Germany 0000-0001-8065-3252]K. Bernlöhr Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany 0000-0002-4650-1666]M. de Bony de Lavergne IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0002-8434-5692]M. Böttcher Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 0000-0001-5893-1797]C. Boisson Laboratoire Univers et Théories, Observatoire de Paris, Université PSL, CNRS, Université de Paris, 92190 Meudon, France Sorbonne Université, Université Paris Diderot, Sorbonne Paris Cité, CNRS/IN2P3, Laboratoire de Physique Nucléaire et de Hautes Energies, LPNHE, 4 Place Jussieu, F-75252 Paris, France Institut für Physik, Humboldt-Universität zu Berlin, Newtonstr. 15, D 12489 Berlin, Germany Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0003-0268-5122]M. Breuhaus Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany 0000-0002-8312-6930]R. Brose Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland University of Oxford, Department of Physics, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK 0000-0003-0770-9007]F. Brun IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Astronomical Observatory, The University of Warsaw, Al. Ujazdowskie 4, 00-478 Warsaw, Poland Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland 0000-0002-1103-130X]S. Caroff Université Savoie Mont Blanc, CNRS, Laboratoire d'Annecy de Physique des Particules - IN2P3, 74000 Annecy, France 0000-0002-6144-9122]S. Casanova Instytut Fizyki Ja̧drowej PAN, ul. 
Radzikowskiego 152, 31-342 Kraków, Poland Universität Hamburg, Institut für Experimentalphysik, Luruper Chaussee 149, D 22761 Hamburg, Germany Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany 0000-0001-7891-699X]M. Cerruti Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Centre for Space Research, North-West University, Potchefstroom 2520, South Africa Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 0000-0001-6425-5692]A. Chen School of Physics, University of the Witwatersrand, 1 Jan Smuts Avenue, Braamfontein, Johannesburg, 2050 South Africa Centre for Space Research, North-West University, Potchefstroom 2520, South Africa Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 0000-0002-9975-1829]G. Cotter University of Oxford, Department of Physics, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK 0000-0002-4991-6576]J. Damascene Mbarubucyeye DESY, D-15738 Zeuthen, Germany University of Namibia, Department of Physics, Private Bag 13301, Windhoek 10005, Namibia 0000-0002-4924-1708]A. Djannati-Ataï Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Centre for Space Research, North-West University, Potchefstroom 2520, South Africa Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Strasse 24/25, D 14476 Potsdam, Germany School of Physical Sciences, University of Adelaide, Adelaide 5005, Australia Aix Marseille Université, CNRS/IN2P3, CPPM, Marseille, France Laboratoire Leprince-Ringuet, École Polytechnique, CNRS, Institut Polytechnique de Paris, F-91128 Palaiseau, France 0000-0002-6443-5025]G. Fontaine Laboratoire Leprince-Ringuet, École Polytechnique, CNRS, Institut Polytechnique de Paris, F-91128 Palaiseau, France DESY, D-15738 Zeuthen, Germany 0000-0002-2012-0080]S. Funk Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany 0000-0002-7629-6499]G. Giavitto DESY, D-15738 Zeuthen, Germany 0000-0003-4865-7696]D. Glawion Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany 0000-0003-2581-1742]J.F. Glicenstein IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France Centre for Space Research, North-West University, Potchefstroom 2520, South Africa Sorbonne Université, Université Paris Diderot, Sorbonne Paris Cité, CNRS/IN2P3, Laboratoire de Physique Nucléaire et de Hautes Energies, LPNHE, 4 Place Jussieu, F-75252 Paris, France Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany 0000-0001-5161-1168]T. L. Holch DESY, D-15738 Zeuthen, Germany Leopold-Franzens-Universität Innsbruck, Institut für Astro- und Teilchenphysik, A-6020 Innsbruck, Austria Universität Hamburg, Institut für Experimentalphysik, Luruper Chaussee 149, D 22761 Hamburg, Germany 0000-0002-0870-7778]M. Jamrozy Obserwatorium Astronomiczne, Uniwersytet Jagielloński, ul. 
Orla 171, 30-244 Kraków, Poland Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany 0000-0003-4467-3621]V. Joshi Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany University of Namibia, Department of Physics, Private Bag 13301, Windhoek 10005, Namibia Institute of Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziadzka 5, 87-100 Torun, Poland Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 0000-0001-6876-5577]B. Khélifi Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland 0000-0003-3280-0582]Nu. Komin School of Physics, University of the Witwatersrand, 1 Jan Smuts Avenue, Braamfontein, Johannesburg, 2050 South Africa IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0002-0487-0076]D. Kostunin DESY, D-15738 Zeuthen, Germany Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Aix Marseille Université, CNRS/IN2P3, CPPM, Marseille, France Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France 0000-0001-7284-9220]J.-P. Lenain Sorbonne Université, Université Paris Diderot, Sorbonne Paris Cité, CNRS/IN2P3, Laboratoire de Physique Nucléaire et de Hautes Energies, LPNHE, 4 Place Jussieu, F-75252 Paris, France 0000-0001-9037-0272]F. Leuschner Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany 0000-0003-4384-1638]A. Luashvili Laboratoire Univers et Théories, Observatoire de Paris, Université PSL, CNRS, Université de Paris, 92190 Meudon, France 0000-0002-5449-6131]J. Mackey Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland 0000-0001-9077-4058]V. Marandon IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0001-7487-8287]P. Marchegiani School of Physics, University of the Witwatersrand, 1 Jan Smuts Avenue, Braamfontein, Johannesburg, 2050 South Africa 0000-0003-0766-6473]G. Martí-Devesa Leopold-Franzens-Universität Innsbruck, Institut für Astro- und Teilchenphysik, A-6020 Innsbruck, Austria 0000-0002-6557-4924]R. Marx Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany DESY, D-15738 Zeuthen, Germany Universität Hamburg, Institut für Experimentalphysik, Luruper Chaussee 149, D 22761 Hamburg, Germany 0000-0003-3631-5648]A. Mitchell Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland 0000-0002-9667-8654]L. Mohrmann Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany 0000-0002-3620-0173]A. Montanari Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany 0000-0003-4007-0145]E. 
Moulin IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France Laboratoire Leprince-Ringuet, École Polytechnique, CNRS, Institut Polytechnique de Paris, F-91128 Palaiseau, France 0000-0001-6036-8569]J. Niemiec Instytut Fizyki Ja̧drowej PAN, ul. Radzikowskiego 152, 31-342 Kraków, Poland Obserwatorium Astronomiczne, Uniwersytet Jagielloński, ul. Orla 171, 30-244 Kraków, Poland Department of Physics and Astronomy, The University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom 0000-0002-3474-2243]S. Ohm DESY, D-15738 Zeuthen, Germany 0000-0002-9105-0518]L. Olivera-Nieto Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany DESY, D-15738 Zeuthen, Germany 0000-0002-9199-7031]M. Ostrowski Obserwatorium Astronomiczne, Uniwersytet Jagielloński, ul. Orla 171, 30-244 Kraków, Poland 0000-0001-5770-3805]S. Panny Leopold-Franzens-Universität Innsbruck, Institut für Astro- und Teilchenphysik, A-6020 Innsbruck, Austria Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France GRAPPA, Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands 0000-0003-4632-4644]G. Pühlhofer Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany 0000-0002-4710-2165]M. Punch Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany 0000-0003-4513-8241]P. Reichherzer IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0001-8604-7077]A. Reimer Leopold-Franzens-Universität Innsbruck, Institut für Astro- und Teilchenphysik, A-6020 Innsbruck, Austria Leopold-Franzens-Universität Innsbruck, Institut für Astro- und Teilchenphysik, A-6020 Innsbruck, Austria Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany 0000-0002-9516-1581]G. Rowell School of Physical Sciences, University of Adelaide, Adelaide 5005, Australia 0000-0003-0452-3805]B. Rudak Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland 0000-0001-9833-7637]H. Rueda Ricarte IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0001-6939-7825]E. Ruiz-Velasco Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany 0000-0003-1198-0043]V. Sahakian Yerevan Physics Institute, 2 Alikhanian Brothers St., 0036 Yerevan, Armenia Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany Université Savoie Mont Blanc, CNRS, Laboratoire d'Annecy de Physique des Particules - IN2P3, 74000 Annecy, France 0000-0003-4187-9560]A. Santangelo Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany 0000-0001-5302-1866]M. Sasaki Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany 0000-0003-1500-6571]F. Schüssler IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France 0000-0002-1769-5617]H.M. Schutte Centre for Space Research, North-West University, Potchefstroom 2520, South Africa Institut für Physik, Humboldt-Universität zu Berlin, Newtonstr. 15, D 12489 Berlin, Germany 0000-0002-7130-9270]J.N.S. 
Shapopi University of Namibia, Department of Physics, Private Bag 13301, Windhoek 10005, Namibia Laboratoire Univers et Théories, Observatoire de Paris, Université PSL, CNRS, Université de Paris, 92190 Meudon, France 0000-0002-1156-4771]A. Specovius Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany 0000-0001-5516-1205]S. Spencer Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Obserwatorium Astronomiczne, Uniwersytet Jagielloński, ul. Orla 171, 30-244 Kraków, Poland University of Namibia, Department of Physics, Private Bag 13301, Windhoek 10005, Namibia 0000-0002-2865-8563]S. Steinmassl Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Strasse 24/25, D 14476 Potsdam, Germany 0000-0002-2814-1257]I. Sushch Centre for Space Research, North-West University, Potchefstroom 2520, South Africa Department of Physics, Konan University, 8-9-1 Okamoto, Higashinada, Kobe, Hyogo 658-8501, Japan Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study (UTIAS), The University of Tokyo, 5-1-5 Kashiwa-no-Ha, Kashiwa, Chiba, 277-8583, Japan 0000-0002-4383-0368]T. Tanaka Department of Physics, Konan University, 8-9-1 Okamoto, Higashinada, Kobe, Hyogo 658-8501, Japan 0000-0002-8219-4667]R. Terrier Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France 0000-0001-7209-9204]N. Tsuji RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan 0000-0001-9669-645X]C. van Eldik Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany Department of Physics, University of the Free State, PO Box 339, Bloemfontein 9300, South Africa Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands 0000-0003-4736-2167]J. Veh Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany GRAPPA, Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen Centre for Astroparticle Physics, Nikolaus-Fiebiger-Str. 2, D 91058 Erlangen, Germany 0000-0002-7474-6062]S.J. Wagner Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany 0000-0003-4472-7204]A. Wierzcholska Instytut Fizyki Ja̧drowej PAN, ul. Radzikowskiego 152, 31-342 Kraków, Poland 0000-0001-5801-3945]M. Zacharias Landessternwarte, Universität Heidelberg, Königstuhl 12, D 69117 Heidelberg, Germany Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 0000-0002-2876-6433]D. Zargaryan Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland 0000-0002-0333-2452]A.A. Zdziarski Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland Laboratoire Univers et Théories, Observatoire de Paris, Université PSL, CNRS, Université de Paris, 92190 Meudon, France 0000-0002-5333-2004]S. 
Zouari Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France Centre for Space Research, North-West University, Potchefstroom 2520, South Africa 153H.E.S.S. Collaboration South African Astronomical Observatory, PO Box 9, Observatory 7935, South Africa Southern African Large Telescope Foundation, PO Box 9, Observatory 7935, South Africa Department of Physics, University of the Free State, PO Box 339, Bloemfontein 9300, South Africa Department of Astronomy, University of Cape Town, Private Bag X3, Rondebosch 7701, South Africa Department of Physics, University of the Free State, PO Box 339, Bloemfontein 9300, South Africa South African Astronomical Observatory, PO Box 9, Observatory 7935, South Africa Southern African Large Telescope Foundation, PO Box 9, Observatory 7935, South Africa 3 In July 2021, exhibited a significant flux drop in the high-energy -ray (by a factor 10) and optical (by a factor 5) bands and remained in this low state throughout 2022. Similarly, the optical polarization in the source vanished, resulting in the optical spectrum being fully explained through the steady flux of the accretion disk and the broad-line region. Unlike the aforementioned bands, the very-high-energy -ray and X-ray fluxes did not exhibit a significant flux drop from year to year. This suggests that the steady-state very-high-energy -ray and X-ray fluxes originate from a different emission region than the vanished parts of the high-energy -ray and optical jet fluxes. The latter component has disappeared through either a swing of the jet away from the line-of-sight or a significant drop in the photon production efficiency of the jet close to the black hole. Either change could become visible in high-resolution radio images. § INTRODUCTION As the relativistic jets of blazars are almost aligned with the line-of-sight, the emission region producing most of the jet's radiation can be studied in great detail owing to the Doppler beaming of the radiation. The observed variability implies a compact emission region leading to the one-zone model <cit.>. In the leptonic version of this model, a single electron distribution is responsible for the multiwavelength (MWL) emission through synchrotron emission and inverse-Compton (IC) scattering of ambient photon fields, such as synchrotron, accretion disk (AD), broad-line region (BLR) or dusty torus (DT) photons. In some extensions of the model, relativistic protons may also influence the production of rays <cit.>. is a flat-spectrum radio quasar (FSRQ) at redshift z=0.361 <cit.>. It is one of the few FSRQs detected at very-high-energy (VHE, E>100GeV) rays[For an up-to-date list, see <http://tevcat2.uchicago.edu/>.] <cit.>. FSRQs are blazars with bright optical emission lines implying the presence of a strong BLR. Hence, the VHE emission zone must be located at the edge of or beyond the BLR in order to avoid the strong absorption of VHE photons. In turn, models were developed that explained the spectral energy distribution (SED) of either through the necessity of multiple target photon fields for the IC process <cit.> or through two spatially separate emission zones <cit.> with a primary emission zone within the BLR and a secondary emission zone several parsec from the black hole within the DT. is known for its complex MWL behavior <cit.> without clear correlation patterns between energy bands. 
One of the most spectacular flares was the VHE flare in 2016 <cit.> with only moderate counterparts in the high-energy (HE, E>100MeV) -ray and optical bands. However, unlike all other FSRQs detected at VHE rays, also emits VHE photons in times of quiescence. <cit.> integrated their data taken during times without any MWL flaring activity. Their VHE spectrum is a near-perfect continuation of the HE spectrum allowing for the application of the one-zone model in both a near-zone and a far-zone scenario. In the near-zone scenario, the emission region is located close to the edge of the BLR about 0.1pc from the black hole, while the far-zone emission region is located at about 1 pc from the black hole within the DT. Similarly, <cit.> independently derived a HE -ray low-state spectrum of , which they coupled with radio and X-ray observations of the extended kpc-scale jet explaining the SED in terms of an IC model scattering the cosmic microwave background (CMB). In this paper, a sudden change in the appearance of is reported. While flares had become less and less frequent since about 2017,[See, e.g., the public light curves: <https://fermi.gsfc.nasa.gov/ssc/data/access/lat/msl_lc/source/1510-089>.] in July 2021 the source suddenly and abruptly dropped in HE and optical flux as seen in observations with and ATOM, respectively. Similarly, the optical polarization in measured with SALT vanished. Meanwhile, the VHE and X-ray fluxes observed with H.E.S.S. and the Neil Gehrels Swift observatory (hereafter Swift), respectively, remained almost steady. § DATA ANALYSIS §.§ Very-high-energy rays The five telescopes of the array recording VHE rays are located in the Khomas Highland in Namibia at an altitude of about 1800m. Four telescopes (CT1-4) with 106 m^2 mirror area each, are laid out in a square of 120 m side length giving an optimal energy threshold of ∼ 100GeV. A fifth telescope (CT5) with 600 m^2 mirror area is located in the center of the square. In this study, data recorded with CT1-4 are used. For the observations in 2021 (MJD 59311-59382) and 2022 (MJD 59672-59794), standard quality selection <cit.> results in acceptance corrected observation times of 50.9h in 2021 and 36.5h in 2022, respectively. The data sets have been analyzed with the Model analysis chain <cit.> using very loose cuts. These cuts provide the lowest possible energy threshold with 129GeV and 106GeV in 2021 and 2022, respectively. The results have been cross-checked and verified using the independent reconstruction and analysis chain ImPACT <cit.> providing consistent results. is detected with a significance of 13.5σ in 2021, and with 10.3σ in 2022. In order to derive the light curves and photon spectra, instrument response functions were created using <cit.>, which accurately reproduce the atmospheric and instrumental conditions for each observation. There is no significant variability in the period-wise light curve [cf., Fig. <ref>(a)]. In both years, the spectra are consistent with power laws of the form F(E) = N(E_0)×( E/E_0)^-Γ, where N is the normalization at decorrelation energy E_0, and Γ is the spectral index. The parameters for 2021 are N=(17± 1^+6_-5)-12ph cm^-2s^-1TeV^-1, E_0=256GeV, and Γ=3.4± 0.1± 0.4. In 2022, the spectral parameters are N=(8.8± 0.7^+2.9_-2.4)-12ph cm^-2s^-1TeV^-1, E_0=296GeV, and Γ=3.0± 0.1± 0.4. The main systematic error is the uncertainty of 10% on the energy scale. The spectra are shown in Fig. <ref> (top) along with spectra from the detection <cit.> and the low-state spectrum of <cit.>. 
The latter is compatible with both spectra of 2021 and 2022, while the initial detection spectrum agrees with the new ones at the highest energies. §.§ High-energy rays monitors the HE γ-ray sky every three hours in the energy range from 20MeV to beyond 300GeV <cit.>. The analysis was performed with the FermiTools[<https://github.com/fermi-lat/Fermitools-conda/wiki>] version 2.2.0 software package employing the P8R3_SOURCE_V3[<http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_LAT_IRFs/IRF_overview.html>] instrument response functions and the gll_iem_v07 and iso_P8R3_SOURCE_V3_v1 models[<http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html>] for the Galactic and isotropic diffuse emissions <cit.>, respectively. A binned analysis of the SOURCE class events between energies of 100MeV and 500GeV was performed for a region of interest (ROI) with radius 10^∘ centred at the nominal position of . In order to reduce contamination from the Earth Limb, a zenith angle cut of 90^∘ was applied. Sources within a region of radius 15^∘ around listed in the 4FGL-DR3 catalog <cit.> have been accounted for in the likelihood analysis. The likelihood fitting procedure is iterative <cit.>. First, all parameters from a source are fixed if a hint of emission from that object is detected with a test statistics[The TS value is defined as twice the difference of log-likelihood values of the optimised ROI model with and without the source included, TS = -2(lnℒ_1 - lnℒ_0) <cit.>.] of TS<9 and if the predicted number of photons from that source contributes less than 5% of the total of photon counts within the ROI. Second, only spectral parameters of sources within 3^∘ from are left free to vary. All other source parameters are fixed to their respective 4FGL values, which are also used for all sources included in the model as seed inputs. The normalization of the Galactic and isotropic background templates are left as additional free parameters. Neither the residual nor count maps show any particular hot spots above a significance at the ∼2 σ level. Therefore, the best-fit model describes the ROI well. The best-fit ROI model is then used to derive light curves of in the time range from January 2021 to September 2022 with a binning of 3 and 7 days, respectively. They are shown in Fig. <ref>(b). In the first half of 2021, the light curve was variable within a factor of 3 around its average integral flux of ∼ 4.3-7ph cm^-2s^-1 in the [100MeV; 500GeV] energy range. This average is below the 4FGL-DR3 catalog [indicated by the gray dashed line in Fig. <ref>(b)]. However, on 2021 July 18 (MJD 59413) the flux decreased significantly to an average value of ∼ 6-8ph cm^-2s^-1, which is more than one order of magnitude below the 4FGL-DR3 value. For the spectral analysis, two time ranges have been considered that coincide with the H.E.S.S. observation windows in 2021 (MJD 59311-59382) and 2022 (MJD 59672-59794). In 2021, the differential photon spectrum of is described with a log-parabola function, which improves the spectral fit with respect to a pure power-law at a 3.3σ confidence level, dN/dE = N(E_0)×( E/E_0)^-Γ-βlog(E/E_0), with normalization N=(3.61 ± 0.31)-11ph cm^-2s^-1MeV^-1, pivot energy E_0=881 MeV fixed at the 4FGL-DR3 value, photon index Γ = 2.42± 0.07 and curvature β = 0.05± 0.04. This spectrum is fully compatible with the 4FGL-DR3 catalog except for the normalization. 
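For reference, the two spectral shapes used in these fits can be written as simple functions; the sketch below is illustrative only. The logarithm in the log-parabola is taken as natural here, which is an assumption (the formula above only writes "log"), and the example uses the 2021 Fermi-LAT best-fit values quoted above.

```python
import numpy as np

def power_law(E, N0, E0, gamma):
    # F(E) = N(E0) * (E / E0)**(-Gamma)
    return N0 * (E / E0) ** (-gamma)

def log_parabola(E, N0, E0, gamma, beta):
    # dN/dE = N(E0) * (E / E0)**(-Gamma - beta * log(E / E0)); natural log assumed
    return N0 * (E / E0) ** (-gamma - beta * np.log(E / E0))

# 2021 Fermi-LAT best-fit values quoted above (E in MeV, N in ph cm^-2 s^-1 MeV^-1)
E = np.logspace(2, np.log10(5e5), 200)
dnde_2021 = log_parabola(E, 3.61e-11, 881.0, 2.42, 0.05)
```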
In 2022, the spectrum is compatible with a simple power-law[A log-parabolic spectral shape is also tested for, but does not yield a better fit of the data with respect to a power-law.] with normalization N=(7.36 ± 0.92)-12ph cm^-2s^-1MeV^-1, pivot energy E_0=881 MeV, and photon index Γ = 2.1± 0.1. This spectrum is much harder than the typical spectrum of , and its normalization is much reduced. The change in flux and shape is clearly visible in Fig. <ref>(top). In order to verify that the change in spectral shape coincided with the flux drop, two more power-law spectra have been derived for the time ranges MJD 59397-59411 and MJD 59415-59429 on either side of 2021 July 18 (MJD 59413). The spectral indices are 2.57 ± 0.09 and 2.1 ± 0.1, respectively. These are compatible with the spectral shapes obtained for the longer periods confirming that the spectrum changed at the same time as the flux dropped. §.§ X-rays Swift <cit.> is a multi-frequency observatory for the X-ray and optical domain. X-ray data in the energy range of 0.3-10 keV collected with the X-ray Telescope <cit.> have been analyzed from 2021 and 2022, corresponding to the ObsIDs 00030797022-00030797027 and 00031173220-00030797029. They were taken in photon counting mode. The data analysis was performed using the HEASOFT software (version 6.31), while for the recalibration the standard procedure was used. <cit.> was employed for the spectral fitting. All observations have been binned so that each bin contains at least 30 counts and each individual observation has been fitted with a single power-law model with a Galactic absorption value of N_H = 7.13 20 cm^-2 <cit.> set as a frozen parameter. The XRT light curve is shown in Fig. <ref>(c). The flux is consistent with being constant in 2021. The average flux in 2022 is reduced by less than a factor 2 compared to 2021, even though the flux varies mildly around the average (see Tab. <ref>). The average spectral shapes of 2021 and 2022 are very similar (see Fig. <ref>, middle and bottom, and Tab. <ref>). §.§ Optical/UV data §.§.§ Photometry Optical/UV photometry data have been collected with the Ultraviolet/Optical Telescope <cit.> onboard Swift in six filters — UVW2 (192.8 nm), UVM2 (224.6 nm), UVW1 (260.0 nm), U (346.5 nm), B (439.2 nm), and V (546.8 nm) <cit.> — as well as with the Automatic Telescope for Optical Monitoring <cit.> with high cadence in BR filters. For UVOT, magnitudes and corresponding fluxes have been calculated using including all photons from a circular region with radius 5”. In order to determine the background, a circular region with a radius of 10” located near the source area has been selected. All data points are corrected for dust absorption using the reddening E(B-V) = 0.0853 mag <cit.> and the ratios of the extinction to reddening, A_λ / E(B-V) from <cit.>. The ATOM data were analysed using the fully automated ATOM Data Reduction and Analysis Software and their quality has been checked manually. The resulting flux was calculated via differential photometry using five custom-calibrated secondary standard stars in the same field of view. Extinction correction was done as for Swift-UVOT. The light curves in R and B filters are shown in Fig. <ref>(d). While variability is clearly visible in the 2021 data, the 2022 light curves show no significant variations. The fractional variability in the R- and B-band in 2022 is 3% and 2%, respectively. 
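The dust-absorption correction described above amounts to scaling each observed flux by the band extinction A_λ = [A_λ/E(B-V)] × E(B-V). A minimal sketch follows; the per-filter ratios below are illustrative placeholders only, not the values from the cited extinction tables used in the analysis.

```python
import numpy as np

EBV = 0.0853  # E(B-V) in mag, as quoted above

# A_lambda / E(B-V) ratios per filter: placeholders for illustration only.
RATIOS = {"UVW2": 8.0, "UVM2": 9.3, "UVW1": 6.5, "U": 5.0, "B": 4.1, "V": 3.1, "R": 2.6}

def deredden(flux_obs, band, ebv=EBV):
    """Correct an observed flux for Galactic dust absorption."""
    a_lambda = RATIOS[band] * ebv           # extinction in magnitudes
    return flux_obs * 10.0 ** (0.4 * a_lambda)
```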
The change in behavior seems to occur near-simultaneously with the flux drop in the HE -ray band, but the data is very sparse after July 2021, which is why a firm conclusion cannot be drawn. Interestingly, the R-B color also shows variability [see Fig. <ref>(e)]. In the high flux states in 2021, the R-band flux is higher than the B-band flux, while it is inverted for the low flux states, which is especially noticeable in 2022. In terms of B-R color, this change happens at B-R≈ 0.6mag. For the spectra shown in Fig. <ref> and <ref>, fluxes in given filters have been averaged within the observation range of H.E.S.S., namely MJD 59311–59382 for 2021 and MJD 59672–59794 for 2022. While this includes some variability in 2021, it does not, for instance, include the peak in early July. Nonetheless, the high variability in 2021 results in an average of the ATOM data that cannot be properly compared to the Swift-UVOT averages, which were taken on at most six occasions and not necessarily parallel to the ATOM data. Therefore in Sec. <ref>, the R-band average from ATOM is treated as an upper limit for the 2021 data set, while the spetral fitting is done on the V, B, U, and UVM2 bands of Swift-UVOT. §.§.§ Spectropolarimetry Optical spectropolarimetric observations of were taken with the Southern African Large Telescope <cit.>, using the Robert Stobie Spectrograph <cit.>. was observed eight times between 2021 April 06 and 2021 June 10, and eleven times between 2022 April 25 and 2022 July 31. All observations were performed using grating PG0900 at a grating angle of 12.875^∘ with a slit width of 1.25” giving a resolving power of R ≈ 800 - 1200. Observations were performed in linear mode which takes four observations at 4 wave plate angles. A total exposure time of 1200 s (4×300 s) was used for the first eight observations, and 1440 s (4×360 s) for the remaining observations. Data reduction was performed using a modified version of the pySALT/polSALT pipeline <cit.>[<https://github.com/saltastro/polsalt>] allowing for the wavelength calibration to be performed with IRAF[Version 2.16] <cit.>. The average degree of polarization was calculated for each observation in four different wavelength bands (see Fig. <ref>(f) and (g)), namely λ = 3670 - 4060Å, λ = 4100 - 4400Å, λ = 4480 - 4780Å, and λ = 4800 - 5100Å, chosen to avoid spectral features. During the 2021 observing period, the source exhibited variable levels of polarization, reaching a maximum of ⟨Π⟩ = 12.5 ± 1.1 % on 2021 May 08 (taken between λ = 4100 - 6200Å), and a minimum of ⟨Π⟩ = 2.2 ± 0.5 % on 2021 April 20. During the 2021 semester, the polarization angle varied by ∼ 174^∘ (reaching a maximum of 178.9 ± 4.8^∘ on 2021 April 20, and a minimum of 4.7 ± 2.7^∘ on 2021 April 09). During 2022, the source exhibited little to no variation in the degree of polarization, consistently remaining below 2 %. This is consistent with the level of polarization measured for a comparison star. Thus, the observed polarization can be attributed to interstellar effects, rather than any source-intrinsic polarization. § RESULTS The MWL light curves and spectra of are shown in Figs. <ref> and <ref>, respectively. They show the aforementioned change in the source: most notably the HE -ray flux drop and spectral change, as well as the optical flux and polarization drop. These took place at a seemingly singular event around 2021 July 18 (MJD 59413). 
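The band-averaged polarization degrees quoted in the spectropolarimetry section above are conventionally derived from the normalized Stokes parameters. The sketch below is illustrative only: it uses a plain unweighted average and no polarization-bias correction, which the actual pipeline may handle differently.

```python
import numpy as np

# Wavelength bands used above (in Angstrom)
BANDS = [(3670, 4060), (4100, 4400), (4480, 4780), (4800, 5100)]

def degree_of_polarization(q, u):
    """q = Q/I, u = U/I per wavelength bin; returns polarization degree and angle (deg)."""
    q, u = np.asarray(q, float), np.asarray(u, float)
    p = np.sqrt(q**2 + u**2)
    theta = 0.5 * np.degrees(np.arctan2(u, q))
    return p, theta

def band_averages(wavelength, p):
    """Average polarization degree in each of the four bands."""
    wavelength = np.asarray(wavelength, float)
    p = np.asarray(p, float)
    return [p[(wavelength >= lo) & (wavelength <= hi)].mean() for lo, hi in BANDS]
```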
Interestingly, the VHE -ray and X-ray fluxes and spectra barely changed (within a factor 2), and the VHE -ray spectrum is a smooth continuation of the HE -ray spectrum in both years. The drop in optical polarization, along with the R-B color change, suggests that the optical-UV spectrum is strongly dominated by the AD and the BLR. In order to explore this further, a joint fit of the low-frequency SED and the optical spectropolarimetry is produced first to constrain the relative contributions of the jet synchrotron emission, the accretion-disk, and emission lines from the BLR as well as the jet emission-region parameters related to synchrotron emission (radiating relativistic electron distribution and magnetic field — see Sec. <ref>). The resulting parameters are then used in a second step to model the entire broadband SED, including X-rays and γ-rays, constraining additional parameters pertaining to the target photon fields for inverse-Compton scattering (Sec. <ref>). §.§ Modeling the Optical-UV photometry and spectropolarimetry Generally, the degree of polarization of the optical-UV jet synchrotron emission is diluted by the non-polarized, thermal contributions of the AD and the BLR. The model of <cit.> (see also App. <ref> for further details) derives the synchrotron state of a blazar assuming a single emission zone containing an electron distribution N_e (γ) = n_0 {[ (γ/γ_b)^-p_1· e^-γ_b/ γ_c for γ_ min≤γ≤γ_ b,; (γ/γ_b)^-p_2· e^-γ/γ_c for γ_ b≤γ_ max, ]. with electron spectral indices p_1 and p_2 where, in the slow-cooling regime, one expects p_2 = p_1 + 1. The characteristic Lorentz factors are in the range [γ_ min, γ_ max] with a break of a broken power-law spectrum at γ_b and an exponential cut-off at γ_ c. Synchrotron self-absorption effects are also considered. The model implements a geometrically thin, optically thick AD <cit.> around a non-rotating supermassive black hole of mass M_BH = 6 × 10^8 M_, which is within the range of previously obtained mass estimates, 5.71^+0.62_-0.58× 10^7 M_⊙ and 7 × 10^8 M_⊙, by <cit.> and <cit.>, respectively. For an AD accretion rate Ṁ_̇ḋ, the efficiency of converting potential energy into AD radiation is assumed to be ϵ = L_d/(Ṁ_̇ḋ c^2) = 1/12 <cit.>. The different states from 2021 to 2022 can be modeled with an unchanging AD. The synchrotron polarization was calculated following <cit.>. The degree of polarization depends on the geometry of the magnetic field in the jet. This is characterized by the scaling factor F_B between 0 and 1, with 1 representing perfectly ordered magnetic fields, whereas values less than 1 represent more tangled magnetic fields. The total degree of polarization is calculated as the sum of the synchrotron polarization and the unpolarized AD and BLR emissions. The emission lines can be modeled as Gaussian functions and the corresponding fluxes can be calculated relative to each other according to <cit.>. Their model did not include the Hα, C IV and Lyα lines. However, these were considered by <cit.> and <cit.> alongside the Mg II, Hγ, Hβ and Hα emission lines. The CIV, Mg II, Hγ and Hα emission lines are also included here, while emission lines are excluded if they are outside of the frequency regime with good spectropolarimetric or photometric data. The data averaged over 2021 and 2022 are modeled and shown in Fig. <ref>. In 2021, there are contributions by synchrotron, AD, and BLR radiation, while the data in 2022 requires dominating AD and BLR flux. 
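For concreteness, the electron distribution written above can be evaluated directly; the sketch below follows the equation as given, and the parameter values are arbitrary placeholders (the fitted values are listed in the tables referenced above). Note that the constant factor exp(-γ_b/γ_c) on the low-energy branch makes the two branches continuous at γ_b.

```python
import numpy as np

def electron_distribution(gamma, n0, g_min, g_b, g_c, g_max, p1, p2):
    """N_e(gamma): broken power law with exponential cutoff, as written above."""
    gamma = np.asarray(gamma, dtype=float)
    low = n0 * (gamma / g_b) ** (-p1) * np.exp(-g_b / g_c)
    high = n0 * (gamma / g_b) ** (-p2) * np.exp(-gamma / g_c)
    ne = np.where(gamma < g_b, low, high)
    return np.where((gamma >= g_min) & (gamma <= g_max), ne, 0.0)

# Placeholder parameters for illustration only.
g = np.logspace(1, 6, 500)
ne = electron_distribution(g, n0=1.0, g_min=1e2, g_b=1e4, g_c=1e5, g_max=1e6,
                           p1=2.0, p2=3.0)
```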
The upper-right panel, showing the 2022 fit, suggests that the photometry data can be well fitted with only the AD and line components without the synchrotron contribution. Thus, the fit to the 2022 data marks a strict upper limit to the synchrotron flux contribution, in line with the above statement that the source-intrinsic polarization is consistent with zero in . The parameters obtained with the model fits are given in Tab. <ref>. For reference, the model application to all individual observations in 2021 is shown in appendix <ref>. The simultaneous modeling of the flux and polarization shows that the jet's synchrotron emission must have dropped considerably between 2021 and 2022, leaving behind the AD and the BLR as the almost sole flux contributors in the optical/UV regime. This underlines the unprecedented change that took place in . §.§ Broadband SED modeling In this section, first a fit of the broadband (IR – VHE γ-ray) SEDs of of 2021 and 2022 is attempted with a simple one-zone, steady-state leptonic model. For this purpose, the leptonic code of <cit.> is employed. See that paper for a detailed description of the model, which includes IC scattering of the co-spatially produced synchrotron emission (SSC) and external Compton scattering of the AD emission (IC/AD), modeled with the parameters derived in Sec. <ref>, and of the DT, modeled as an isotropic (in the AGN rest frame) blackbody photon field (IC/DT). The most relevant model parameters are thus: The injection luminosity of non-thermal electrons, L_ inj, the low- and high-energy cut-offs of the injected electron spectrum, γ_ min and γ_ max, the electron injection spectral index p_1, the size of the emission region, R, the co-moving magnetic field B, the bulk Lorentz factor Γ, the viewing angle θ_ obs (in the observer's frame), the distance of the emission region from the black hole, z_0, and the energy density and equivalent temperature of the external blackbody radiation field, u_ ext and T_ ext. The code evaluates self-consistently an equilibrium electron distribution, based on the balance between injection/acceleration, radiative cooling, and escape, evaluates the kinetic jet power L_e corresponding to the final electron population in the emission region and the Poynting flux power L_B, and calculates the ratio L_B/L_e = u_B/u_e which provides information on the magnetization of the jet plasma. The absorption through the extragalactic background light is evaluated with the model of <cit.>. Given the large number of parameters, a fit by eye is conducted, as a proper χ^2 minimization procedure is not feasible, and it would likely be degenerate in any case, since many of the model parameters are very poorly constrained. Fig. <ref>(middle) shows representative attempts of single-zone leptonic fits to the 2021 (red) and 2022 (blue) SEDs. The adopted model parameters are listed in Tab. <ref> and are chosen in such a way that the resulting radiating electron distribution is identical to the one resulting from the low-frequency SED and spectropolarimetry fit in Sec. <ref>. The distance of the emission region in 2021 is very poorly constrained, as a small contribution of IC/AD emission slightly improves the fit, but is not strictly required. An almost identical fit can be achieved with a much larger distance from the black hole, assuming that the DT radiation field has the same energy density at that distance. 
The soft HE -ray spectrum, implying a very soft electron spectrum, combined with Klein-Nishina effects at the highest energies, makes it very difficult to find a satisfactory fit to the spectral points in this single-zone scenario. For the 2022 low state, the HE -ray and non-thermal optical flux may be suppressed by using a smaller injection luminosity / acceleration efficiency and a significantly harder injection spectrum. In order to suppress any potential contribution of IC/AD, a distance z_0 ≫ 0.1pc from the black hole is required. The parameters adopted for the 2022 single-zone fit shown in Fig. <ref> (middle) have been chosen to keep as many parameters as possible unchanged between 2021 and 2022. However, if the dominant emission region in 2022 is indeed much further down the jet than in 2021, keeping the magnetic field and emission-region radius constant may not be plausible. A fit with a decreased magnetic field (such as B ∝ z_0^-1, as expected for a dominantly toroidal magnetic field) and larger emission region (such as R ∝ z_0 for a conical jet) leads to an almost identical fit to the X-ray through VHE γ-ray flux, but strongly suppresses the synchrotron emission in the radio through X-ray regime. Due to the difficulty of finding a satisfactory fit to the VHE spectrum in 2021, now the possibility of a two-zone model is explored, which is shown in Fig. <ref>(bottom). As the X-ray and VHE γ-ray spectra appear to have remained almost unchanged between 2021 and 2022, it seems natural to postulate a steady emission region responsible for the non-thermal emission in 2022, which may have been active also in 2021, with the additional emission region, closer to the central engine, that was only active in 2021. Therefore, the parameters of the far zone equal to the 2022 SED fit described above are kept, while a near zone is added with parameters listed in the last column of Tab. <ref>. This produces a satisfactory fit to the entire SED in 2021 (including the points) with physical conditions close to equipartition (bottom row in Tab. <ref>) in both emission regions. It should be noted that the B-field in the far-zone (2022) is poorly constrained and could easily be chosen to achieve exact equipartition. Absorption of rays in circum-nuclear radiation fields (accretion-disk, BLR) has not been accounted for in the model fits. It has been shown by <cit.> for strong-lined AGN in general and by <cit.> specifically for that VHE γ-rays are expected to be strongly attenuated if the emission region were located at sub-pc distances from the central engine. The fact that the VHE spectrum of does not show any signs of such internal γγ absorption <cit.> provides further support for the far-zone interpretation. This goes in line with the choice not to add an EC/BLR radiation component to the far-zone model. Such a component could plausibly be present in the near-zone / 2021 model. However, the IC/DT spectrum provides a satisfactory fit to the Fermi-LAT spectrum in 2021, and an IC/BLR component would not significantly contribute to the VHE spectrum due to Klein-Nishina effects. Therefore, it is preferred not to include additional parameters to the model. § DISCUSSION & CONCLUSIONS The relativistic jet of underwent a sudden and significant change around 2021 July 18. The HE -ray and optical fluxes observed with and ATOM, respectively, dropped to persistent low states, while the optical spectropolarimetry data obtained with SALT suggests a drop to a level compatible with no polarization in the source. 
The optical spectrum is thus fully explained by the AD and the BLR. Meanwhile, the VHE -ray and X-ray fluxes observed with H.E.S.S. and Swift-XRT, respectively, remained steady within a factor 2. This favors the two-zone interpretation, where separate emission regions were active before 2021 July 18 contributing to various degrees in all energy bands. Around this date, the primary zone close to the black hole that was responsible for most of the optical synchrotron and HE -ray emission, vanished leaving behind the secondary zone that has contributed strongly to the VHE -ray and X-ray domains. The secondary zone has been modeled as IC/DT at a few parsec from the black hole. In comparison to the two-zone interpretation in <cit.>, a softer electron distribution and a slightly higher γ_ min is required for the secondary zone described here owing to the different characteristics in the HE -ray domain. The new -ray state can also be reproduced with an IC/CMB model in the kpc-scale jet similar to <cit.>[There is a notable spectral difference in the HE -ray low-state spectrum in <cit.> compared to the one presented here, which suggests that the primary zone was active in the date set of <cit.>.] with the caveat that it cannot account for the X-ray spectrum measured with Swift. The comparison of the current VHE -ray spectrum with the discovery spectrum [see Fig. <ref>(top)] suggests that the secondary zone was already present in the old data, but that the VHE spectrum was also influenced by the primary zone allowing for the reproduction of that data with a single-zone model <cit.>. However, the two-zone explanation as outlined here would also explain the varying correlation patterns observed between the HE and VHE -ray bands <cit.>. The disappearance of the primary emission zone suggests two probable explanations. Either the inner jet has weakened considerably and is no longer capable of producing significant amounts of radiation, or the inner jet has swung away from the line-of-sight reducing the amount of Doppler beaming. Both scenarios may also explain the sudden termination of the flare that was ongoing in the HE and optical bands. In order to uncover the details of this event, elaborate modeling is required, which is beyond the scope of this paper. In either case, the disturbance should be transported through the jet and may eventually reach the parsec-scale jet. On these scales, the changes become observable in VLBI radio maps by a reduced total flux, by an outward motion of the core (if the jet weakens and becomes incapable of producing radio flux at the current core position) or a gradual swing of the jet structure. Publicly available radio data[Such as from Metsähovi, <https://www.metsahovi.fi/AGN/data/>, and ATCA, <https://www.narrabri.atnf.csiro.au/calibrators/>, among others.] show a flare occuring around the time of the disappearance of the primary emission region. This suggests a connection, but a detailed analysis is left to future work. Eventually, both scenarios could lead to a vanishing of the secondary emission zone, which could be uncovered in continuous MWL monitoring observations. We thank the referee for a constructive report that helped to improve the manuscript. The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. 
is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the German Research Foundation (DFG), the Helmholtz Association, the Alexander von Humboldt Foundation, the French Ministry of Higher Education, Research and Innovation, the Centre National de la Recherche Scientifique (CNRS/IN2P3 and CNRS/INSU), the Commissariat à l’énergie atomique et aux énergies alternatives (CEA), the U.K. Science and Technology Facilities Council (STFC), the Irish Research Council (IRC) and the Science Foundation Ireland (SFI), the Knut and Alice Wallenberg Foundation, the Polish Ministry of Education and Science, agreement no. 2021/WK/06, the South African Department of Science and Technology and National Research Foundation, the University of Namibia, the National Commission on Research, Science & Technology of Namibia (NCRST), the Austrian Federal Ministry of Education, Science and Research and the Austrian Science Fund (FWF), the Australian Research Council (ARC), the Japan Society for the Promotion of Science, the University of Amsterdam and the Science Committee of Armenia grant 21AG-1C085. We appreciate the excellent work of the technical support staff in Berlin, Zeuthen, Heidelberg, Palaiseau, Paris, Saclay, Tübingen and in Namibia in the construction and operation of the equipment. This work benefited from services provided by the H.E.S.S. Virtual Organisation, supported by the national resource providers of the EGI Federation. Some of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT) under program 2021-2-LSP-001 (PI: D.A.H. Buckley). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. aasjournal § SUPPLEMENTARY OPTICAL-UV SED AND SPECTROPOLARIMETRY MODELING The plots in Fig. <ref> show the model fits to the optical-UV photometry and spectropolarimetry data for each of the SALT spectropolarimetry observing windows in 2021. Contemporaneous observations from the ATOM and Swift-UVOT telescopes were included in the fits, when detections were obtained on the same day as the SALT detections except for the SALT observations of 2021 April 6 (MJD 59310), 2021 May 9 (MJD 59343) and 2021 June 10 (MJD 59375) where the ATOM data of 2021 April 7 (MJD 59311), 2021 May 8 (MJD 59342) and 2021 June 9 (MJD 59374) in the R-band were included, respectively, as guide to the fits. The parameters obtained with the model fit are given in Tab. <ref> and the obtained line fluxes for each observation are listed in Tab. <ref>. The full spectropolarimetry results for each of the SALT observations are given in Tab. <ref>. The photometry fluxes decrease from 2021 April 6 to April 21 (MJD 59310 - 59325, excl. 2021 April 18, MJD 59322, for which ATOM data was not available, but a single very low Swift-UVOT data point was recorded). On 2021 May 9 (MJD 59343), there is a sudden increase in flux, decreasing again on 14 May 2021, and thereafter, the flux continued increasing until 2021 June 10 (MJD 59375). The photometry fluxes and degree of polarization decreased/increased alongside each other, as shown in Fig. <ref>. The ordering of the magnetic fields, does not indicate the presence of a shock; in a shock-in-jet scenario, one expects that the ordering of the magnetic field decreases/increases in correlation with the degree of polarization and flux <cit.>. 
Instead, the evolution of the ordering of the magnetic field shows no such correlation, which suggests the presence of turbulence and/or magnetic reconnection in the emission region as the driver of the optical/UV variability. The χ^2_pol/ndf is the goodness of fit of the model to the spectropolarimetry data, where the number of degrees of freedom, ndf, is the number of spectropolarimetry data points minus the number of estimated parameters (equal to 10 in this model) minus 1. The goodness of fit to the few photometry data points is neglected, and the χ^2 is applied only to the abundant spectropolarimetry data, since fitting the model's predicted total polarization degree to the spectropolarimetry data already depends on the modeled total flux as well (and the modeled total flux was fitted to the photometry data). It does not indicate a good fit for all states, since there might be contributions from components (such as emission lines) to the total flux (and thereby to the total degree of polarization) that are missing or insufficiently accurately modeled. On the other hand, the inclusion of additional radiation components increases the number of free parameters in the model and therefore reduces its predictive power. Therefore, such additional components are not included. The Swift-UVOT data show an unexpected trend of a variable profile for each state. This might be explained by prominent emission lines that have fluxes higher than the continuum. In the spectropolarimetry data, the dominant line was identified as H_γ, from which the other lines were calculated relative to each other according to <cit.>. The remaining wavelength ranges of the emission lines are taken from <cit.>, when available. The wavelength ranges of the H_δ, f_2934 and (at 3967Å) lines that are not given in <cit.> are estimated by eye to fit the photometric and spectropolarimetric data. § X-RAY SPECTRAL ANALYSIS Table <ref> provides an overview of the spectral results of the Swift-XRT analysis for both years.
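The χ^2_pol/ndf statistic defined above can be computed in a few lines; the following is an illustrative sketch only, assuming Gaussian, independent uncertainties on the polarization points.

```python
import numpy as np

def reduced_chi2_pol(p_obs, p_err, p_model, n_params=10):
    """chi^2_pol / ndf with ndf = N_data - n_params - 1, as defined above."""
    p_obs, p_err, p_model = (np.asarray(a, float) for a in (p_obs, p_err, p_model))
    chi2 = np.sum(((p_obs - p_model) / p_err) ** 2)
    ndf = p_obs.size - n_params - 1
    return chi2 / ndf
```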
http://arxiv.org/abs/2307.00332v1
20230701130348
Homomorphism of independent random variable convolution and matrix multiplication
[ "Yue Liu" ]
math.PR
[ "math.PR" ]
fzu]Yue Liu liuyue@fzu.edu.cn [fzu]School of Mathematics and Statistics, Fuzhou University, Fuzhou, 350116, China A map is given showing that convolution of independent random variables over a finite group is homomorphic to matrix multiplication of doubly stochastic matrices. As an application, a short proof is given of the theorem that the limiting distributions of stochastic processes with stationary independent increments over a finite group are always uniform. AMS classification: 60G50, 15B51. Keywords: convolution; doubly stochastic matrix; random walk; finite group § THE RESULT Convolution of random variables is a basic operation in probability theory, especially the convolution of independent ones. In this paper, attention is restricted to convolutions of independent random variables over finite groups. Let G = {g_1,…,g_n} be a finite group. By Cayley's Theorem, let ϕ: G →Σ, g_k↦σ_k be an isomorphism, where σ_k is the left translation of G by the element g_k; thus Σ = {σ_1,…,σ_n} can also be regarded as a permutation subgroup of the symmetric group S_n over [n]. For each k, there is an order-n permutation matrix P_σ_k=(p_ij^(k))_n× n corresponding to σ_k, where p_ij^(k) = 1 if i = σ_k(j), and p_ij^(k) = 0 otherwise. Let X be a random variable over G. Write p_k = ℙ[X = g_k], k =1,…,n. Define the convolution matrix of X, denoted by Con(X), as Con(X) = ∑_k=1^n p_k P_σ_k. It is easy to see that Con(X) is a doubly stochastic matrix. The following lemma shows that convolutions of independent random variables over a finite group and matrix multiplications of doubly stochastic matrices are homomorphic. Let X,Y be two independent random variables over a finite group G. Then Con(X· Y)=Con(X)Con(Y), where X· Y is the convolution of X and Y. The lemma can be verified by a direct computation. Random walk is a typical stochastic process. Random walks with stationary and independent increments over Euclidean spaces are called Lévy processes (see <cit.> for example). A more general framework for the study of such processes is proposed in <cit.>, where the random variables are assumed to be over any topological semigroups, rather than just Euclidean spaces. It was shown that the asymptotic behaviors of the processes over compact spaces and noncompact spaces are quite different (see Chapters 2 and 3 in <cit.>). As an application of Lemma <ref>, together with the well-known Perron Theorem (Theorem <ref>), we give a short proof of the following Theorem <ref>, which characterizes the asymptotic behavior of random walks with stationary and independent increments over finite groups. (Perron Theorem, <cit.>) Let A ∈ M_n be irreducible and nonnegative, and suppose that n ≥ 2. Then * ρ(A)>0. * ρ(A) is an algebraically simple eigenvalue of A. * there is a unique real vector x=[x_i] such that Ax=ρ(A)x and x_1+⋯+x_n =1; this vector is positive. * there is a unique real vector y=[y_i] such that y^TA=ρ(A)y^T and x_1y_1+⋯+x_n y_n = 1; this vector is positive. Let X be a random variable. ℒ(X) is used to denote the distribution law of X. Let G={g_1,…,g_n} be a finite group and let (X_m)_m≥ 1 be a G-valued stochastic process such that X_1 = ξ_1 and X_{m+1} = X_m ξ_{m+1}, m≥ 1, where (ξ_m)_m≥ 1 are i.i.d. random variables whose common support is G. Then lim_m→∞ℒ(X_m) exists and is convolution invariant, i.e., it is the uniform distribution over G. Let P_σ_1,…, P_σ_n be the permutation matrices as in the definition of convolution matrices. Write P_σ_k = (p_ij^(k)). By the definitions, p_ij^(k) = 1 means g_k = g_j· g_i^-1.
Thus, for every pair i,j∈ [n], there exists exactly one k∈[n] such that p_ij^(k) = 1. Hence P_σ_1,…, P_σ_n are linearly independent, and ∑_k=1^n P_σ_k is the matrix J whose entries are all 1. Since (ξ_m)_m≥ 1 are i.i.d. random variables, their convolution matrices coincide; write A = Con(ξ_m). Since the support of ξ_m is G, every p_k = ℙ[ξ_m=g_k] is positive for k=1,…,n, which means A is a positive matrix. By Lemma <ref>, Con(X_m)=A^m. Since P_σ_1,…, P_σ_n are linearly independent, the existence of lim_m→∞ ℒ(X_m) is equivalent to the existence of lim_m→∞ A^m, and the limiting distribution is uniquely determined by lim_m→∞ A^m. Since A is a positive doubly stochastic matrix, by the Perron Theorem the spectral radius ρ(A)=1 is an algebraically simple eigenvalue, and x = (1/n)[1,…,1]^T and y^T = [1,…,1] are the right and left eigenvectors corresponding to ρ(A) described in Theorem <ref>, respectively. Then lim_m→∞ A^m = x· y^T = (1/n)J = (1/n)P_σ_1+⋯+(1/n)P_σ_n. Since the coefficient of each P_σ_k in Con(X_m) is exactly ℙ[X_m = g_k], this yields that the limiting distribution of (X_m)_m≥ 1 is the uniform distribution. § ACKNOWLEDGMENTS The author would like to thank Prof. Jian Wang of Fujian Normal University for the helpful discussions and suggestions, especially for providing the background on random walks.
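The construction above is easy to check numerically. The following Python sketch is an illustration added here, not part of the original paper; it uses the cyclic group Z_4 (written additively) to keep the indexing simple, builds the permutation matrices of the left translations, verifies the lemma Con(X·Y) = Con(X)Con(Y), and shows A^m approaching (1/n)J.

import numpy as np

n = 4  # the cyclic group Z_4; the element g_k corresponds to k = 0, ..., n-1

def P(k):
    # Permutation matrix of the left translation sigma_k: sigma_k(j) = (j + k) mod n
    M = np.zeros((n, n))
    for j in range(n):
        M[(j + k) % n, j] = 1.0
    return M

def con(p):
    # Convolution matrix Con(X) of a distribution p over Z_n
    return sum(p[k] * P(k) for k in range(n))

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(n))   # law of X (positive, so full support)
q = rng.dirichlet(np.ones(n))   # law of Y

# Law of X + Y (group convolution on Z_n)
r = np.array([sum(p[k] * q[(j - k) % n] for k in range(n)) for j in range(n)])

# Lemma: Con(X * Y) = Con(X) Con(Y)
assert np.allclose(con(r), con(p) @ con(q))

# Theorem: A^m -> (1/n) J for a positive doubly stochastic A = Con(xi)
A = con(p)
print(np.round(np.linalg.matrix_power(A, 50), 4))  # every entry is close to 0.25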
http://arxiv.org/abs/2307.01904v1
20230704202402
Effective Auxiliary Variables via Structured Reencoding
[ "Andrew Haberlandt", "Harrison Green", "Marijn J. H. Heule" ]
cs.LO
[ "cs.LO" ]
[ [ August 1, 2023 ================== [1]Authors contributed equally. Extended resolution shows that auxiliary variables are very powerful in theory. However, attempts to exploit this potential in practice have had limited success. One reasonably effective method in this regard is bounded variable addition (BVA), which automatically reencodes formulas by introducing new variables and eliminating clauses, often significantly reducing formula size. We find motivating examples suggesting that the performance improvement caused by BVA stems not only from this size reduction but also from the introduction of effective auxiliary variables. Analyzing specific packing-coloring instances, we discover that BVA is fragile with respect to formula randomization, relying on variable order to break ties. With this understanding, we augment BVA with a heuristic for breaking ties in a structured way. We evaluate our new preprocessing technique, Structured BVA (SBVA), on more than 29 000 formulas from previous SAT competitions and show that it is robust to randomization. In a simulated competition setting, our implementation outperforms BVA on both randomized and original formulas, and appears to be well-suited for certain families of formulas. § INTRODUCTION Pre-processing techniques that introduce and eliminate auxiliary variables have been shown to be helpful both in theory and practice. Theoretically, auxiliary variables lift the power of solvers from the resolution proof system to Extended Resolution (ER) <cit.>. In practice, efforts to exploit this full power of ER have had limited success; however, auxiliary variables have been used to reencode formulas in a way that drastically reduces their size <cit.>, often leading to a decreased solve time. In this work we show that this speedup may not be caused entirely by the reduction in formula size, but by the introduction of certain effective auxiliary variables. A very powerful pre-processing technique is Bounded Variable Elimination (BVE) <cit.>. As its name suggests, it eliminates a variable x by resolving each clause containing a literal x with every clause containing literal x. Importantly, BVE only performs such an elimination if it helps reduce the formula size (measured as the number of clauses plus the number of variables). A pre-processing technique that introduces new auxiliary variables is Bounded Variable Addition (BVA) <cit.>, which is the focus of this article. It is well known that the introduction of auxiliary variables is crucial for many succinct encodings (e.g., the Tseitin transformation <cit.>, or cardinality constraints <cit.>). Following the intuition of BVE, BVA will only introduce a new variable if it can eliminate a larger number of clauses than it adds. Auxiliary variables may not only be useful to reduce the size of a formula, but they can also capture some semantic meaning about the underlying problem to encode, as we detail in <Ref>. As a case study, we consider a recent encoding used by Subercaseaux and Heule for computing the packing chromatic number of the infinite square grid via SAT solving <cit.>. In their work, BVA was found to generate auxiliary variables that represented clusters of neighboring vertices of the grid. The encoding resulting from running BVA on a direct encoding of the problem inspired a more efficient encoding, by suggesting the usefulness of having auxiliary variables capturing clusters of vertices. 
In this paper, we offer new insight into this “meaningful variables” phenomenon, which we believe can generalize to other problems as well. Furthermore, even though the reencoding resulting from BVA suggested meaningful new variables for the packing coloring problem, it was not as effective as manually designing a more structured encoding based on some of those variables. We take this as motivation to identify shortcomings of BVA and improve upon its design. In general, on problems where BVA is effective, the effect tends to be extreme. BVA is able to reduce the number of clauses by a × 10 factor or more, improving solve time by orders of magnitude. However, in this paper, we find that this reduction in solve time is highly sensitive to randomly scrambling the formula (even when controlling for how CDCL solvers are generally sensitive to this form of randomization <cit.>). In particular, randomizing the order of variables and clauses prior to BVA substantially reduces the positive effect of BVA on solve time, despite maintaining the same overall reduction in formula size. Using the packing k-color problem, we show that the effectiveness of BVA relies on the introduction of a few specific variables that account for only a small fraction of the reduction in formula size. Moreover, we identify that the lack of effective tie-breaking in BVA is the cause of this high sensitivity to randomization. Inspired by these new insights into the behavior of BVA, we present (<Ref>), a version augmented with a tie-breaking heuristic that enables it to introduce better auxiliary variables at each step, even when the original formula is randomized. Our heuristic is based on a connectivity measure between variables in the incidence graph of CNF formulas, which is preserved under randomization of the formula. As a result, is able to identify effective auxiliary variables even when the original formula is scrambled. We evaluate our implementation by running it on more than 29 000 formulas from the Global Benchmark Database <cit.>. Experimental results, presented in <Ref>, demonstrate that our approach outperforms the original implementation of BVA. In summary, the main contributions of this article are: * We offer new insight into the behavior of BVA, by exhibiting its ability to introduce effective auxiliary variables and showing its sensitivity to formula randomization. * We design , a heuristic-guided form of BVA, that introduces new variables in a way that is robust to randomization. * We perform a large-scale evaluation of both BVA and on benchmark problems from the SAT Competition and study their behavior on different families of instances. * We release an open-source implementation of that supports proof logging. § PRELIMINARIES A literal is either a variable x, or its negation (x). A propositional formula in conjunctive normal form (CNF) is a conjunction of clauses, which are themselves disjunctions of literals. An assignment is a mapping from variables to truth values. A positive (negative) literal is true if the corresponding variable is assigned to true (false, respectively). An assignment satisfies a clause if at least one of its literals is true, and we say an formula is satisfied if all of its clauses are. A formula is satisfiable (SAT) if there exists an assignment that satisfies it, or unsatisfiable (UNSAT) otherwise. For example, the formula (x ∨y) ∧ (x∨ z) is made up of two clauses, (x ∨y) and (x∨ z), each with two literals. 
This formula is satisfiable, since the assignment of x and z to true and y to false satisfies it. There can be many equivalent ways of encoding a problem into CNF, differing in the meaning assigned to individual variables. Problems often have a direct encoding, in which variables are assigned for each individual decision element present in a problem. For example, in a direct encoding of graph coloring, there are k|V| variables, where each v_i, c represents whether node i has color c and k is the number of colors. Although direct encodings are often the most intuitive, more efficient encodings are known for a wide variety of problems. These encodings often add auxiliary variables to the formula, which capture properties about a group of variables. One of the simplest examples is an AtMostOne(x_1, …, x_n) constraint, which requires that at most one of the variables x_1, …, x_n is true. Without adding auxiliary variables, this constraint requires Θ(n^2) clauses, which are typically binary clauses between every pair of variables <cit.>. However, with the introduction of auxiliary variables, this constraint can be encoded in a linear number of clauses and variables as follows <cit.>: AtMostOne(x_1, …, x_n) = AtMostOne(x_1, x_2, x_3, y) AtMostOne(x_4, …, x_n, y) where the pairwise encoding is used for AtMostOne(x_1, …, x_n) where n < 4. The split AtMostOne constraints require that at most one of {x_1, x_2, x_3} is true, and at most one of {x_4, …, x_n} is true, respectively. The added auxiliary variable y prevents a variable in both of the groups from being true. The auxiliary variable y is forced false if any of x_1, x_2, or x_3 are true, and forced true if any of x_4, …, x_n are false. If a literal from both groups is true, the auxiliary variable y prevents the formula from being satisfiable. Starting from the original formula, the Extended Resolution proof allows only two simple rules: * Resolution: Given clauses C ∨ p and D ∨p, add the clause C ∨ D to the proof. * Extension: Define a new variable x as x ↔a∨b, where a and b are literals in the current proof. Add the clauses x ∨ a, x ∨ b, and x∨a∨b to the proof. In resolution, the clause C ∨ D is implied by the first two clauses, resulting in a logically equivalent formula. In extension, however, the introduction of a new variable x is not implied by the original clauses, and results in a formula that preserves satisfiability and is only logically equivalent over the original variables. Using the extension rule, new variables can be defined in terms of existing variables. The original rule defined by Tseitin <cit.> only allows for definitions of the form x ↔a∨b, the construction for which is given in the definition above. However, the extension rule can be applied repeatedly to construct variables corresponding to arbitrary propositional formulas over the original variables. This flexibility is key to the success of Extended Resolution, but it provides no guidance on how these extensions should be chosen. Bounded Variable Addition (BVA) <cit.> is a pre-processing technique that reduces the number of clauses in a formula by adding new variables. Each application of BVA first identifies a “grid” of clauses, as shown in <ref>. Then, BVA adds a new variable and clauses which resolve together to generate all clauses in the grid. Collectively, for a formula F, the grid constitutes a set of literals and a set of partial clauses , such that ∀ l ∈, ∀ C ∈ : (l ∨ C) ∈ F. 
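To make the grid property just stated concrete, the following sketch counts the clauses removed and added by one replacement. This is illustrative Python, not the implementation of <cit.>; the matched literal set, the matched clause parts, and the polarity convention for the fresh variable x are hypothetical choices made for the example.

from itertools import product

# Hypothetical matched sets: every combination (l OR C) with l in m_lit and
# C in m_cls is assumed to occur in the formula F.
m_lit = ["a", "b", "c"]
m_cls = [("p", "q"), ("r",), ("s", "t")]

grid = [tuple(sorted((l,) + c)) for l, c in product(m_lit, m_cls)]
print(len(grid), "clauses in the grid")          # |m_lit| * |m_cls| = 9

# Replacement: one fresh variable x, clauses (x OR C) and (~x OR l).
# Resolving any (x OR C) with any (~x OR l) on x regenerates (l OR C).
new_clauses = [("x",) + c for c in m_cls] + [("~x", l) for l in m_lit]
print(len(new_clauses), "replacement clauses")   # |m_lit| + |m_cls| = 6

# Net reduction in (clauses + variables), the quantity BVA tries to maximize:
reduction = len(grid) - len(new_clauses) - 1     # -1 for the added variable
print("reduction:", reduction)                   # 9 - 6 - 1 = 2 > 0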
While bounded-variable elimination eliminates variables by replacing all the clauses containing a variable with their resolvents, BVA tries to identify grids of resolvents which can be generated by the introduction of a new variable and a smaller number of clauses. These grids of clauses capture the fact that either the entirety of must be satisfied, or the entirety of must be satisfied. More precisely, F (⋀_l ∈l) (⋀_C ∈C). By identifying these grids, BVA replaces || · || clauses with a single, new variable x and || + || clauses (which can generate the original set by resolution on x in {x C | C ∈}×{x l | l ∈}). Therefore, if || · || > || + || + 1, then this replacement results in a reduction in formula size. Note that a BVA replacement step can be simulated by extended resolution: First, add the definition x ↔AND(). In the example above, this means adding the clauses x ab, x a, and x b. Afterwards, the clauses x p q, x p r, x r s, and x t can each be derived using || resolution steps. For example, to derive x p q, resolve x ab with a p q and the result with b p q. Afterward the clauses used in these resolution steps can be deleted. Manthey et al. <cit.> propose a greedy algorithm to identify these grids of resolvents that prioritizes literals which appear in many clauses called SimpleBoundedVariableAddition. An abbreviated version of a single variable addition in this algorithm is shown in <ref>. Each identified grid starts from the most frequently occurring literal l in the current formula. The grid starts with dimension 1 ×| F_l|, where F_l is the set of clauses containing l. From there, the algorithm searches for a literal l_max to add to the grid, which maximizes the number of remaining resolvents. To identify the literal l_max, the BVA algorithm looks for the literal for which (l_max C) appears in F for the greatest number of clauses C ∈ (line 4). At each step, a literal is added to (line 6), and clauses may be removed from (line 7). The grid will continue to shrink until the addition of a literal to the grid would not increase the size of the formula reduction (line 5), as shown in <ref>. In BVA, variable additions are performed as long as there is a reduction in formula size. The entirety of <ref> is repeated using different literals for l to construct multiple new auxiliary variables. Specifically, the original implementation defines a priority queue of literals ordered by the number of clauses each literal appears in. Our adaptation of BVA (<ref>) reuses this implementation detail. These repeated applications of BVA enable the algorithm to achieve large reductions in formula size, and auxiliary variables added in previous steps can even be re-used in future variable introductions. § MOTIVATING EXAMPLE To motivate the need for a heuristic-guided version of BVA, we will first demonstrate the effect of randomization on existing implementations of BVA, and the disproportionate impact of a few critical variable additions. §.§ Packing Colorings BVA has been shown to be effective on the grid packing k-coloring problems, whose constraints are based on coloring a circular grid of tiles, shown in <ref>. Unlike a standard graph coloring problem, each color in the packing k-coloring problem is associated with a integer distance from 1 to k. When coloring the grid, two tiles can only have the same color if the taxicab distance between them is greater than the color number. For example, two tiles of color 3 must have at least 3 tiles between them, while color 1 tiles cannot be adjacent. 
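As a concrete illustration of this coloring rule, the sketch below builds, for a tiny instance, the direct encoding that is spelled out formally in the next paragraph. It is hypothetical illustration code rather than the generator used to produce the benchmark instances: the grid is taken to be the tiles within taxicab distance r of the centre, and the unit symmetry-breaking clause is omitted.

from itertools import combinations

r, k = 2, 3  # a tiny instance; the paper studies e.g. D_{5,10} and D_{6,11}

# Tiles within taxicab distance r of the centre (the "circular" grid).
tiles = [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1)
         if abs(x) + abs(y) <= r]
# DIMACS-style variable numbering for v_{i,c}: tile i gets color c.
var = {(t, c): i + 1 for i, (t, c) in
       enumerate((t, c) for t in tiles for c in range(1, k + 1))}

clauses = []
# At-Least-One-Color: every tile gets some color.
for t in tiles:
    clauses.append([var[(t, c)] for c in range(1, k + 1)])
# At-Most-One-Distance: tiles at taxicab distance <= c cannot share color c.
for c in range(1, k + 1):
    for (t1, t2) in combinations(tiles, 2):
        if abs(t1[0] - t2[0]) + abs(t1[1] - t2[1]) <= c:
            clauses.append([-var[(t1, c)], -var[(t2, c)]])

print(len(tiles), "tiles,", len(clauses), "clauses")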
The D_r, k problem asks whether the grid of radius r can be colored with k colors. The direct encoding for this problem consists of variables v_i, c, denoting that grid location i has color c. There are three types of clauses <cit.>: * At-Least-One-Color: ∀ i, (v_i,1∨ v_i,2∨…∨ v_i,k). Each tile must be colored with a color between 1 and k. * At-Most-One-Distance: ∀ i, j, c : d(i, j) ≤ c, (v_i,c∨v_j,c). If the distance between two tiles is less than or equal to the color, they cannot both have that color. * Center-Clause: v_(0, 0), c for a fixed color c. This is a symmetry-breaking optimization <cit.>, which has no effect on BVA since it ignores unit clauses. Previous work <cit.> showed that BVA can reduce the size of such formulas by a factor of 4, and induces more than a × 4 speedup on the larger instance (D_6, 11). They found that auxiliary variables capture regions of grid tiles within a particular color, i.e. the grid replacement happens entirely within the binary at-most-one-distance constraints. We visualize the variables introduced by BVA on D_5, 10, the packing k-coloring problem with radius 5 and 10 colors. In the first row of <ref>, each of the four plots introduces a new auxiliary variable x for one of the colors c ∈{1, …, 10} (denoted above each plot). All the binary clauses for color c (At-Most-One-Distance clauses) of the form (v_i,c∨v_j,c) with grid location i corresponding to a green square and grid location j corresponding to a yellow square will be replaced with a smaller number of clauses: (x ∨v_i,c) for each green location i and (x∨v_j,c) for each yellow location j. §.§ Negative Impact of Randomization We discovered that randomizing packing k-coloring formulas prior to running BVA significantly increases the resulting solve time. Furthermore, the variables added by BVA after randomization fail to capture the clustered regions within the problem's 2D space that are identified without randomization. <ref> shows the first four variable additions performed by BVA on D_5, 10. The effect is especially noticeable in the first few variable additions. The structure of these variables is more than a visual artifact. Running BVA to completion produces a formula that requires more than ×2 the solve time in CaDiCaL compared with running BVA on the original formula, despite a similar reduction in formula size (see <ref>). We found that the first variable added by BVA in D_5, 10 had a disproportionate impact on the solve time of the formula. We isolated the effect of a single replacement by allowing BVA to only produce one new auxiliary variable and then evaluating the solve time of the resulting formula. <ref> shows that a single variable addition (outlined in black in <ref>) can achieve a × 6 speedup over the original formula and that the impact of this single addition is also substantially affected by randomization. Although randomization before BVA did not affect the size reduction of the first variable addition, the randomized formula with a single BVA step is 2 times slower compared to the original formula. The importance of individual variable additions and their sensitivity to randomization suggests that BVA's impact is derived not only from the size reduction but from the structure of the variable additions. §.§ Ties in Bounded Variable Addition The reason for the BVA's sensitivity to randomization is due to a detail in the way implementations treats ties between literals. 
As described in <ref>, the algorithm chooses the literal that maximizes the number of remaining resolvents to be eliminated (<ref>, line 4). If there is a tie between two literals, the original algorithm does not specify which literal should be used. The original implementation provided by <cit.> breaks ties using the variable number in the original formula. <ref> shows how breaking ties differently leads to different variable additions. Note that since BVA eliminates the clauses in the grid when adding a variable, it is not possible for multiple applications of BVA to eventually add both variables resulting from a tie. In the D_5, 10 packing problem, colors 9 and 10 are almost fully connected; coloring a tile with color 10 means that no tile within 10 spaces of it can also be colored 10. When BVA creates a variable for these pairwise constraints, all of the clauses are tied for the number of preserved resolvents (since every pair of color-10 variables appears in a at-most-one-distance clause). Since the original implementation used variable number to break ties and ordered variables from top-left to bottom-right, the variable additions it produces follow that structured pattern. However, when the variable order is randomized, the resulting region lacks structure and the formula takes longer to solve. §.§ Recovering Structure After randomization, BVA struggles to introduce variables that represent coherent clusters of tiles. However, we note that the original structure is still captured by the original formula as a whole. For example, in the D_5, 10 packing problem, two variables representing color 1, v_i,1 and v_j,1, only share a pairwise constraint if they are adjacent (i.e. if i and j represent adjacent tiles). If we could recover a generic metric for how close variables are to each other (e.g. in the 2D space of D_5, 10), this metric could be used to help BVA recover structure in problems where the original variable order does not result in structured variable additions. The intuition for our heuristic, which is detailed in <ref>, is based on the structure observed in the packing problem. We notice that while variables in color 10 are indistinguishable after randomization (i.e. all fully connected with At-Most-One-Distance clauses), the variables in color 1 preserve the structure of the original problem: these variables only share At-Most-One-Distance clauses with their immediate neighbors. Additionally, variables for the same tile location but different colors are all linked by an At-Least-One-Color constraint, even after randomization. One could deduce which variables in color 10 are neighbors by looking at the connectivity of the equivalent tile positions in color 1. Specifically this requires 3 “hops” through clauses: starting at a variable v_i,10 in color 10, we find v_i,1 in color 1 (via an At-Least-One-Color clause), then find v_j,1 in color 1 (via an At-Most-One-Distance clause), and finally find v_j,10 in color 10 (via an At-Least-One-Color clause); the full path is v_i,10→ v_i,1→ v_j,1→ v_j,10. While it is possible to construct an algorithm to recover this structure specifically for the k-coloring packing problem, we generalize this concept by counting paths. Specifically, between two variables v_i,10 and v_j,10 in color 10 there are many paths of length 3: for example v_i,10→ v_a,10→ v_b,10→ v_j,10 (using only At-Most-One-Distance clauses). However, only adjacent variables in color 10 will have the additional path that goes through color 1: v_i,10→ v_i,1→ v_j,1→ v_j,10. 
For a given variable in color 10, it will have the most 3-hop paths to variables of the immediately adjacent grid tiles. We formalize this intuition in <ref>. § STRUCTURED REENCODING In this section we define our implementation of a heuristic for breaking ties during variable selection in BVA. While our heuristic was initially designed to mitigate the detrimental effects of randomization on the packing coloring problems, we found that it is also effective for other problems, even ones which have not been randomized. In <ref> we show that our heuristic-guided BVA is effective on a wide variety of problems and offers a significant improvement to solve time for certain families of formulas. Our heuristic is based on the intuition that BVA should prefer to break ties by adding variables that are close to one another. In <ref>, we noticed that in the k-coloring problem, there are some paths between variables that are only present when variables are close in the problem's 2-D space. The variable incidence graph compactly captures this notion of variable adjacency. Here we formally define a heuristic for “variable distance” based on the number of paths between pairs of variables in the variable incidence graph. The Variable Incidence Graph (VIG) of a formula F is an undirected graph G = (V, E) where V is the set of variables in F, and E contains an edge between variables if they appear in a clause together. The weight on an edge (v_1, v_2) is the number of clauses in which v_1 and v_2 appear together: w(v_1, v_2) = | {C ∈ F : {v_1, v_2}⊂Vars(C) }| We measure variable distance by counting the number of distinct paths between two variables (i.e. using different intermediate variables or clauses). Edges in the VIG indicate the number of clauses shared by pairs of variables. For a given sequence of variables (v_1, v_2, ..., v_n) the number of distinct paths through different combinations of clauses is given by w(v_1, v_2) · w(v_2, v_3) ·…· w(v_n-1, v_n). Since edge weights are multiplicative along a path, the number of different paths of length n through the VIG is given by A^n, where A is the adjacency matrix of the VIG. Since we identified that adjacent tiles in the packing problem have more length-3 paths between them, we define a simple heuristic that counts the number of paths of length 3 in the VIG, which we call the 3-hop heuristic. The 3-hop heuristic H(x, y) is defined as the number of distinct paths of length 3 between two variables x and y in the VIG. Two paths are distinct if they travel through a different sequence of variables or clauses. Given the VIG adjacency matrix A, the 3-hop heuristic can be computed as H(x, y) = (A^3)_x,y. We modify <ref> to use our heuristic as a tie-breaker, specifically augmenting the computation of in line 4: when multiple values of l_m have the same number of remaining resolvents, we choose the literal l_m with the highest value of H(l, l_m). Our implementation of BVA, called , is written in C++ and uses the library for sparse matrix operations. It is capable of generating DRAT proofs describing the sequence of variable additions and clause deletions and thus could be used with a solver to generate certificates of unsatisfiability. In <ref>, we show the value of H(x, y) in D_5, 10 for variables representing color 10 between a variable of interest (outlined in black) and all other variables of color 10. Grid tiles that are closer in the 2-D space of the packing k-coloring problem have more 3-hop paths between them and thus have a higher heuristic value. 
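The heuristic is straightforward to compute from the weighted adjacency matrix of the VIG. The sketch below is an illustrative Python version, not the authors' C++ implementation; it assumes clauses are given as lists of DIMACS literals and uses a sparse matrix so that A^3 stays tractable for large formulas.

import numpy as np
from scipy.sparse import lil_matrix

def three_hop(clauses, n_vars):
    # Return H = A^3 for the weighted variable incidence graph of a CNF.
    # Edge weight w(u, v) counts the clauses containing both variables.
    A = lil_matrix((n_vars + 1, n_vars + 1), dtype=np.int64)  # 1-indexed variables
    for clause in clauses:
        vs = sorted({abs(l) for l in clause})
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                A[vs[i], vs[j]] += 1
                A[vs[j], vs[i]] += 1
    A = A.tocsr()
    return A @ A @ A  # H[x, y] = number of distinct length-3 paths between x and y

# Tie-breaking inside the variable-addition loop would then prefer, among
# literals with an equal number of preserved resolvents, the candidate l_m
# maximizing H[var(l), var(l_m)].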
Using our heuristic on a randomized formula for the packing problem, we recover variables that capture the spatial structure of the problem. In the third row of <ref>, we show the first 4 variables added by , which cluster variables together using the notion of distance that is inherent in the original problem. Furthermore, we find that applying this heuristic to the packing problem results in formulas that solve much faster than BVA on a randomized formula (<ref>). § EXPERIMENTAL DETAILS We evaluated BVA on more than 29,000 formulas from the Global Benchmark Database <cit.> in order to study the effects of randomization and our heuristic on BVA. In this section, we discuss the experimental setup and provide a brief overview of the results. In <ref>, we analyze the results in more detail and discuss families of formulas that were significantly impacted by BVA and/or . We constructed three solver configurations that use BVA in different ways. All three variants take a formula, (optionally) randomize it with , run BVA (with or without heuristic), and pass it to CaDiCaL to solve. For comparison, we include a baseline variant that does not run BVA. Since the particular ordering of clauses and variables in a formula can impact solver performance <cit.>, we also use the tool immediately prior to running CaDiCaL in all configurations. To mitigate this variance, we run the entire sequence three times for each configuration, averaging across the three runs. The list of configurations is shown in <ref>. Note that all four configurations have randomization applied prior to solving with CaDiCaL but only and have randomization applied prior to BVA/. We evaluated our variants on 29 402 benchmark instances (downloaded on February 20, 2023) from the Global Benchmark Database (GBD) <cit.>. We also report results against the Anniversary Track from the SAT Competition 2022 <cit.> (labeled as “ANNI-2022” within this paper) which is included as a subset in the GBD (5355 benchmarks). All experiments were performed on the Bridges-2 system at the Pittsburgh Supercomputing Center <cit.> on nodes with 128 cores and 256 GB RAM. We compare the four configurations in a simulated competition setting with a fixed time limit of 5 000 seconds per benchmark. The total time is computed as the sum of BVA and CaDiCaL runtimes (scranfilize time is not counted towards this limit). As noted by Manthey et al. <cit.>, BVA can be quite expensive, even on formulas that do not reduce significantly. We allow all versions of BVA to run for 200 seconds and if it has not terminated by then, we instead run the original formula with CaDiCaL. On our full benchmark, BVA terminates within 200 seconds on approximately 95% of problems. We ran 128 instances in parallel per node, leaving approximately 2GB of memory (for reference, in the SAT Competition 2022 <cit.>, solvers were allotted 128GB) for each BVA/CaDiCaL process. This limit is enough for most formulas, but in cases where BVA runs out of memory, we instead run the original formula in CaDiCaL. In both cases (timeout and out-of-memory), the already-used time is added to the subsequent solve time of the original formula. This setup provides a fair comparison as BVA could be realistically configured this way in a competition. We report the PAR-2 scores and number of formulas solved for each variant in <ref>. The PAR-2 score is computed as the total time it took to solve an instance (BVA runtime + CaDiCaL runtime) or twice the time limit if the formula was not solved within 5 000 seconds. 
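In code, this scoring rule amounts to the following sketch (times in seconds; the handling of the 200-second preprocessing cut-off is assumed to happen upstream, as described above).

TIME_LIMIT = 5000.0  # seconds

def par2(bva_time, solve_time, solved):
    # PAR-2 score of a single run: total time if solved in time, else 2 * limit.
    total = bva_time + solve_time
    return total if solved and total <= TIME_LIMIT else 2 * TIME_LIMIT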
We compute the PAR-2 score individually for each run and average across the three runs of a given formula. A formula is marked as solved in <ref> if any of the three runs solved it within the time limit. Additionally, the set of formulas over which PAR-2 is computed consists of instances where at least one of the four configurations was able to solve it. Instances that were not solved by any configuration were not included in the PAR-2 score. Adding these entirely unsolved instances would not change the number solved and would simply scale the PAR-2 scores equally for all configurations. § RESULTS AND ANALYSIS This section takes a closer look at the performance of , , and in comparison to the configuration. We explore both the effects of randomization and the effects of the heuristic, in general and on specific families of formulas. Specifically, we explore the following questions: * Does compression factor correlate with solve time in the context of BVA? * What is the effect of randomization on the performance of BVA? * Can our heuristic outperform randomized BVA? * How does the performance of our heuristic vary across different families of formulas? We address these questions directly in the following paragraphs: As demonstrated in <ref>, even small reductions from BVA can have a large impact on solve time. For example, on the packing k-coloring problem, a single added variable can reduce solve time by over a factor of 5 if picked correctly (<ref>). We compute the compression factor of a formula as the ratio of the formula size before to the size after running BVA. For example, a factor of 1 indicates no reduction, a factor of 2 indicates the formula was reduced to 50% the original size, and a factor of 10 indicates the formula was reduced to 10% of the original size. Similarly, we compute the speedup as the ratio of solve time to solve time (values below 1 indicate the formula was solved faster). In <ref> we plot the speedup of against the compression factor for every problem in the benchmark. Equivalent figures for and look similar and are available in the appendix (<ref> and <ref>). For formulas that could be greatly reduced, there is an observable trend towards a greater speedup. However, for small reductions, the speedup is much more variable. In some cases, even formulas that are reduced to less than 10% of the original size may be slowed down by BVA. With , 60% of formulas had a compression factor greater than 1, 40% had a factor greater than 2, and 4% had a factor greater than 10. Randomization has a negative effect on the performance of BVA; in all benchmark groups, solved fewer formulas and has a higher PAR-2 score than . Interestingly, this effect appears to be entirely due to the structure of the resulting formula and not the resulting size of the formula. In <ref>, we plot the relative solve times of formulas from the ANNI-2022 benchmark for and . While there is almost no difference in the reduction sizes of the formulas produced by and (formula sizes differ by less than 1.5%), a number of formulas were substantially slowed down (<ref>). Note that in these plots, randomization prior to BVA is more detrimental for UNSAT formulas and introduces a lot of variance to SAT formulas. While randomization has a negative effect on the original implementation of BVA, we observe that our heuristic-guided BVA is robust to this effect. 
Despite being provided with randomized formulas, it is able to generate high quality variable additions and recover all of the performance loss of , even surpassing in many cases on number of problems solved and PAR-2 score. We believe the slight performance improvement over in several cases is due to the presence of “pre-randomized” formulas in the benchmark; in these cases already suffers the effects of randomization while is able to recover the original structure of the problem. In <ref>, we compare the relative solve times of formulas from the ANNI-2022 benchmark for and . As in the previous section, the formula sizes between the two variants differs by less than 1.5% on average. However, is able to speed up many formulas, especially UNSAT instances. We found that both the original implementation of BVA and our heuristic-guided version have strong effects for specific families of formulas. In <ref>, we plot the relative performance of the four configurations on 10 formula families for which BVA was effective. For these plots we allow BVA/to run for the full 5 000 seconds and consider only the CaDiCaL solve time in the plots in order to understand the effectiveness of the formula rather than the speed of BVA. In this section, we briefly describe some of the families where BVA was most effective. Pigeonhole formulas try to uniquely assign n pigeons to m holes. Like the packing k-coloring problem, these formulas consist primarily of AtLeastOne constraints (a pigeon must be in at least one hole) and pairwise AtMostOne constraints (two pigeons cannot share a hole). Our benchmark also contains variants of this problem, e.g. allowing multiple pigeons in a hole. These formulas are difficult for SAT solvers due to the number of possible permutations. We found that was quite effective for UNSAT instances of pigeonhole problems (note that SAT instances of pigeonhole problems are trivial), able to solve new instances that the other three configurations could not solve. Interestingly, we found that these newly solved problems consist mainly of pre-shuffled pigeonhole problems. A full list of solved UNSAT pigeonhole problems is provided in <ref>. Other pigeonhole-like families in the dataset include PHNF (Pigeonhole Normal Form) <cit.> and FPGA-Routing <cit.>, which consists of problems generated by combining two pigeonhole problems. All forms of BVA were very effective on these problems compared to . Petri nets are a model of concurrent computation that consists of places and transitions <cit.>. They are used to model a variety of systems, including chemical reactions, manufacturing processes, and computer programs. The Petri Net Concurrency family consists of formulas that encode the satisfiability of Petri nets. All three configurations of BVA are able to generate very compact encodings for these formulas, with an average compression factor of more than 20. The bioinformatics family consists of problems that encode genetic evolutionary tree computations into SAT <cit.>. As noted by the authors of the original BVA paper, these problems are also reduced significantly with BVA. For the problems in this family, we found that the average compression factor was more than 7 for all three BVA configurations, i.e. the formulas were reduced to less than 15% total size on average. We found BVA to be useful in several families of formulas derived from 2-D games. 
The puzzle family consists of formulas that encode the satisfiability of a sliding-block puzzle and were contributed by van der Grinten to SAT Comp 2017. The rooks family asks if it is possible to place N+1 rooks on a N × N chessboard such that no two rooks can attack each other <cit.>. The battleship family consists of problems that are derived from the battleship guessing game and were contributed by Skvortsov to SAT Comp 2011. BVA was effective in all three families and was especially effective for the puzzle and battleship families. The antibandwidth <cit.> and spectrum repacking <cit.> formulas are both related to assigning radio stations to channels. Specifically, the antibandwidth family asks if it possible to assign a given set of stations to a given set of channels such that the difference in channel between any two stations is at least k. Similarly, the spectrum repacking family asks if it is possible to reassign a given set of stations into a smaller set of channels, taking into account physical distances between stations and the bandwidth of each channel. All configurations of BVA were effective on these problems. § CONCLUSION Bounded Variable Addition is surprisingly effective at reducing the size of formulas and improving solve time by introducing auxiliary variables. We discovered that this speedup is caused not only by the reduction in formula size but also the introduction of certain effective auxiliary variables. We found that the original implementation was sensitive to randomization and proposed a new heuristic-guided implementation, , that is robust to this effect. In a competition-style benchmark, we show that using SBVA resulted in the most formulas solved in every category, outperforming both BVA and the baseline (no preprocessor). Additionally, SBVA was extremely effective on certain families of formulas, demonstrating that auxiliary variables can be useful in practice if they are chosen carefully. APPENDIX § APPENDIX §.§ Reduction Size vs. Solve Time §.§ Performance on Pigeonhole Problems §.§ Performance on Bioinformatics Problems §.§ Performance per Family Legend: * : blue * : orange * : green * : red
http://arxiv.org/abs/2307.02991v1
20230706134429
ContainerGym: A Real-World Reinforcement Learning Benchmark for Resource Allocation
[ "Abhijeet Pendyala", "Justin Dettmer", "Tobias Glasmachers", "Asma Atamna" ]
cs.LG
[ "cs.LG" ]
1]Abhijeet Pendyala 1]Justin Dettmer 1]Tobias Glasmachers 1]Asma Atamna (envelope) [1]Ruhr-University Bochum, Bochum, Germany firstname.lastname@ini.rub.de : A Real-World Reinforcement Learning Benchmark for Resource Allocation [ ======================================================================= We present , a benchmark for reinforcement learning inspired by a real-world industrial resource allocation task. The proposed benchmark encodes a range of challenges commonly encountered in real-world sequential decision making problems, such as uncertainty. It can be configured to instantiate problems of varying degrees of difficulty, e.g., in terms of variable dimensionality. Our benchmark differs from other reinforcement learning benchmarks, including the ones aiming to encode real-world difficulties, in that it is directly derived from a real-world industrial problem, which underwent minimal simplification and streamlining. It is sufficiently versatile to evaluate reinforcement learning algorithms on any real-world problem that fits our resource allocation framework. We provide results of standard baseline methods. Going beyond the usual training reward curves, our results and the statistical tools used to interpret them allow to highlight interesting limitations of well-known deep reinforcement learning algorithms, namely PPO, TRPO and DQN. § INTRODUCTION Supervised learning has long made its way into many industries, but industrial applications of (deep) reinforcement learning (RL) are significantly rare. This may be for many reasons, like the focus on impactful RL success stories in the area of games, a lower degree of technology readiness, and a lack of industrial RL benchmark problems. A RL agent learns by taking sequential actions in its environment, observing the state of the environment, and receiving rewards <cit.>. Reinforcement learning aims to fulfill the enticing promise of training a smart agent that solves a complex task through trial-and-error interactions with the environment, without specifying how the goal will be achieved. Great strides have been made in this direction, also in the real world, with notable applications in domains like robotics <cit.>, autonomous driving <cit.>, and control problems such as optimizing the power efficiency of data centers <cit.>, control of nuclear fusion reactors <cit.>, and optimizing gas turbines <cit.>. Yet, the accelerated progress in these areas has been fueled by making agents play in virtual gaming environments such as Atari 2600 games <cit.>, the game of GO <cit.>, and complex video games like Starcraft II <cit.>. These games provided sufficiently challenging environments to quickly test new algorithms and ideas, and gave rise to a suite of RL benchmark environments. The use of such environments to benchmark agents for industrial deployment comes with certain drawbacks. For instance, the environments either may not be challenging enough for the state-of-the-art algorithms (low dimensionality of state and action spaces) or require significant computational resources to solve. Industrial problems deviate widely from games in many further properties. Primarily, exploration in real-world systems often has strong safety constraints, and constitutes a balancing act between maximizing reward with good actions which are often sparse, and minimizing potentially severe consequences from bad actions. 
This is in contrast to training on a gaming environment, where the impact of a single action is often smaller, and the repercussions of poor decisions accumulate slowly over time. In addition, underlying dynamics of gaming environments—several of which are near-deterministic—may not reflect the stochasticity of a real industrial system. Finally, the available environments may have a tedious setup procedure with restrictive licensing and dependencies on closed-source binaries. To address these issues, we present , an open-source real-world benchmark environment for RL algorithms. It is adapted from a digital twin of a high throughput processing industry. Our concrete use-case comes from a waste sorting application. Our benchmark focuses on two phenomena of general interest: First, a stochastic model for a resource-filling process, where certain material is being accumulated in multiple storage containers. Second, a model for a resource transforming system, which takes in the material from these containers, and transforms it for further post-processing downstream. The processing units are a scarce resource, since they are large and expensive, and due to limited options of conveyor belt layout, only a small number can be used per plant. is not intended to be a perfect replica of a real system but serves the same hardness and complexity. The search for an optimal sequence of actions in is akin to solving a dynamic resource allocation problem for the resource-transforming system, while also learning an optimal control behavior of the resource-filling process. In addition, the complexity of the environment is customizable. This allows testing the limitations of any given learning algorithm. This work aims to enable RL practitioners to quickly and reliably test their learning agents on an environment encoding real-world dynamics. The paper is arranged as follows. Section <ref> discusses the relevant literature and motivates our contribution. Section <ref> gives a brief introduction to reinforcement learning preliminaries. In Section <ref>, we present the real-world industrial control task that inspired and discuss the challenges it presents. In Section <ref>, we formulate the real-world problem as a RL one and discuss the design choices that lead to our digital twin. We briefly present 's implementation in Section <ref>. We present and discuss benchmark experiments of baseline methods in Section <ref>, and close with our conclusions in Section <ref>. § RELATED WORK The majority of the existing open-source environments on which novel reinforcement learning algorithms could be tuned can be broadly divided into the following categories: toy control, robotics (MuJoCo) <cit.>, video games (Atari) <cit.>, and autonomous driving. The underlying dynamics of these environments are artificial and may not truly reflect real-world dynamic conditions like high dimensional states and action spaces, and stochastic dynamics. To the best of our knowledge, there exist very few such open-source benchmarks for industrial applications. To accelerate the deployment of agents in the industry, there is a need for a suite of RL benchmarks inspired by real-world industrial control problems, thereby making our benchmark environment, , a valuable addition to it. The classic control environments like mountain car, pendulum, or toy physics control environments based on Box2D are stochastic only in terms of their initial state. 
They have low dimensional state and action spaces, and they are considered easy to solve with standard methods. Also, the 50 commonly used Atari games in the Arcade Learning Environment <cit.>, where nonlinear control policies need to be learned, are routinely solved to a super-human level by well-established algorithms. This environment, although posing high dimensionality, is deterministic. The real world is not deterministic and there is a need to tune algorithms that can cope with stochasticity. Although techniques like sticky actions or skipping a random number of initial frames have been developed to add artificial randomness, this randomness may still be very structured and not challenging enough. On the other hand, video game simulators like Starcraft II <cit.> offer high-dimensional image observations, partial observability, and (slight) stochasticity. However, playing around with DeepRL agents on such environments requires substantial computational resources. It might very well be overkill to tune reinforcement learning agents in these environments when the real goal is to excel in industrial applications. Advancements in the RL world, in games like Go and Chess, were achieved by exploiting the rules of these games and embedding them into a stochastic planner. In real-world environments, this is seldom achievable, as these systems are highly complex to model in their entirety. Such systems are modeled as partially observable Markov decision processes and present a tough challenge for learning agents that can explore only through interactions. Lastly, some of the more sophisticated RL environments available, e.g., advanced physics simulators like MuJoCo <cit.>, offer licenses with restrictive terms of use. Also, environments like Starcraft II require access to a closed-source binary. Open source licensing in an environment is highly desirable for RL practitioners as it enables them to debug the code, extend the functionality and test new research ideas. Other related works, amongst the very few available open-source reinforcement learning environments for industrial problems, are Real-world RL suite <cit.> and Industrial benchmark (IB) <cit.> environments. Real-world RL-suite is not derived from a real-world scenario, rather the existing toy problems are perturbed to mimic the conditions in a real-world problem. The IB comes close in spirit to our work, although it lacks the customizability of and our expanded tools for agent behavior explainability. Additionally, the (continuous) action and state spaces are relatively low dimensional and of fixed sizes. § REINFORCEMENT LEARNING PRELIMINARIES RL problems are typically studied as discrete-time Markov decision processes (MDPs), where a MDP can be defined as a tuple ⟨𝒮, 𝒜, p, r, γ⟩. At timestep t, the agent is in a state s_t ∈𝒮 and takes an action a_t ∈𝒜. It arrives in a new state s_t + 1 with probability p(s_t + 1| s_t, a_t) and receives a reward r(s_t, a_t, s_t + 1). The state transitions of a MDP satisfy the Markov property p(s_t + 1| s_t, a_t, …, s_0, a_0) = p(s_t + 1| s_t, a_t). That is, the new state s_t + 1 only depends on the current state s_t and action a_t. The goal of the RL agent interacting with the MDP is to find a policy π: 𝒮→𝒜 that maximizes the expected (discounted) cumulative reward. This optimization problem is defined formally as follows: max_π𝔼_τ∼π[∑_t ≥ 0γ^t r(s_t, a_t, s_t + 1)] , where τ = (s_0, a_0, r_0, s_1, …) is a trajectory generated by following π and γ∈ (0, 1] is a discount factor. 
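As a minimal illustration of this objective (a sketch added here, not part of the benchmark code; the reward values are arbitrary), the discounted return of a single finite trajectory can be computed as follows.

def discounted_return(rewards, gamma=0.99):
    # Cumulative discounted reward of one trajectory, accumulated backwards.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([0.0, 0.0, 1.0, -0.1]))  # arbitrary example rewards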
A trajectory τ ends either when a maximum timestep count T—also called episode[We use the terms “episode” and “rollout” interchangeably in this paper.] length—is reached, or when a terminal state is reached (early termination). § CONTAINER MANAGEMENT ENVIRONMENT In this section, we describe the real-world industrial control task that inspired our RL benchmark. It originates from the final stage of a waste sorting process. The environment consists of a solid material-transforming facility that hosts a set of containers and a significantly smaller set of processing units (PUs). Containers are continuously filled with material, where the material flow rate is a container-dependent stochastic process. They must be emptied regularly so that their content can be transformed by the PUs. When a container is emptied, its content is transported on a conveyor belt to a free PU that transforms it into products. It is not possible to extract material from a container without emptying it completely. The number of produced products depends on the volume of the material being processed. Ideally, this volume should be an integer multiple of the product's size. Otherwise, the surplus volume that cannot be transformed into a product is redirected to the corresponding container again via an energetically costly process that we do not consider in this work. Each container has at least one optimal emptying volume: a global optimum and possibly other, smaller, local optima that are all multiples of container-specific product size. Generally speaking, larger volumes are better, since PUs are more efficient with producing many products in a series. The worst-case scenario is a container overflow. In the waste sorting application inspiring our benchmark, it incurs a high recovery cost including human intervention to stop and restart the facility. This situation is undesirable and should be actively avoided. Therefore, letting containers come close to their capacity limit is rather risky. The quality of an emptying decision is a compromise between the number of potential products and the costs resulting from actuating a PU and handling surplus volume. Therefore, the closer an emptying volume is to an optimum, the better. If a container is emptied too far away from any optimal volume, the costs outweigh the benefit of producing products. This setup can be framed more broadly as a resource allocation problem, where one item of a scarce resource, namely the PUs, needs to be allocated whenever an emptying decision is made. If no PU is available, the container cannot be emptied and will continue to fill up. There are multiple aspects that make this problem challenging: * The rate at which the material arrives at the containers is stochastic. Indeed, although the volumes follow a globally linear trend—as discussed in details in Section <ref>, the measurements can be very noisy. The influx of material is variable, and there is added noise from the sensor readings inside the containers. This makes applying standard planning approaches difficult. * The scarcity of the PUs implies that always waiting for containers to fill up to their ideal emptying volume is risky: if no PU is available at that time, then we risk an overflow. This challenge becomes more prominent when the number of containers—in particular, the ratio between the number of containers and the number of PUs—increases. Therefore, an optimal policy needs to take fill states and fill rates of all containers into account, and possibly empty some containers early. 
* Emptying decisions can be taken at any time, but in a close-to-optimal policy, they are rather infrequent. Also, the rate at which containers should be emptied varies between containers. Therefore, the distributions of actions are highly asymmetric, with important actions (and corresponding rewards) being rare. § REINFORCEMENT LEARNING PROBLEM FORMULATION We formulate the container management problem presented in Section <ref> as a MDP that can be addressed by RL approaches. The resulting MDP is an accurate representation of the original problem, as only mild simplifications are made. Specifically, we model one category of PUs instead of two, we do not include inactivity periods of the facility in our environment, and we neglect the durations of processes of minor relevance. All parameters described in the rest of this section are estimated from real-world data (system identification). Overall, the MDP reflects the challenges properly without complicating the benchmark (and the code base) with irrelevant details. §.§ State and Action Spaces §.§.§ State space The state s_t of the system at any given time t is defined by the volumes v_i, t of material contained in each container i, and a timer p_j, t for each PU j indicating in how many seconds the PU will be ready to use. A value of zero means that the PU is available, while a positive value means that it is currently busy. Therefore, s_t = ({v_i, t}_i = 1^n, {p_j, t}_j = 1^m), where n and m are the number of containers and PUs, respectively. Valid initial states include non-negative volumes not greater than the maximum container capacity v_max, and non-negative timer values. §.§.§ Action space Possible actions at a given time t are either (i) not to empty any container, i.e. to do nothing, or (ii) to empty a container i and transform its content using one of the PUs. The action of doing nothing is encoded with 0, whereas the action of emptying a container i and transforming its content is encoded with the container's index i. Therefore, a_t ∈{0, 1, …, n}, where n is the number of containers. Since we consider identical PUs, specifying which PU is used in the action encoding is not necessary as an emptying action will result in the same state for all the PUs, given they were at the same previous state. §.§ Environment Dynamics In this section, we define the dynamics of the volume of material in the containers, the PU model, as well as the state update. §.§.§ Volume dynamics The volume of material in each container increases following an irregular trend, growing linearly on average, captured by a random walk model with drift. That is, given the current volume v_i, t contained in i, the volume at time t + 1 is given by the function f_i defined as f_i(v_i, t) = max(0, α_i + v_i, t + ϵ_i, t) , where α_i > 0 is the slope of the linear upward trend followed by the volume for i and the noise ϵ_i, t is sampled from a normal distribution 𝒩(0, σ_i^2) with mean 0 and variance σ_i^2. The max operator forces the volume to non-negative values. When a container i is emptied, its volume drops to 0 at the next timestep. Although the volume drops progressively to 0 in the real facility, empirical evidence provided by our data shows that emptying durations are within the range of the time-step lengths considered in this paper (60 and 120 seconds). §.§.§ Processing unit dynamics The time (in seconds) needed by a PU to transform a volume v of material in a container i is linear in the number of products ⌊ v / b_i ⌋ that can be produced from v. 
It is given by the function g_i defined in Equation (<ref>), where b_i > 0 is the product size, β_i > 0 is the time it takes to actuate the PU before products can be produced, and λ_i > 0 is the time per product. Note that all the parameters indexed with i are container-dependent. g_i(v) = β_i + λ_i ⌊ v / b_i ⌋ . A PU can only produce one type of product at a time. Therefore, it can be used for emptying a container only when it is free. Therefore, if all PUs are busy, the container trying to use one is not emptied and continues to fill up. §.§.§ State update We distinguish between the following cases to define the new state s_t + 1 = ({v_i, t + 1}_i = 1^n, {p_j, t + 1}_j = 1^m) given the current state s_t and action a_t. a_t = 0. This corresponds to the action of doing nothing. The material volumes inside the containers increase while the timers indicating the availability of the PUs are decreased according to: v_i, t + 1 = f_i(v_i, t), i ∈{1, …, n} , p_j, t + 1 = max(0, p_j, t - ), j ∈{1, …, m} , where f_i is the random walk model defined in Equation (<ref>) and is the length of a timestep in seconds. a_t ≠ 0. This corresponds to an emptying action. If no PU is available, that is, p_j, t > 0 ∀ j = 1, …, m, the updates are identical to the one defined in Equations (<ref>) and (<ref>). If at least one PU k is available, the new state variables are defined as follows: v_a_t, t + 1 = 0 , v_i, t + 1 = f_i(v_i, t), i ∈{1, …, n}\{a_t} , p_k, t + 1 = g_a_t (v_a_t, t) , p_j, t + 1 = max(0, p_j, t - ), j ∈{1, …, m}\{k} , where the value of the action a_t is the index of the container a_t to empty, `\' denotes the set difference operator, f_i is the random walk model defined in Equation (<ref>), and g_a_t is the processing time defined in Equation (<ref>). Although the processes can continue indefinitely, we stop an episode after a maximum length of T timesteps. This is done to make compatible with RL algorithms designed for episodic tasks, and hence to maximize its utility. When a container reaches its maximum volume v_max, however, the episode is terminated and a negative reward is returned (see details in Section <ref>). §.§ Reward Function We use a deterministic reward function r(s_t, a_t, s_t + 1) where higher values correspond to better (s_t, a_t) pairs. The new state s_t + 1 is taken into account to return a large negative reward r_min when a container overflows, i.e. ∃ i ∈{1, …, n}, v_i, t + 1≥ v_max, before ending the episode. In all other cases, the immediate reward is determined only by the current state s_t and the action a_t. a_t = 0. If the action is to do nothing, we define r(s_t, 0, s_t + 1) = 0. a_t ≠ 0. If an emptying action is selected while (i) no PU is available or (ii) the selected container is already empty, i.e. v_a_t, t = 0, then r(s_t, a_t, s_t + 1) =, where it holds < < 0 for the penalty. If, on the other hand, at least one PU is available and the volume to process is non-zero (v_a_t, t > 0), the reward is a finite sum of Gaussian functions centered around optimal emptying volumes v_a_t, i^*, i = 1, …, p_a_t, where the height of a peak 0 < h_a_t, i≤ 1 is proportional to the quality of the corresponding optimum. The reward function in this case is defined as follows: r(s_t, a_t, s_t + 1) = r_pen + ∑_i = 1^p_a_t (h_a_t, i - r_pen) exp(- (v_a_t, t - v_a_t, i^*)^2/2 w_a_t, i^2) , where p_a_t is the number of optima for container a_t and w_a_t, i > 0 is the width of the bell around v_a_t, i^*. 
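The dynamics and reward defined in this section fit in a few lines of code. The following sketch is a simplified single-container illustration only; every parameter value below is invented for the example and does not correspond to the calibrated parameters of the real facility.

import numpy as np

rng = np.random.default_rng(42)

# Invented parameters for one container and one PU.
alpha, sigma = 0.02, 0.01          # drift and noise of the volume random walk f_i
beta, lam, b = 300.0, 20.0, 5.0    # PU actuation time, time per product, product size
v_opt, heights, widths = [25.0, 15.0], [1.0, 0.6], [1.5, 1.5]  # reward peaks
r_pen, r_min, v_max = -0.1, -1.0, 40.0

def fill_step(v):
    # Random-walk volume update f_i
    return max(0.0, alpha + v + rng.normal(0.0, sigma))

def processing_time(v):
    # Busy time g_i of a PU that transforms volume v
    return beta + lam * np.floor(v / b)

def emptying_reward(v):
    # Sum-of-Gaussians reward for emptying a container at volume v
    return r_pen + sum((h - r_pen) * np.exp(-(v - vo) ** 2 / (2 * w ** 2))
                       for vo, h, w in zip(v_opt, heights, widths))

v, t = 10.0, 0
while v < 24.0 and v < v_max:      # naive policy: wait until close to the best peak
    v, t = fill_step(v), t + 1
print(t, round(v, 2), round(emptying_reward(v), 3), processing_time(v))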
The Gaussian reward defined in Equation (<ref>) takes its values in ]r_pen, 1], the maximum value 1 being achieved at the ideal emptying volume v_a_t, 1^* for which h_a_t, 1 = 1. The coefficients h_a, i are designed so that processing large volumes at a time is beneficial, hence encoding a conflict between emptying containers early and risking overflow. This tension, together with the limited availability of the PUs, yields a highly non-trivial control task. Figure <ref> shows an example of the Gaussian reward obtained when emptying a container with three optimal volumes at different volumes in [0, 40[. § USAGE GUIDE In this section, we introduce the OpenAI Gym implementation[The software is available on the following GitHub repository: <https://github.com/Pendu/ContainerGym>.] of our benchmark environment. We present the customizable parameters of the benchmark, outline how the Python implementation reflects the theoretical definition in Section <ref>, and show which tools we provide to extend the understanding of an agent's behavior beyond the achieved rewards. Gym implementation. Our environment follows the OpenAI Gym <cit.> framework. It implements a step()-function computing a state update. The reset()-function resets time to t=0 and returns the environment to a valid initial state, and the render()-function displays a live diagram of the environment. We deliver functionality to configure the environment's complexity and difficulty through its parameters, such as the number of containers and PUs, as well as the composition of the reward function, the overflow penalty r_min, the sub-optimal emptying penalty r_pen, the length of a timestep δ, and the length of an episode T. We provide example configurations in our GitHub repository that are close in nature to the real industrial facility. In the spirit of open and reproducible research, we include scripts and model files for reproducing the experiments presented in the next section. Action explainability. While most RL algorithms treat the environment as a black box, facility operators want to “understand” a policy before deploying it for production. To this end, the accumulated reward does not provide sufficient information, since it treats all mistakes uniformly. Practitioners need to understand which types of mistakes a sub-optimal policy makes. For example, a low reward can be obtained by systematically emptying containers too early (local optimum) or by emptying at non-integer multiples of the product size. When a basic emptying strategy for each container is in place, low rewards typically result from too many containers reaching their ideal volume at the same time, so that PUs are overloaded. To make the different types of issues of a policy transparent, we plot all individual container volumes over an entire episode. We further provide tools to create empirical cumulative distribution function plots over the volumes at which containers were emptied over multiple episodes. The latter plots in particular provide insights into the target volumes an agent aims to hit, and whether it does so reliably. § PERFORMANCE OF BASELINE METHODS We illustrate the use of ContainerGym by benchmarking three popular deep RL algorithms: two on-policy methods, namely Proximal Policy Optimization (PPO) <cit.> and Trust Region Policy Optimization (TRPO) <cit.>, and one off-policy method, namely Deep Q-Network (DQN) <cit.>. We also compare the performance of these RL approaches against a naive rule-based controller. By doing so, we establish an initial baseline on ContainerGym.
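To give a feel for how such an environment is typically exercised, the sketch below wires a heavily simplified stand-in environment to Stable-Baselines3 PPO, anticipating the setup described next. The environment class is illustrative only: it assumes the older Gym API (4-tuple step), always-free PUs, and a single reward peak per container. The actual ContainerGym entry point, its configuration files, and its observation layout are defined in the repository, and gymnasium-based releases of Stable-Baselines3 use slightly different reset/step signatures.

```python
import numpy as np
import gym
from gym import spaces
from stable_baselines3 import PPO

class MiniContainerEnv(gym.Env):
    """Heavily simplified stand-in for the benchmark: n containers, always-free PUs."""

    def __init__(self, n=5, alpha=0.2, sigma=0.05, v_max=40.0, horizon=600):
        super().__init__()
        self.n, self.alpha, self.sigma = n, alpha, sigma
        self.v_max, self.horizon = v_max, horizon
        self.action_space = spaces.Discrete(n + 1)          # 0 = do nothing, i = empty container i
        self.observation_space = spaces.Box(0.0, v_max, shape=(n,), dtype=np.float32)
        self.rng = np.random.default_rng()

    def reset(self):
        self.t = 0
        self.v = self.rng.uniform(0.0, 30.0, self.n)        # initial volumes in [0, 30]
        return self.v.astype(np.float32)

    def step(self, action):
        reward = 0.0
        if action > 0:                                       # emptying action
            v = self.v[action - 1]
            # single-peak surrogate of the sum-of-Gaussians reward (ideal volume 35, r_pen = -0.1)
            reward = -0.1 + 1.1 * np.exp(-(v - 35.0) ** 2 / 8.0) if v > 0 else -0.1
            self.v[action - 1] = 0.0
        self.v = np.maximum(0.0, self.v + self.alpha + self.rng.normal(0.0, self.sigma, self.n))
        self.t += 1
        if np.any(self.v >= self.v_max):                     # overflow: r_min and episode end
            return self.v.astype(np.float32), -1.0, True, {}
        return self.v.astype(np.float32), float(reward), self.t >= self.horizon, {}

model = PPO("MlpPolicy", MiniContainerEnv(), n_steps=6144, seed=1, verbose=0)
model.learn(total_timesteps=10_000)                          # tiny budget, for illustration only
```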
We use the Stable Baselines3 implementation of PPO, TRPO, and DQN <cit.>. §.§ Experimental Setup The algorithms are trained on 8 instances, detailed in Table <ref>, where the varied parameters are the number of containers n, the number of PUs m, and the timestep length δ.[Increasing the timestep length should be done carefully. Otherwise, the problem could become trivial. In our case, we choose δ such that it is smaller than the minimum time it takes a PU to process the volume equivalent to one product.] The rationale behind the chosen configurations is to assess (i) how the algorithms scale with the environment dimensionality, (ii) how action frequency affects the trained policies and (iii) whether the optimal policy can be found in the conceptually trivial case m = n, where there is always a free PU when needed. This is only a control experiment for testing RL performance. In practice, PUs are a scarce resource (m ≪ n). The maximum episode length T is set to 1500 timesteps during training in all experiments, whereas the initial volumes v_i, 0 are uniformly sampled in [0, 30], and the maximum capacity is set to v_max = 40 volume units for all containers. The initial PU states are set to free (p_j, 0 = 0) and the minimum and penalty rewards are set to r_min = -1 and r_pen = -0.1 respectively. The algorithms are trained with an equal budget of 2 (resp. 5) million timesteps when n = 5 (resp. n = 11) and the number of steps used for each policy update for PPO and TRPO is 6144. Default values are used for the remaining hyperparameters, as the aim of our work is to show characteristics of the training algorithms with a reasonable set of parameters. For each algorithm, 15 policies are trained in parallel, each with a different seed ∈{1, …, 15}, and the policy with the best training cumulative reward is returned for each run. To make comparison easier, policies are evaluated on a similar test environment with δ = 120 and T = 600. §.§ Results and Discussion Table <ref> shows the average test cumulative reward, along with its standard deviation, for the best and median policies trained with PPO, TRPO, and DQN on each environment configuration. These statistics are calculated by running each policy 15 times on the corresponding test environment. PPO achieves the highest cumulative reward on environments with 5 containers. DQN, on the other hand, shows the best performance on higher dimensionality environments (11 containers), with the exception of the last configuration. It has, however, a particularly high variance. Its median performance is also significantly lower than the best one, suggesting less stability than PPO. DQN's particularly high rewards when n = 11 could be due to a better exploration of the search space. An exception is the last configuration, where DQN's performance is significantly below that of TRPO and PPO. Overall, PPO has smaller standard deviations and a smaller difference between best and median performances, suggesting higher stability than TRPO and DQN. Our results also show that taking actions at a lower frequency (δ = 120) leads to better policies. Due to space limitations, we focus on analyzing this effect on configurations with n = 5 and m = 2. Figure <ref> displays the volumes, actions, and rewards over one episode for PPO and DQN when the (best) policy is trained with δ = 60 (left column) and with δ = 120. We observe that with δ = 60, PPO tends to empty containers C1-60 and C1-70 prematurely, which leads to poor rewards. These two containers have the slowest fill rate.
Increasing the timestep length, however, alleviates this defect. The opposite effect is observed on C1-60 with DQN, whereas no significant container-specific behavior change is observed for TRPO, as evidenced by the cumulative rewards. To further explain these results, we investigate the empirical cumulative distribution functions (ECDFs) of the emptying volumes per container. Figure <ref> reveals that more than 90% of PPO's emptying volumes on C1-60 (resp. C1-70) are approximately within 3 (resp. 5) volume units of the third best optimum when δ = 60. By increasing the timestep length, the emptying volumes become more centered around the global optimum. While no such clear pattern is observed with DQN on containers with slow fill rates, the emptying volumes on C1-60 move away from the global optimum when the timestep length is increased (more than 50% of the volumes are within 3 volume units of the second best optimum). DQN's performance increases when δ = 120. This is explained by the better emptying volumes achieved on C1-20, C1-70, and C1-80. None of the benchmarked algorithms manages to learn the optimal policy when there are as many PUs as containers, independently of the environment dimensionality and the timestep length. When m = n, the optimal policy is known and consists in emptying each container when the corresponding global optimal volume is reached, as there is always at least one free PU. Therefore, achieving the maximum reward at each emptying action is theoretically possible. Figure <ref> shows the ECDF of the reward per emptying action over 15 episodes of the best policy for each of PPO, TRPO and DQN when m = n = 11. The rewards range from r_min = -1 to 1, and the ratio of negative rewards is particularly high for PPO. An analysis of the ECDFs of its emptying volumes (not shown) reveals that this is due to containers with slow fill rates being emptied prematurely. TRPO yields the fewest negative rewards, whereas all of DQN's rollouts end prematurely due to a container overflowing. Rewards close to r_pen are explained by poor emptying volumes, as well as a bad allocation of the PUs (r = r_pen). These findings suggest that, when m = n, it is not trivial for the tested RL algorithms to break down the problem into smaller, independent problems, where one container is emptied using one PU when the ideal volume is reached. The limitations of these RL algorithms are further highlighted when compared to a naive rule-based controller that empties the first container whose volume is less than 1 volume unit away from the ideal emptying volume. Figure <ref> shows the ECDF of reward per emptying action obtained from 15 rollouts of the rule-based controller on three environment configurations, namely n = 5 and m = 2, n = 11 and m = 2, and n = 11 and m = 11. When compared to PPO in particular (m = n = 11), the rule-based controller empties containers less often (17.40% of the actions vs. more than 30% for PPO). Positive rewards are close to optimal (approx. 90% in [0.75, 1]), whereas very few negative rewards are observed. These stem from emptying actions taken when no PU is available. These findings suggest that learning to wait for long periods before performing an important action may be challenging for some RL algorithms. Critically, current baseline algorithms only learn reasonable behavior for containers operated in isolation.
In the usual m ≪ n case, none of the policies anticipates all PUs being busy when emptying containers at their optimal volume, which they should ideally foresee and prevent proactively by emptying some of the containers earlier. Hence, there is considerable room for future improvement by learning stochastic long-term dependencies. This is apparently a difficult task for state-of-the-art RL algorithms. We anticipate that ContainerGym can contribute to research on next-generation RL methods addressing this challenge. § CONCLUSION We have presented ContainerGym, a real-world industrial RL environment. Its dynamics are quite basic, and therefore easily accessible. It is easily scalable in complexity and difficulty. Its characteristics are quite different from many standard RL benchmark problems. At its core, it is a resource allocation problem. It features stochasticity, learning agents can get trapped in local optima, and in a good policy, the most important actions occur only rarely. Furthermore, implementing optimal behavior requires planning ahead under uncertainty. The most important property of ContainerGym is its direct industrial relevance. At the same time, it is a difficult problem with the potential to trigger novel developments in the future. While surely being less challenging than playing the games of Go or StarCraft II at a super-human level, ContainerGym is still sufficiently hard to make state-of-the-art baseline algorithms perform poorly. We are looking forward to improvements in the future. To fulfill the common wish of industrial stakeholders for explainable ML solutions, we provide insights into agent behavior and deviations from optimal behavior that go beyond learning curves. While accumulated reward hides the details, our environment provides insights into different types of failures and hence gives guidance for routes to further improvement. Acknowledgements. This work was funded by the German Federal Ministry of Economic Affairs and Climate Action through the “ecoKI” grant.
http://arxiv.org/abs/2307.02417v2
20230705164023
3D Multi-Robot Exploration with a Two-Level Coordination Strategy and Prioritization
[ "Luigi Freda", "Tiago Novo", "David Portugal", "Rui P. Rocha" ]
cs.RO
[ "cs.RO" ]
This work presents a 3D multi-robot exploration framework for a team of UGVs moving on uneven terrains. The framework was designed by casting the two-level coordination strategy presented in <cit.> into the context of multi-robot exploration. The resulting distributed exploration technique minimizes and explicitly manages the occurrence of conflicts and interferences in the robot team. Each robot selects where to scan next by using a receding horizon next-best-view approach <cit.>. A sampling-based tree is directly expanded on segmented traversable regions of the terrain 3D map to generate the candidate next viewpoints. During the exploration, users can assign locations with higher priorities on-demand to steer the robot exploration toward areas of interest. The proposed framework can be also used to perform coverage tasks in the case a map of the environment is a priori provided as input. An open-source implementation is available online. § INTRODUCTION In the near future, fleets of autonomous robots will be able to flexibly and cooperatively perform multiple tasks such as exploration, coverage and patrolling <cit.>. Amongst these tasks, an exploration mission is crucial for first assessments and to preliminarily build a model of an unknown environment. This operation typically requires a higher level of autonomy and robustness, especially with UGVs or UAVs operating in complex 3D environments <cit.>. Specifically, the goal of a team of exploring robots is to cooperatively cover an unknown environment with sensory perceptions <cit.>. Typically, the expected output of exploration is a 3D map of the environment <cit.> or the discovery of interesting information/objects <cit.>. Indeed, multi-robot exploration has a wide range of potential applications, spanning from search-and-rescue operations <cit.> to surveillance <cit.>, mapping <cit.> and planetary missions. In general, a multi-robot system presents many advantages over a single robot system <cit.>: completion time is reduced, information sharing and redundancy can be used to increase the map accuracy and localization quality <cit.>. Nonetheless, taking advantage of a multi-robot architecture and actually attaining performance improvements requires the design of sophisticated strategies which can guarantee cooperation (avoid useless actions) and coordination (avoid conflicts). In the realm of UGVs, these strategies are particularly crucial. Indeed, narrow passages (due for example to collapsed infrastructures or debris <cit.>) typically generate spatial conflicts amongst teammates. A suitable strategy is required to (i) minimize interferences and (ii) recognize and resolve possible incoming deadlocks. Neglecting the occurrence of such events can hinder UGVs activities or even provoke major failures. A team of exploring UGVs has to face many challenges. For instance, the UGVs are required to navigate over a 3D uneven and complex terrain. Moreover, the environment may be dynamic and large-scale <cit.>. In this case, UGVs must continuously and suitably update their internal representations of the surrounding scenario to robustly localize and to best adapt their behaviors to environment changes.
Last but not least, in order to collaborate, UGVs must continuously exchange coordination messages and share their knowledge over an unreliable network infrastructure. The design of a robust communication protocol is crucial. In this paper, we address the aforementioned challenges. We present a multi-robot exploration framework designed by casting the two-level coordination strategy presented in <cit.> into a 3D exploration context. The resulting distributed technique minimizes and explicitly manages the occurrence of conflicts and interferences in the team. Each robot selects where to scan next by using a receding horizon next-best-view approach <cit.>. Here, a sampling-based tree is directly expanded on segmented traversable regions of the terrain 3D map (see Fig. <ref>). During the exploration, users can assign locations with higher priorities on-demand to steer the exploration toward areas of interest. The proposed framework can also be used to perform coverage tasks in case a 3D map of the environment is provided a priori as input. We evaluate our system through both simulation analyses and real-world experiments. An open-source implementation is available at https://github.com/luigifreda/3dmr. The remainder of this paper is organized as follows. First, Section <ref> presents and discusses related works. Then, Section <ref> describes the exploration problem. Next, an overview of our approach is presented in Section <ref>. Our two-level exploration strategy is then described in Sections <ref> and <ref>. Subsequently, Section <ref> explains how our framework can be used for coverage tasks. Results are presented in Section <ref>. Finally, Section <ref> presents our conclusions. § RELATED WORKS In the literature, the problem of covering a scene with sensory perceptions comes in many flavors. Essentially, a sequence of viewpoints must be planned in order to gather sensory perceptions over a scene of interest. Three main specializations can be considered for this viewpoint planning problem: next-best-view, exploration and coverage. §.§ Next-best-view If the scene consists of a single object without obstacles, next-best-view planning algorithms are in order <cit.>. In this case, the goal is to obtain an accurate 3D reconstruction of an arbitrary object of interest. These approaches typically sample candidate view configurations within a sphere around the target object and select the view configuration with the highest information gain. The downside of these strategies is that they do not scale well in multi-object scenes. In this work, the proposed exploration framework uses the same philosophy as next-best-view approaches. First, candidate views are generated. In our case, we locally expand a sampling-based tree of candidate views over the traversable terrain surrounding the robot. Then, the next best view is selected. As in <cit.>, we first identify the branch b^* of the expanded tree which maximizes the total information gain (as in a receding-horizon scheme) and then select as next view the nearby node along b^* (as in a motion-predictive approach).
In most strategies, a team of robots cooperatively build a metric representation of the environment, where frontier segments are extracted and used as prospective viewpoints. In <cit.>, Burgard et al. presented a frontier-based decentralized strategy where robots negotiate targets by optimizing a mixed utility function which takes into account the expected information gain, the cost-to-go and the number of negotiating robots. In <cit.>, the same decentralized frontier-based approach extended to a large-scale heterogeneous team of mobile robots. An incremental deployment algorithm is presented in <cit.>, where robots approach the frontier while retaining visual contact with each other. In <cit.>, a sensor-based random graph is expanded by the robots by using a randomized local planner that automatically performs a trade-off between information gain and navigation cost. A interesting class of exploration methods fall under the hat of active SLAM <cit.> (or integrated exploration <cit.>), which considers the correlation between viewpoint selection and the onboard localization/mapping quality. For instance, in <cit.>, the utility of a particular frontier region is also considered from the viewpoint of relative robot localization. Similarly, belief-space planning was proposed to integrate the expected robot belief into its motion planning <cit.>. An interesting multi-robot architecture is presented in <cit.>, where robots are guided through the exploration by a market economy. Similarly, in <cit.>, a centralized frontier-based approach is proposed in which a bidding protocol is used to assign frontier targets to the robots. In <cit.>, an exploration strategy is driven by the resolution of a partial differential equation. A similar concept is presented in <cit.>. Here, in order to solve a stochastic differential equation, Shen et al. use particles as an efficient sparse representation of the free space and for identifying frontiers. Biological-inspired strategies based on Particle Swarm Optimization (PSO) are presented in <cit.>, where an exploration task is defined through the distributed optimization of a suitable sensing function. In <cit.>, an efficient exploration strategy exploits background information (i.e. a topo-metric map) of the environment in order to improve time performances. §.§ Coverage Finally, when a prior 3D model of the scene is assigned, the viewpoints planning problem is known as coverage <cit.>. In <cit.>, a special issue collected new research frontiers and advancements on multi-robot coverage along with multi-robot search and exploration. In <cit.>, Agmon et al. proposed a strategy for efficiently building a coverage spanning tree for both online and offline coverage, aiming to minimize the time to complete coverage. In <cit.>, an efficient frontier-based approach was proposed to solve at the same time the problems of exploration (maximize the size of the reconstructed model) and coverage (observe the entire surface of the environment, maximize the model completeness). In <cit.>, coverage strategies were presented for inspecting complex structures on the ocean floor by using autonomous underwater vehicles. § PROBLEM SETUP A team of m ≥ 2 ground exploration robots are deployed in an unknown environment and are constrained to move on an uneven terrain. The team has to perform an exploration mission, i.e. cooperatively cover the largest possible part of the environment with sensory perceptions <cit.>. The output of the exploration process is a 3D model of the environment. 
§.§ Notation In the remainder of this paper, we will use the following convention: Given a symbol a^k_j, unless otherwise specified, the superscript k denotes a time index while the subscript j refers to robot j. For instance, q_j^k is the configuration of robot j at exploration step k. A list of the main symbols introduced hereafter is reported in Table <ref>. §.§ 3D Environment, Terrain and Robot Configuration Space Let T=[t_0,∞) ⊂ℝ denote a time interval, where t_0 ∈ℝ is the starting time. The 3D environment W is a compact connected region in ℝ^3. The obstacle region is denoted by O⊂ℝ^3. For ease of presentation, we assume O to be static for the moment[In our experiments, we relaxed this assumption by considering an environment populated by low-dynamic objects <cit.>. This is possible by representing the environment with a suitable dynamic model.]. The robots move on a 3D terrain, which is identified as a compact and connected manifold S in W. The free environment is W_ free= W∖( O∪ S). The configuration space C of each robot is the special Euclidean group SE(3) <cit.>. In particular, a robot configuration q = [p,ϕ]^T ∈ C consists of a 3D position p of the robot representative centre and a 3D orientation ϕ. A_j(q) ⊂ℝ^3 denotes the compact region occupied by robot j at q∈ C. A robot configuration q∈ C is considered valid if the robot at q is safely placed over the 3D terrain S. This requires q to satisfy some validity constraints defined according to <cit.>. A robot path is a continuous function τ: [0,1] → C. A path τ is safe for robot j if for each s ∈ [0,1]: A_j(τ(s)) ∩ O = ∅ and τ(s) ∈ C is a valid configuration. We assume each robot in the team is path controllable, that is each robot can follow any assigned safe path in C with arbitrary accuracy <cit.>. §.§ Sensor Model Each robot of the team is equipped with a 3D laser range-finder rigidly attached to its body. Teammates are able to localize in a common global map frame (see <cit.>). We formalize the laser sensor operation as follows. Assuming that robot j is at q, denote by F_j(q) ⊂ℝ ^N the compact region occupied by its sensor field of view, which is star-shaped with respect to its sensor center s_j(q)∈ W. In ℝ ^2, for instance, F_j(q) can be a circular sector with apex s_j(q), opening angle α_s and radius R_s, where the latter is the perception range (see Fig. <ref>, left). With robot j at q, a point p∈ W is said to be visible from the sensor if p∈ F_j(q) and the open line segment joining p and s_j(q) does not intersect . At each configuration q, the robot sensory system returns a 3D scan with the following information (see Fig. <ref>, right): * the visible free region (or view) V_j(q), i.e. all the points of W_ free that are visible from the sensor of robot j; * the visible obstacle boundary B_j(q)= ( ∂ O∪ S) ∩∂ V_j(q), i.e., all points of ∂ O∪ S that are visible from the sensor. The above sensor is an idealization of a `continuous' range finder. In our case, it is a 3D laser range finder, which returns the distance to the nearest obstacle point along the directions (rays) contained in its field of view (with a certain resolution). Indeed, other sensory systems, such a stereoscopic camera, satisfy the above description. §.§ Exploration task Robot j explores the world through a sequence of view-plan-move actions. In order to simplify the notation, we assume the robots synchronize their actions, i.e. a step k identifies a common time frame t_k ∈ T for all the robots. 
The following presentation can be easily adapted to the general case. Each configuration where a view is acquired is called a view configuration. Let q_j^0 be the initial configuration of robot j. Denote by q_j^1,q_j^2,...,q_j^k the sequence of view configurations planned by robot j up to the k-th exploration step. When the exploration starts, all the initial endogenous knowledge of robot j can be expressed as: E_j^0 = A_j(q_j^0) ∪ V_j(q_j^0), where A_j(q_j^0) represents the free volume[Often, A_j(q_j^0) in (<ref>) is replaced by a larger free volume A_j whose knowledge comes from an external source. This may be essential for planning safe motions in the early stages of an exploration.] that the body of robot j occupies (computed on the basis of proprioceptive sensors) and V_j(q_j^0) is the view at q_j^0 (provided by the exteroceptive sensors). At step k ≥ 1, the explored region of robot j is: E_j^k = E_j^k-1∪ V_j(q_j^k). At each step k, E_j^k ⊆ W_ free is the current estimate of the free world that robot j has built. Since safe planning requires A_j(q^k) ⊂ E_j^k-1 for any k, we have: E_j^k = A_j(q_j^0) ∪( ⋃_i = 0^k V_j(q_j^i) ). If we consider the whole robot team, the overall explored region at step k is: E^k = ⋃_j = 1^m E_j^k = ⋃_j = 1^m A_j(q_j^0) ∪( ⋃_j = 1^m ⋃_i = 0^k V_j(q_j^i) ). In the above equations, the union operator ∪ denotes a data-integration operation which depends on the adopted spatial model (see Sect. <ref>). A point p∈ W_ free is defined explored at step k if it is contained in E^k and unexplored otherwise. A configuration q is exploration-safe at step k if A_j(q) ⊂ cl( E^k), where cl(·) indicates the set closure operation (configurations that bring the robot in contact with obstacles are allowed). A path τ in C is exploration-safe for robot j at step k if for each s ∈ [0,1] the configuration τ(s) ∈ C is exploration-safe. The exploration-safe region R_j^k of robot j collects all the configurations that are reachable from q_j^0 through an exploration-safe path at step k. Indeed, R_j^k represents a configuration space image of E^k for robot j. The view plan π_j of robot j is a finite sequence of view configurations π_j = {q_j^k}_k=0^l_j, where l_j is the length of π_j. An exploration plan is a view plan such that q_j^k ∈ R_j^k at each step k. A team exploration strategy Π = {π_1,...,π_m} collects the exploration plans of the robots in the team. Here, l = max(l_1,...,l_m) is the length of the strategy Π. Exploration objective. The robots must cooperatively plan an exploration strategy Π of minimum length l such that E^l is maximized. In particular, E^l is maximized at step l if there is no robot j that can plan a new view configuration q∈ R_j^l such that E^l ⊂ ( E^l ∪ V_j(q)). Other factors, such as the resulting map “accuracy”, could be taken into account in the exploration objective. We further develop this in the next subsection. §.§ Exploration Methods Assume that each robot can associate an information gain I(q,k) to any (safe) q at step k. This is an estimate of the world information which can be discovered at the current step by acquiring a view from q. Consider the k-th exploration step, which starts with the robot j at q_j^k. Let Q_j^k ⊂ R_j^k denote, for robot j, the set of configurations which (i) have non-zero information gain, and (ii) can be reached[The reachability requirement accounts for possible kinematic constraints to which the robot may be subject.] from q_j^k through a path that is exploration-safe at step k. A general exploration method (Fig.
<ref>) searches for the next view configuration in Q_j^k ∩ D(q_j^k,k), where D(q_j^k,k) ⊆ C is a compact admissible set around q_j^k at step k, whose size determines the scope of the search. For example, if D(q_j^k,k) = C, a global search is performed, whereas the search is local if D(q_j^k,k) is a neighborhood of q_j^k. If Q_j^k ∩ D(q_j^k,k) is not empty, q_j^k+1 is selected in Q_j^k ∩ D(q_j^k,k) according to some criterion (e.g., information gain maximization). The robot then moves to q_j^k+1 to acquire a new view (forwarding). Otherwise, the robot returns to a previously visited q_j^b (b<k) such that Q_j^k ∩ D(q_j^b,k) is not empty (backtracking). The exploration can be considered completed at step k if Q_j^k=∅, i.e., no informative configuration can be safely reached. To specify an exploration method, one must define: 0em * an information gain; * a forwarding selection strategy; * an admissible set D(q_j^k,k); * a backtracking selection strategy. § SYSTEM OVERVIEW In this section, we introduce our approach. Fig. <ref> presents the main components of an exploration robot. This interacts with its environment through observations and actions, where an observation consists of a vector of sensor measurements and an action corresponds to a robot actuator command. Team messages are exchanged with teammates over a network for sharing knowledge and decisions in order to attain team collaboration. Decision-making is achieved by the exploration agent and the path planner, basing on the available information stored in the environment model and the team model. In particular, the environment model consists of a topological map, aka exploration tree, and two different metric maps: the volumetric map and the point cloud map. The team model represents the robot belief about the current plans of teammates (goals and planned paths). The main components of the exploration robots are introduced in the following subsections. Table <ref> reports their symbol along with a short description. §.§ Exploration Tree and Exploration Agent During the exploration process (see Sect. <ref>), an exploration tree K_j is built by robot j. A node of K_j is referred to as view node and represents a view configuration. An edge between two view nodes corresponds to a safe path joining them. K_j is rooted at q_j^0. At each forwarding step, a new node corresponding to q_j^k+1 and a new edge representing a path from q_j^k to q_j^k+1 are added in K_j. We consider an exploration tree as a topological map representation of the explored environment. Each node of the tree represents an explored local region contained within a ball of radius R_s centred at the associated view configuration. The exploration agent is responsible of online generating an exploration plan (cfr. Sect. <ref>) according to the defined exploration objective (cfr. Sect. <ref>). §.§ Point Cloud Map and Path-Planning Robot j incrementally builds a 3D point cloud map M_j as a metric representation of the detected environment surfaces ⋃_j = 1^m ⋃_i = 0^k B_j(q_j^i). The points of map M_j are segmented and partitioned into two main sets: obstacles points M^obs_j and traversable points M^trav_j (which can be “back-projected" to a valid and safe robot configuration <cit.>). Each point of M_j is associated to a multi-robot traversability cost function trav:ℝ^3 →ℝ <cit.>. In this regard, M_j and the traversability cost are used to associate a navigation cost J(τ) to each safe path τ <cit.>. 
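As a minimal illustration of how a per-point traversability cost can be folded into a path cost, the sketch below accumulates a traversability-weighted length along a discretized path. The exact weighting used by the actual planner is the one defined in the cited works, so the formula here is only an assumed, simplified form.

```python
import numpy as np

def path_cost(waypoints, trav_cost):
    """Traversability-weighted length of a discretized path (assumed form of J(tau)).

    waypoints : (k, 3) array of 3D points sampled along the path
    trav_cost : callable mapping a 3D point to a non-negative traversability cost
    """
    pts = np.asarray(waypoints, dtype=float)
    lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    # weight each segment by the mean traversability cost of its endpoints
    weights = np.array([(trav_cost(a) + trav_cost(b)) / 2.0 for a, b in zip(pts[:-1], pts[1:])])
    return float(np.sum((1.0 + weights) * lengths))

# Toy terrain: a "rough" patch around (x, y) = (2, 0); the longer detour ends up cheaper.
rough = lambda p: 5.0 if (1.5 < p[0] < 2.5 and abs(p[1]) < 1.0) else 0.0
straight = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]]
detour = [[0, 0, 0], [1, 1.5, 0], [2, 2, 0], [3, 1.5, 0], [4, 0, 0]]
print(path_cost(straight, rough), path_cost(detour, rough))
```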
Given the current robot position p_r ∈ℝ^3 and a goal position p_g ∈ S, the path planner computes the safe path τ* which minimizes the navigation cost J(τ) and connect p_r with p_g. The path planner reports a failure if a safe path connecting p_r with p_g is not found. See <cit.> for further details. In this work, for simplicity, in order to reach a desired view configuration q = [p,ϕ]^T ∈ C, a robot first plans and follows a path towards p, then it turns on itself on S so as to assume the closest orientation to ϕ. §.§ Volumetric Map and Information Gain A volumetric map H_j is incrementally built by robot j to represent the explored region E^k and associate an information gain to each safe configuration. Specifically, H_j is a probabilistic occupancy gridmap, stored in the form of an Octomap <cit.>. This partitions the environment in free, occupied and unknown cells with a pre-fixed resolution. Such a model allows to appropriately model an environment populated by low-dynamic objects <cit.>. The information gain is defined and computed as follows. At step k, the boundary of the explored region ∂ E^k is the union of two disjoint sets: * the obstacle boundary ∂ E^k_ obs, i.e., the part of ∂ E which consists of detected obstacle surfaces; * the free boundary ∂ E^k_ free, i.e., the complement of ∂ E^k_ obs, which leads to potentially explorable areas. One has ∂ E^k_ obs = ⋃_i = 0^k B(q^i) and ∂ E^k_ free=∂ E^k ∖∂ E^k_ obs. Let V(q,k) be the simulated view, i.e., the view which would be acquired from q if the obstacle boundary were ∂ E^k_ obs. The information gain I(q,k) is a measure of the set of unexplored points lying in V(q ,k). We compute I(q,k) as the total volume of the unknown cells of H_j lying in V(q ,k), following the approaches in <cit.>. §.§ Network Model Each robot can broadcast messages in order to share knowledge, decisions and achievements with teammates. We assume each of these messages can be lost during its transmission with a given probability P_c. Different types of broadcast messages are used by the robots to convey various information and attain coordination. In this process, the identification number (ID) of the emitting robot is included in the heading of any broadcast message. In particular, a coordination broadcast message is emitted by a robot in order to inform teammates when it reaches a goal node (reached), planned a goal node (planned), selected a goal node (selected), abort a goal node (aborted), acquires new laser data (scan). Additionally, a message tree is broadcast in order to enforce the synchronization of the volumetric maps and point cloud maps amongst teammates (see Sect. <ref>). Table <ref> summarizes the used broadcast messages along with the conveyed information/data. The general message format is ⟨ robot_id, message_type, data ⟩. §.§ Shared Knowledge Representation and Update Each robot of the team stores and updates its individual representation of the world state. In particular, robot j incrementally builds its individual point cloud map M_j by using the acquired scans (see <cit.>). This allows the path planner to safely take into account explored terrain extensions and possible environment changes. At the same time, robot j updates its volumetric map H_j by integrating its acquired 3D scans and the scan messages received from teammates. As mentioned above, some of the scan messages can be lost and H_j may only partially represent the actual explored region E^k. 
In order to mitigate this problem, each robot continuously broadcasts a tree message at a fixed frequency 1/T_M. In particular, a recipient robot h uses a tree message from robot j to estimate if the map H_h sufficiently overlaps with H_j or it missed the integration of some scan message data. A pseudocode description of this procedure is reported in Algorithm <ref>: this verifies if the Euclidean projections of K_j and K_h sufficiently overlap each other. If a sufficient overlap is not verified, a synchronization and merge procedure is triggered between the maps of the two robots i and j by requesting the missing scans (line 4, Algorithm <ref>). The above information sharing mechanism allows to implement a shared information gain, i.e. each robot computes the information gain on the basis of a distributed shared knowledge. This enforces team cooperation by minimizing unnecessary actions such as exploring regions already visited by teammates. §.§ Team Model In order to cooperate with its teammates and manage conflicts, robot j maintains an internal representation of the planning state of the team by using a dedicated table: T_j = ⟨(p_1,g_1,τ_1,c_1,t_1),...,(p_m,g_m,τ_m,c_m,t_m)⟩. This stores for each robot h: its current position p_h ∈ℝ^3, its selected goal position g_h ∈ S, the last computed safe path τ_h to g_h, the associated travel cost-to-go c_h ∈ℝ^+ (i.e. the length of τ_h), and the timestamp t_h ∈ T of the last message used to update any portion of (p_h,g_h, τ_h,c_h). The table T_j is updated by using reached, planned, selected, aborted and position messages. In particular, reached, planned and aborted messages received from robot h are used to reset the tuple (g_h,τ_h,c_h) to empty (i.e. no information available). A selected message sets the tuple (g_h,τ_h,c_h,t_h). Furthermore, a position message from robot h is used to update its position estimate in T_j. An expiration time δ t_e is used to clean T_j of old and invalid information. In fact, part of the information stored in T_j may refer to robots that underwent critical failures or whose connections have been down for a while. A tuple (p_h,g_h, c_h,τ_h,t_h) is reset to zero if (t - t_h) > δ t_e, where t ∈ T denotes the current time. § TWO-LEVEL COORDINATION STRATEGY This section first introduces the notions of topological and metric conflicts and then presents our two-level coordination strategy (see Sect. <ref>). In this context, coordination (avoid conflicts) and cooperation (avoid inefficient actions) are crucial team objectives. §.§ Topological and Metric conflicts A topological conflict between two robots i and j is defined with respect to their exploration trees K_i and K_j. Specifically, a node conflict occurs when robot i and j attempt to add a new view node in the same spatial region at the same time, respectively n_i in K_i and n_j in K_j. We identify this situation when the corresponding positions p(n^i_g) ∈ℝ^3 and p(n^j_g) ∈ℝ^3 are closer than a certain distance D_g ≤ R_s. On the other hand, metric conflicts are directly defined in the 3D Euclidean space where two robots are referred to be in interference if their centres are closer than a pre-fixed safety distance D_s. It must hold D_s ≥ 2 R_b, where R_b is the common bounding radius of robots, i.e. the radius of their minimal bounding sphere. 
A metric conflict occurs between two robots if they are in interference or if their planned paths may bring them in interference[That is, the distance between the closest pair of points of the two planned paths is smaller than D_s. ]. It is worth noting that topological conflicts may not correspond to metric conflicts. In our framework, a node may represent a large region that could be visited by two or more robots at the same time. §.§ Coordination Strategy Our exploration strategy is distributed and supported on two levels: topological and metric (see Figure <ref>). The exploration agent acts on the topological strategy level by suitably planning and adding a new view node n_g (corresponding to a view configuration) on its exploration tree. Cooperation is attained by planning n_g on the basis of the shared information gain (see Sect. <ref>). The path planner acts on the metric strategy level (see Figure <ref>) by computing the best safe path from the current robot position to the goal position p(n_g) using its individual traversable map (see <cit.>). The exploration agent guarantees topological coordination by continuously monitoring and negotiating possibly incoming node conflicts (see Sect. <ref>). In case two or more robots plan view configurations close to each other (node conflict), the robot with the smaller path length actually goes, while the other robots stop and re-plan a new view node. The path planner guarantees metric coordination by applying a multi-robot traversability function. This induces prioritized path planning, in which robots negotiate metric conflicts by preventing their planned paths from intersecting (see <cit.>). The continuous interaction between the exploration agent and the path planner plays a crucial role. The exploration agent assigns desired goals to the path planner. The latter in turn continuously replies with a feedback informing the former about its state and computations. In particular, when the robot moves towards p(n_g), the path planner continuously re-plans the best traversable path until the robot reaches the assigned goal. During this process, if a safe path is not found, the path planner stops the robot, informs the exploration agent of a path planning failure, and then the exploration agent re-plans a new view node. On the other hand, every time the path planner computes a new safe path, its length is used by the exploration agent to resolve possible node conflicts. In our view, the two-way strategy approach allows to reduce interferences and manage possible deadlocks. In fact, while the exploration agent focuses on the most important exploration aspects (shared information gain maximization and node conflicts resolution), the path planner takes care of possible incoming metric conflicts. Moreover, where the path planner strategy may fail alone in arbitrating challenging conflicts, the exploration agent intervenes and reassigns tasks to better redistribute robots over the environment. These combined strategies minimize interferences by explicitly controlling node conflicts and planning on the multi-robot traversable map. § THE EXPLORATION AGENT In this section, we present in detail the exploration agent algorithm. A pseudocode description is reported in Algorithm <ref>. Some comments accompany the pseudocode with the aim of rendering it self-explanatory. An exploration agent instance runs on each robot. It takes as input the robot ID j. A main loop supports the exploration algorithm (lines 5–26). 
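In condensed form, the view–plan–move cycle realized by this loop can be sketched in Python as follows. The helper methods are simple placeholders standing in for the operations described in this section and the next (shared-knowledge update, conflict detection, next-best-view planning), not the actual implementation.

```python
import random
import time

class ExplorationAgent:
    """Condensed, illustrative sketch of the per-robot exploration loop."""

    def __init__(self, robot_id, sleep_time=0.01, steps=5):
        self.robot_id, self.sleep_time, self.steps = robot_id, sleep_time, steps
        self.tree = []                  # exploration tree K_j (list of reached view nodes)
        self.goal = None

    # --- placeholder helpers for the operations described in the text ---
    def update(self):
        return {"goal_reached": self.goal is not None and random.random() < 0.5,
                "planner_failure": random.random() < 0.1,
                "node_conflict": random.random() < 0.1}

    def broadcast(self, msg):
        pass                            # team coordination message (reached/planned/selected/aborted)

    def plan_next_best_view(self):
        return random.random()          # stand-in for the next-best-view planner

    def run(self):
        for _ in range(self.steps):
            state = self.update()                           # refresh shared knowledge
            if state["goal_reached"]:
                self.tree.append(self.goal)                 # add the reached node to K_j
                self.broadcast("reached")
                self.goal = self.plan_next_best_view()      # plan and send a new goal
                self.broadcast("planned")
            elif state["planner_failure"] or state["node_conflict"]:
                self.broadcast("aborted")                   # abort the current goal and re-plan
                self.goal = self.plan_next_best_view()
            else:
                self.broadcast("selected")                  # keep travelling toward the goal
                time.sleep(self.sleep_time)

ExplorationAgent(robot_id=1).run()
```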
First, all the main data structures along with the boolean variables[We use an “is_" prefix to denote boolean variables.] are updated (line 6). This update takes into account all the information received from teammates and recast the distributed knowledge. When the current goal is reached (line 7), it is added as a new node in K_j. A new edge is created from the current node q^k to the new node q^k+1. Next, a new node is planned and sent to the path planner as the next goal position (line 13). During this steps, the robot informs the team about its operations by using broadcast messages (line 10 and 12). On the other hand, if the robot is still reaching the current goal node, lines 15–24 are executed. If either a path planner failure or a node conflict (see Section <ref>) occurs (line 15) then the exploration agent sends a goal abort to the path planner and broadcasts its decision (lines 16–17). This triggers a new node selection (line 18). Otherwise (line 22), a selected message is emitted for sake of robustness (aiming at maximizing the probability the message is actually received by all teammates). Then, a sleep for a pre-fixed time T_sleep allows the robot to continue its travel towards the planned goal (line 23). It is worth noting that the condition at line 15 of Algorithm <ref> implements continuous monitoring and allows robots to modify their plan while moving. §.§ Data Update The Update() function is summarized in Algorithm <ref>. This is in charge of refreshing the robot data structures presented in Sect. <ref>. Indeed, these structures are asynchronously updated by callbacks which are independently triggered by new teammates broadcast messages or path planner feedback messages. Lines 1–3 of Algorithm <ref> represent the asynchronous updates of the volumetric map H_j, the point cloud map M_j and of the team model T_j. The remaining lines describe how the reported boolean variables are updated depending on the information stored in the team model and received from path planner feedback. §.§ Node Conflict Management The concept of topological node conflict was defined in Section <ref>. During an exploration process, this occurs when (i) two robots select their goals at a distance closer than a pre-fixed value D_g or (ii) when a robot selects its goal at distance closer than D_g from a teammate position. The first case is referred to as goal-goal conflict, while the latter is called goal-start conflict. A robot checks for node conflicts by solely using the information stored in its individual team model. Robot j can detect a node conflict with robot i only if, in its team model T_j, the tuple (p_i,g_i, c_i,τ_i,t_i) is valid (see Sect. <ref>). In particular, robot j detects a goal-goal conflict with robot i if the following conditions are verified: 0em * a path connecting the goals g_j with g_i exists on the local traversability map of robot j, and its length is smaller than D_g. * c_j is higher than c_i in T_j, or j>i in the unlikely case the navigation costs are equal (robot priority by ID as fallback). This entails that the priority is given to the robot with the smallest navigation cost. Robot j detects a goal-start conflict with robot i if a path connecting the goal g_j with the position p_i exists on the local traversability map of robot j, and its length is smaller than D_g. When one of the two above conflict cases occurs, the boolean variable is_node_conflict is set to true by robot j (line 3 of Alg. <ref>). 
As a consequence, its current goal is aborted and a new node is planned (lines 16–20 of Alg. <ref>). §.§ Next Best View Selection Strategy This section describes the PlanNextBestView() function, whose pseudocode is reported in Algorithm <ref>. This function is responsible for generating an admissible set D and selecting the next best view configuration in D (forwarding strategy, cf. Sect. <ref>). In order to achieve efficiency and a quick response time, a windowed search strategy is adopted. In particular, in Algorithm <ref>, the main loop (lines 4–12) guides the search for the next best view through an increasing sequence of bounding boxes {box_i}_i=1^l, where box_i ⊂box_i+1 (lines 4–9). At each iteration (line 4, Algorithm <ref>), an admissible set D is created in the current bounding box (line 5), in the form of a tree, by using the BuildSearchTree() function. In the following, we refer to D as the search tree. A utility U(q) (see Sect. <ref>) and an information gain I(q) (see Sect. <ref>) are associated with each candidate configuration q in D. The next best view configuration q^* is computed as the one maximizing the utility in D (line 7). If the utility U(q^*) is greater than a minimum threshold U_min, then q^* is actually used for a forwarding step (lines 8–9). On the other hand, if a valid next view configuration is not found (line 13), then the robot performs a backtracking step (line 14, Algorithm <ref>, Sect. <ref>). The selection of the next view node is performed at line 9 of Algorithm <ref> along the best branch of the search tree (see next section). §.§ Search Tree Construction The construction of the admissible set D (line 5, Algorithm <ref>) is performed by the BuildSearchTree() function, whose pseudocode is introduced in Algorithm <ref>. Each node n_h of the search tree D represents a candidate view configuration q_h, while an edge represents a safe path τ in C between the two joined nodes (see Fig. <ref>). The function BuildSearchTree() expands a sampling-based tree D on the set of traversable points M^trav_j ∩box. The root of D is initialized at the current_node. The tree is grown by using a breadth-first expansion over M^trav_j (line 2, Algorithm <ref>). The use of the traversable points constrains the tree to automatically grow over a cloud of reachable candidate configurations. Let n_h denote a node in D and denote by b its corresponding branch, i.e. the path in D which leads from the root to n_h. The utility of n_h ∈ D is computed as the total information gain which would be attained by a robot when moving towards n_h along b. Specifically, the utility U(n_h) of a node n_h ∈ D with parent node n_h-1 is recursively defined as follows: U(n_h) = U(n_h-1) + I(n_h) e^-λσ(n_h), where σ(n_h) = σ(n_h-1) + ‖ q_h - q_h-1‖ represents the geometric length of the path connecting the root of D to the node n_h <cit.>. Such a definition of the utility function entails a receding-horizon optimization approach where, in fact, the “best” branch (and not necessarily the best configuration) is selected (line 8, Algorithm <ref>). Next, the robot moves along a pre-fixed number of edges of the selected branch. §.§ Backtracking Strategy The backtracking strategy is achieved by building and updating the following data structures. * Frontier nodes: nodes whose information gain is greater than a prefixed threshold I_min at the present time. During each BuildSearchTree() expansion, each new node of D that is a frontier node is added to the frontier tree.
* Frontier tree Y_j: it is incrementally expanded by attaching each new found frontier node to its nearest neighbor in the tree. To this aim, a k-d tree is efficiently used and maintained. Frontier nodes that are inserted in the frontier tree and subsequently become normal nodes (i.e. with I ≤ I_min) are kept within the frontier tree to preserve its connectivity. * Frontier clusters: Euclidean clusters of close frontier nodes belonging to the frontier tree. These clusters are computed by using a pre-fixed clustering radius. Each cluster has an associated total information gain (the sum of the information gains of all the nodes in the cluster) and a corresponding representative centroid. When a backtracking step is in order (see Algorithm <ref>), first the frontier clusters are built. Then, for each frontier cluster j, the navigation cost-to-go c_j of its centroid (from the current robot position) and its total information gain I_j are computed. Next, the utility of each cluster is computed as: U_j = I_j e^-λ c_j similarly to eq. (<ref>). Finally, the cluster centroid with the maximum utility is selected and planned as next best view node. While moving towards a selected backtracking node, the robot periodically runs the backtracking strategy (Algorithm <ref>) to confirm its last backtracking plan remains valid and informative along the way. This allows the robot to promptly take into account new covered regions (due to new asynchronous teammate scans) and avoid unnecessary travels. §.§ Prioritized Exploration Robot j uses a priority queue P_j in order to store a set of Point Of Interests (POI) with their associated priorities. The higher the priority the more important the POI in P_j. At each exploration step, the most important point of interest is selected in P_j and used to bias the robot search for the next view configuration. In particular, the priority queue can be manually created by the end-user or can be autonomously generated by an object detection module. In this second case, the module automatically detects and classifies objects in the environment and assigns them priorities according to a pre-fixed class-priority table. Let p^* be the POI with the highest priority in P_j. In BuildSearchTree() (see Algorithm <ref>), a randomized A* expansion is biased towards p^*. Specifically, at each step of such A* expansion, a coin toss with probability 0.5 decides if a heuristic function with input p^* is used or not to bias the breadth-first expansion. In this process, we use a simple Euclidean distance as heuristic function h(p). The point p^* is deleted from P_j once the distance between the robot and p^* is smaller than the sensor perception range R_s. § COVERAGE TASK Coverage and exploration tasks have the same objective: robots have to cover the environment with sensory perceptions. But while coverage robots have a full prior knowledge of the environment W (which can be used to plan safe motions), exploration robots must discover W online and restrict their motions within the known terrain. A coverage plan is a view plan π_j (see Sect. <ref>) composed of view configurations q_j^k which are valid and reachable through a safe path from q^0_j (see Sect. <ref>). Note that, given the a priori knowledge of the full environment, a coverage plan π_j = {q_j^k} includes view configurations which are not necessarily exploration-safe at each step k. A team coverage strategy Π = {π_1,...,π_m} collects the coverage plans of the robots in the team. Coverage objective. 
The robots must cooperatively plan a team coverage strategy Π of minimum length l such that E^l is coverage-maximized. In particular, E^l is coverage-maximized at step l if there is no robot j which can plan a valid configuration q reachable through a safe path from q^0_j such that E^l ⊂ ( E^l ∪ V_j(q)). The team of exploration robots can be used to perform a coverage task. To this aim, we provide them with a point cloud map M representing the full environment. Specifically, at t_0, each robot j sets M_j = M and H_j = ∅. During the coverage task, the volumetric map H_j is grown and maintained as in a normal exploration process: it is incrementally built in order (i) to represent the regions of the environment covered so far and (ii) to compute the information gain of candidate view configurations. The map M_j is used by the path planner to plan safe paths all over the modeled environment terrain. Clearly, such an approach does not take advantage of the full prior knowledge of the environment, which in principle would allow an optimal team plan to be pre-computed. Nonetheless, our framework can be conveniently used when the environment undergoes low-dynamic changes. In fact, our 3D mapping is capable of continuously integrating detected changes and our online decision-making correspondingly adapts robot behaviour to the new environment model. § RESULTS In this section, we present experimental results we obtained with a ROS-based implementation of the proposed method. Before conducting real-world experiments, we used V-REP as a simulation framework <cit.>. V-REP makes it possible to simulate laser range finders, depth cameras, odometry noise, and robot tracks with grousers to obtain realistic robot interactions with the terrain. Fig. <ref> shows a phase of a simulation test with an exploring team of two robots. A video showing a simulated exploration process with 3 robots is available at https://www.youtube.com/watch?v=UDddKJck2yE. §.§ Evaluations at the University of Coimbra An analysis of the proposed method via both simulation and real-world experiments was conducted in a Master's thesis project <cit.>. In this work, a team of Pioneer 3-DX robots was used to perform both simulations and real-world experiments (see Fig. <ref>). In particular, the Pioneer robots were equipped with two RGB-D cameras, the Orbbec Astra S and the Mynt Eye S1030. Figure <ref> shows one of the testing environments together with one of the obtained maps and the corresponding paths traveled by the robots. A video showing some phases of the performed exploration experiments is available at https://youtu.be/xwCnQtLC0RI. §.§ Evaluations with TRADR UGVs Our exploration framework was also tested on the TRADR UGV robots <cit.> during some TRADR exercises in Rotterdam and Mestre. The TRADR UGVs are skid-steered and satisfy the path controllability assumption. Amongst other sensors, the robots are equipped with a 360^∘ spherical camera and a rotating laser scanner as shown in Fig. <ref>. The full TRADR system was evaluated by the GB (Gezamenlijke Brandweer) firefighter end-users in the RDM training plant in Rotterdam (see Fig. <ref>). In particular, Fig. <ref> shows the TRADR OCU (Operator Control Unit) displaying the GUI with a map and robot camera feedback. The real multi-robot system is implemented in ROS by using a multi-master architecture. In particular, the NIMBRO network <cit.> is used for efficiently transporting ROS topics and services over a WiFi network.
Indeed, NIMBRO makes it possible to fully leverage the UDP and TCP protocols in order to control bandwidth consumption and avoid network congestion. This design choice was made by the whole TRADR consortium with the aim of developing a robust multi-robot architecture, which can flexibly offer many heterogeneous functionalities <cit.>. We used the same ROS C++ code to run both simulations and experiments. Only the ROS launch scripts were adapted to interface the modules with the multi-master NIMBRO transport layer. We were able to run different exploration experiments and build a map of the plant with two UGVs. Figures <ref> and <ref> show different stages of an exploration process. During those runs, the two UGVs started at the same location. To attain multi-robot localization, a background process was designed to globally align the two robot maps in 3D and, in case of success, provide corrections to the mutual position estimates. This system allowed the robots to generate new maps of the facility in minutes. The pose graphs of the two registered maps were mutually consistent (up to a roto-translation) and did not present strong distributed deformations. §.§ Code Implementation All the modules are implemented in C++ and use ROS as a middleware. The code has been designed to seamlessly interface with both simulated and real robots. This allows the same code to be used in both simulations and experiments. For the implementation of our exploration agent, we used the ROS package nbvplanner as a starting point <cit.>. This was specifically designed for 3D exploration with UAVs. In particular, in the accompanying paper <cit.>, the authors present a single-robot strategy. We significantly modified the core of this package in order to manage a team of UGVs, implement our exploration agent algorithm, tightly integrate the agent module with the path planner and traversability analysis, and interface the 3D GUI within our network architecture. An open-source implementation is available at https://github.com/luigifreda/3dmr. §.§ Software Design A functional diagram of the presented multi-robot system is reported in Fig. <ref>. The main blocks are listed below. The robots, each one with its own ID ∈{1,...,m}, have the same internal architecture and host the on-board functionalities that concern decision and processing aspects both at the topological level and at the metric level. The core services, hosted in the main central computer, provide the persistence layer of the multi-robot system. These allow specific modules to load and save maps and robot trajectories (along with other TRADR data structures). The core modules, also hosted in the central computer, include the exploration monitor. This continuously checks the current status of the exploration activities and records relevant data. The multi-robot 3D GUI, hosted on one OCU, allows the user (i) to build an exploration fence, which consists of a set of points defining a polygonal bounding region that limits the explorable area; (ii) to visualize relevant point cloud data, maps, and robot models; (iii) to stop/restart robots when needed; and (iv) to trigger the loading/saving of maps and robot trajectories. Each robot hosts its own instance of the exploration agent and of the path planner. Hence, the implemented exploration algorithm is fully distributed. As illustrated in Fig. <ref>, the various modules in the architecture exchange different kinds of messages. These can be mainly grouped into the following types.
* Coordination messages: these are mainly exchanged amongst robots in order to achieve coordination and cooperation. For convenience, the exploration monitor records a history of these messages. * GUI messages: these are exchanged with the 3D GUI and include both control messages and visualization data. * Load/save messages: these are exchanged with the core services and contain both loaded and saved data. § CONCLUSIONS We have presented a 3D multi-robot exploration framework for a team of UGVs navigating in unknown, harsh terrain with possibly narrow passages. We cast the two-level coordination strategy presented in <cit.> in the context of exploration. An analysis of the proposed method through both simulations and real-world experiments was conducted <cit.>. We extended the receding-horizon next-best-view approach <cit.> to the case of UGVs by suitably adapting the expansion of sampling-based trees directly on traversable regions. In this process, an iterative windowed search strategy was used to optimize the computation time. In our tests, the proposed frontier trees and backtracking strategy proved to be crucial for effectively completing exploration experiments without leaving unvisited regions. The approach of separating the resolution of topological and metric conflicts proved successful in this context as well <cit.>. The resulting strategy is distributed and succeeds in minimizing and explicitly managing the occurrence of conflicts and interferences in the robot team. A prioritization scheme was integrated into the framework to steer the robot exploration toward areas of interest on demand. This design was suggested by TRADR end-users to improve their control of the exploration progress and better integrate human and robot decision-making during a real mission. The presented framework can be used to perform coverage tasks in the case where a map of the environment is provided a priori as input. The conducted experiments proved that our exploration method has competitive performance and allows appropriate management of a team of robots. Decentralization allows task reallocation and recovery when robot failures occur. Notably, the proposed approach proved to be agnostic to the specific sensing configuration and robotic platform. We publish the source code of the presented framework with the aim of providing a useful tool for researchers in the robotics community. In the future, we plan to improve the system communication protocol and to validate the framework on larger teams of robots. Moreover, we aim to conduct a comprehensive comparison and performance evaluation of the proposed exploration method against other state-of-the-art 3D exploration approaches under the same conditions.
http://arxiv.org/abs/2307.00997v1
20230703132158
RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
[ "Yonglin Li", "Jing Zhang", "Xiao Teng", "Long Lan" ]
cs.CV
[ "cs.CV", "cs.AI" ]
RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
Yonglin Li (1), Jing Zhang (2), Xiao Teng (1), Long Lan (1, corresponding author)
(1) Institute for Quantum Information & State Key Laboratory of High Performance Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
(2) School of Computer Science, University of Sydney, Sydney 2008, Australia
The Segment Anything Model (SAM) has gained significant attention for its impressive performance in image segmentation. However, it lacks proficiency in referring video object segmentation (RVOS) due to the need for precise user-interactive prompts and its limited understanding of different modalities, such as language and vision. This paper presents the RefSAM model, which for the first time explores the potential of SAM for RVOS by incorporating multi-view information from diverse modalities and successive frames at different timestamps. Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight Cross-Modal MLP that projects the text embedding of the referring expression into sparse and dense embeddings, serving as user-interactive prompts. Subsequently, a parameter-efficient tuning strategy is employed to effectively align and fuse the language and vision features. Through comprehensive ablation studies, we demonstrate the practical and effective design choices of our strategy. Extensive experiments conducted on the Ref-Youtube-VOS and Ref-DAVIS17 datasets validate the superiority and effectiveness of our RefSAM model over existing methods. The code and models will be made publicly available at https://github.com/LancasterLi/RefSAM.
Keywords: Video Object Segmentation, Vision Transformer, Language and Vision, Segment Anything, Deep Learning, Transfer Learning
August 1, 2023
==================
§ INTRODUCTION Referring video object segmentation (RVOS) aims to segment the target object in a video under the guidance of a given language expression <cit.>. It has attracted widespread attention in the computer vision community due to its great potential in real-world applications, such as video retrieval and video editing <cit.>. It is a challenging task, and the key lies in how to incorporate multi-view information from video frames at different timestamps and align sources from different modalities, i.e., the vision and language views. Compared with video object segmentation <cit.>, RVOS is more challenging as it requires semantic-level alignment between the vision and language modalities without the ground-truth mask annotation in the first frame. Although many efforts have been devoted to tackling this task <cit.>, it remains a challenge to accurately identify and segment the target object in the video by fully exploiting the semantic meanings of sources from the vision and language views. Recently, the Segment Anything Model (SAM) <cit.> has been proposed, which serves as a foundation model for image segmentation. Based on the designed prompt engineering, it can be transferred to new tasks in a zero-shot manner.
Due to its impressive performance on image segmentation, many works have explored applying SAM in other related fields <cit.>, e.g., medical image analysis <cit.>, remote sensing images <cit.>, video object tracking <cit.>, style transfer <cit.>, and 3D reconstruction <cit.>. Although these SAM-based methods have achieved great progress on their respective tasks, none of them can be applied to RVOS directly due to the inherent complexity of this task. Concretely, RVOS requires the model to have a comprehensive understanding of sources from the vision and language modalities. Unlike video object segmentation (VOS), no ground-truth mask annotations in the first frame are available in RVOS. It is worth noting that although some SAM-based multiple object tracking models such as SAM-Track <cit.> and TAM <cit.> can also be adapted to the task of RVOS by providing first-frame bounding boxes detected by extra object detection models, such as Grounding DINO <cit.>, some non-negligible issues exist in such frameworks. For example, they cannot work effectively in an end-to-end manner due to the two-stage pipeline. As a result, inaccurate bounding boxes in the first frame detected by the object detection model can result in poor performance on the downstream segmentation task. On the other hand, the involvement of separate models brings extra difficulties in the model training and deployment process. Thus, a natural question arises: how can SAM be effectively adapted to the RVOS task in an end-to-end manner so as to fully unleash its potential segmentation capacity? In this paper, we conduct an initial exploration and propose RefSAM, the first end-to-end SAM-based framework for the task of RVOS. Based on the powerful foundation model SAM, RefSAM can perform accurate target object segmentation in the video given language expressions. Specifically, to enhance the cross-modality learning capability of the original SAM, we propose a lightweight Cross-Modal MLP that projects the text embedding of the referring expression into sparse and dense embeddings, serving as user-interactive prompts like point and bounding-box prompts. Subsequently, a parameter-efficient tuning strategy is employed to effectively align and fuse the language and vision features. Our contributions can be summarized as follows. ∙ We conduct the pioneering study to explore SAM's potential for RVOS through the integration of multi-view information from diverse modalities and successive frames at different timestamps. ∙ We introduce RefSAM, a novel approach that utilizes lightweight modules and an efficient fine-tuning strategy to align and fuse language and vision features in an end-to-end learning manner, thereby effectively adapting SAM for RVOS. ∙ Extensive experiments on the Ref-Youtube-VOS and Ref-DAVIS17 datasets demonstrate the promising results of RefSAM and its superiority compared to existing methods. § RELATED WORK §.§ Referring Video Object Segmentation RVOS uses language cues to segment an object in a video. Unlike the VOS task, which provides a ground-truth annotation of the first frame, it exploits a different type of guidance, i.e., language expressions, to identify and segment the object referred to by the given language expression in a video. Therefore, the task is more challenging due to the great domain gap between the language and vision modalities.
<cit.> first proposes the RVOS task and extends the A2D <cit.> and J-HMDB <cit.> datasets with sentences describing the actors and actions appearing in the video content. <cit.> extends language grounding models to video data to ensure temporally coherent predictions and augments the DAVIS datasets <cit.> with language descriptions. URVOS <cit.> proposes a unified end-to-end deep neural network that performs both language-based object segmentation and mask propagation in a single model, and constructs Refer-Youtube-VOS, a large-scale RVOS dataset. RefVOS <cit.> leverages the last-stage feature of the vision backbone and fuses it with the textual feature. ClawCraneNet <cit.> leverages three kinds of object-level relations to progressively construct discriminative visual embeddings. CITD <cit.> puts forward a two-stage, top-down RVOS solution. MRSA <cit.> presents a multi-level visual representation framework that spans video, frame, and object granularities. MMVT <cit.> incorporates the motion information from optical flow maps with appearance and linguistic features for text-based video segmentation. LBDT <cit.> proposes a Language-Bridged Duplex Transfer module to conduct spatial-temporal interaction explicitly between two independent 2D ConvNets in the encoding phase for RVOS. PMINet <cit.> performs multimodal interaction in each stage of the visual backbone, and the multimodal features are incorporated back into the visual backbone to guide the progressive learning of visual features. <cit.> implicitly utilizes individual word features to integrate visual features. These models still do not achieve outstanding performance due to the significant semantic gap between visual and linguistic information. Vision Transformers (ViTs) <cit.> have recently attracted much attention and have been widely used in various domains <cit.>, including the field of RVOS. MTTR <cit.> models the RVOS task as a parallel sequence prediction problem and outputs all objects in the video prior to selecting the one referred to by the text. ReferFormer <cit.> introduces a small set of object queries that are conditioned on the text expression to attend to the referred object only. VLT <cit.> utilizes word features for query generation and spatial dynamic fusion. Different from these models, we present a simple yet effective solution by adapting SAM to RVOS via a few minimal designs. §.§ Segment Anything Model Recently, SAM <cit.> established a foundation model for segmentation. The model is built around a promptable segmentation task and can transfer zero-shot to new image distributions and tasks. Following that, a large number of research studies based on SAM emerged. SAM has been widely applied in various fields <cit.>, e.g., medical image analysis <cit.>, remote sensing images <cit.>, video object tracking <cit.>, style transfer <cit.>, and 3D reconstruction <cit.>. In the medical image analysis domain, SAMed <cit.> customizes the SAM model for medical image segmentation by applying the low-rank (LoRA) fine-tuning strategy to the SAM image encoder, so that it performs better on semantic segmentation tasks for medical images. <cit.> demonstrates that SAM can achieve high segmentation accuracy for brain tumor MRI datasets in a point-to-mask setting. <cit.> assesses the zero-shot segmentation performance of the SAM model on representative segmentation tasks in whole slide imaging.
In the remote sensing domain, <cit.> introduces a domain-specific decoder, which learns the problem-specific semantics with only five labeled images. <cit.> develops an efficient pipeline with SAM to generate a large-scale remote sensing segmentation dataset named SAMRS. In the style transfer domain, Any-to-Any Style Transfer <cit.> enables users to specify which style region to select and which content regions to apply it to during style transfer. In the 3D reconstruction domain, SA3D <cit.> can segment any object in a 3D scene with one-shot manual prompting in a single rendered view. These advancements have shown the versatility of the foundation segmentation model. PerSAM <cit.> proposes a training-free personalization approach and a one-shot fine-tuning variant that trains only two parameters within 10 seconds for improved performance. HQ-SAM <cit.> designs a learnable High-Quality output token which is injected into SAM's mask decoder to predict high-quality masks. These models all focus on the improvement of SAM in the vision field. SAM-Track <cit.> and TAM <cit.> enable users to select multiple objects in videos for tracking <cit.>. However, none of these models are currently available for the RVOS task. In this paper, we propose an end-to-end RVOS model based on SAM for the first time. § REFSAM FOR REFERRING VIDEO OBJECT SEGMENTATION First, we present the preliminaries of SAM in Sec. <ref>. Then, in Sec. <ref>, we provide an overview of the proposed RefSAM model. More details of each component of RefSAM are presented in the remaining sections. §.§ Preliminaries Task Definition: Given a video clip ℒ = {I_t}_t=1^T with T frames and a referring expression E = { e_l}_l=1^L with L words, the goal of RVOS is to produce T-frame binary segmentation masks S = {s_t}_t=1^T, s_t∈ℝ^H × W of the referred object. Architecture of SAM: SAM <cit.> mainly consists of three components: an image encoder, a prompt encoder, and a mask decoder. The image encoder is a ViT-based <cit.> backbone for image feature extraction and can be applied prior to prompting the model. The prompt encoder encodes two sets of prompts, i.e., sparse (points and boxes) and dense (masks) prompts, to get interactive positional information and provide it to the mask decoder. The two-layer transformer-based mask decoder leverages the extracted image embedding, the prompt embedding, and the learnable output and prompt tokens for final mask prediction. SAM demonstrates strong zero-shot generalization in the segmentation task. However, SAM is not skilled at utilizing text for segmentation, and its training is very expensive. For example, training the ViT-H-based SAM model on SA-1B <cit.> requires 256 GPUs with a large batch size of 256 images. §.§ Overview of RefSAM's Architecture We introduce the RefSAM model to efficiently adapt SAM to the RVOS task and boost the potential segmentation capacity of SAM. As shown in Figure <ref>, it mainly consists of five key components: Visual Encoder, Text Encoder, Cross-Modal MLP, Dense Attention, and Mask Decoder. Firstly, we use the Visual Encoder of SAM to extract frame features as visual embeddings. Meanwhile, we use the text-to-text model (T5) <cit.> as the Text Encoder to extract linguistic embeddings. Then, we construct cross-modal Sparse Embeddings and Dense Embeddings to learn text-visual information and predict masks through the Cross-Modal MLP module. Next, RefSAM utilizes the Dense Attention module to fuse visual embeddings and sparse embeddings in order to obtain dense embeddings.
Finally, the Mask Decoder of SAM leverages sparse embeddings, dense embeddings, and visual embeddings for final mask prediction. During training, RefSAM reuses the pre-trained weights of SAM and the T5 text encoder while only fine-tuning three modules, i.e., the Cross-Modal MLP, Dense Attention, and Mask Decoder, enabling parameter-efficient fine-tuning. During inference, we directly output the mask predictions by selecting the masks with the highest score as the final results. §.§ Backbone §.§.§ Visual Encoder We start by adopting the image encoder of SAM as the visual encoder ℰ_v to extract the visual feature maps for each frame in a video clip, as shown in the light blue part at the top of Figure <ref>. The image encoder ℰ_v is an MAE <cit.> pre-trained ViT backbone <cit.>. Specifically, for each frame I_t in the video clip ℒ = {I_t}_t=1^T, the vision encoder ℰ_v is adopted to extract the feature map for this frame, i.e., f_t = ℰ_v(I_t)∈ℝ^C_v× H_0× W_0, where f_t is the corresponding feature map of frame I_t. By applying the vision encoder ℰ_v on each frame independently, a set of feature maps ℱ_v = {f_t}_t=1^T can be obtained for the T frames in the video clip. As SAM has strong zero-shot segmentation performance, we freeze the parameters of the image encoder ℰ_v to retain its feature extraction capability during the training process. §.§.§ Text Encoder At the same time, given the referring expression with L words, we utilize a large text-to-text language model (T5) <cit.> as the text encoder ℰ_t to get the corresponding linguistic embeddings, as shown in the light green part at the bottom of Figure <ref>. Specifically, given the referring expression E = { e_l}_l=1^L, these words are first tokenized as T = { t_l}_l=1^L. Then, we put these tokens into the text encoder to obtain the final embeddings. As our text encoder ℰ_t is reused from the T5 model, we extract the feature vector after the last hidden layer of the encoder in the T5 model as the word embedding, i.e., ℱ_e=ℰ_t(E) ∈ℝ^L × C_e. Here ℱ_e is a sequence of C_e-dimensional embeddings of the L words, i.e., ℱ_e={f_i}_i=1^L, where each word is represented by a C_e-dimensional embedding. Then the sentence-level embedding f_e^s∈ℝ^C_e can be obtained by simply applying a pooling operation on these word embeddings. §.§ Cross-modal MLP Based on the Text Encoder, we get the linguistic embeddings, including word and sentence embeddings. However, there is a significant semantic gap between the linguistic embedding space and the visual embedding space. Following MiniGPT-4 <cit.>, we design a cross-modal MLP which consists of one hidden layer L_s to effectively align the linguistic embedding space and the visual embedding space, as shown in the green part at the bottom of Figure <ref>. Specifically, for each word embedding f_i in ℱ_e, the sparse embedding can be obtained by adopting the cross-modal MLP L_s, i.e., f_i^s = L_s(f_i) ∈ℝ^C_v. In this way, sparse embeddings for the L words and the sentence can be obtained, which can be represented as ℱ_sparse={f_j^s}_j=1^L and f_e^'^s∈ℝ^C_v, respectively. §.§ Dense Attention To enhance the visual features and provide additional cues for the mask decoder, we introduce the Dense Attention module to fuse visual and linguistic features, as shown in the orange part in the top middle of Figure <ref>. Firstly, we concatenate the aligned sentence embedding f_e^'^s and the sparse embeddings ℱ_sparse={f_j^s}_j=1^L together: ℱ_fix = [f_e^'^s; ℱ_sparse] ∈ℝ^(L+1) × C_v.
In the next step, a similarity calculation between ℱ_fix and the visual embeddings ℱ_v is performed to get the spatial attention ℱ_sp. Then, the dot product is applied between ℱ_sp and ℱ_fix to get the spatial-language attention, i.e., ℱ_sl=ℱ_sp·ℱ_fix. Next, we concatenate the spatial-language attention ℱ_sl and the visual embeddings ℱ_v and apply a convolutional layer to reduce the number of channels to match the channel dimension of the visual embeddings, i.e., ℱ_dense=Conv([ℱ_sl; ℱ_v]). Here ℱ_dense is a sequence of dense embeddings for the T frames in the video clip, i.e., ℱ_dense = {f_k}_k=1^T, where f_k∈ℝ^C_v× H_0× W_0 is the dense embedding for the k-th frame. §.§ Mask Decoder The Mask Decoder of vanilla SAM leverages sparse embeddings (points and boxes) from the prompt encoder and dense embeddings (masks) from the SAM predictor to get the final predictions. Following this principle, we construct sparse and dense embeddings of the same shapes, which encode useful visual and linguistic features from the Cross-Modal MLP and the Dense Attention module. Then we put the sparse embeddings ℱ_sparse and dense embeddings ℱ_dense together with the visual embeddings ℱ_v into the Mask Decoder to get the mask predictions, i.e., M = Decoder(ℱ_sparse, ℱ_dense, ℱ_v), where M is the output of the Mask Decoder. Finally, the output in M with the highest score is taken as the predicted mask. § EXPERIMENTS §.§ Experiment Settings §.§.§ Datasets We conduct experiments on two challenging referring video object segmentation datasets: Ref-Youtube-VOS <cit.> and Ref-DAVIS17 <cit.>. Ref-Youtube-VOS is a large-scale referring video object segmentation dataset, which contains around 15K referring expressions for 3,900+ videos. Each video has pixel-level instance segmentation annotations for every fifth frame. Ref-DAVIS17 is built upon DAVIS17 <cit.> by providing the language description for each specific object in each video and contains 90 videos. We follow the common practice and use the default split for training and testing. §.§.§ Evaluation Metrics The primary evaluation metrics of Ref-Youtube-VOS and Ref-DAVIS17 are the average of the region similarity (𝒥) and the contour accuracy (ℱ), denoted by 𝒥&ℱ. For Ref-Youtube-VOS, since the annotations of the validation set are not publicly available, we evaluate our method on the official challenge server[https://codalab.lisn.upsaclay.fr/competitions/3282]. As for Ref-DAVIS17, it is evaluated using the official evaluation code[https://github.com/davisvideochallenge/davis2017-evaluation]. §.§ Implementation Details §.§.§ Model Settings We use the image encoder of SAM, a ViT backbone <cit.>, as our visual encoder. For the text encoder, we use the Hugging Face <cit.> implementation of T5-3b <cit.>. The Cross-Modal MLP consists of one hidden layer, which employs the rectified linear unit (ReLU) activation function. The convolution layer we use in Dense Attention applies 1 × 1 filters to the 512-channel input feature maps, resulting in 256 output feature maps. We freeze the parameters of the Visual Encoder and Text Encoder during the entire training stage. We use the last-stage features from the Visual Encoder as the input to the Dense Attention module and the Mask Decoder. The dimension of the input embeddings (visual embeddings, sparse embeddings, and dense embeddings) to the Mask Decoder is 𝒞 = 256.
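For concreteness, the following PyTorch sketch shows one way the Cross-Modal MLP and the Dense Attention module described above could be wired together. It is our own illustration rather than the released implementation: the hidden width, the softmax normalization of the similarity map, and the toy tensor shapes are assumptions on top of the text.

```python
import torch
import torch.nn as nn

class CrossModalMLP(nn.Module):
    """One-hidden-layer MLP projecting text embeddings (dim C_e) into SAM's prompt space (dim C_v)."""
    def __init__(self, c_text=1024, c_vis=256, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c_text, hidden), nn.ReLU(), nn.Linear(hidden, c_vis))

    def forward(self, text_emb):          # (B, L+1, C_e): word embeddings plus the pooled sentence embedding
        return self.mlp(text_emb)         # (B, L+1, C_v): sparse prompt embeddings

class DenseAttention(nn.Module):
    """Fuses the sparse language prompts with the frame features into dense embeddings."""
    def __init__(self, c_vis=256):
        super().__init__()
        self.fuse = nn.Conv2d(2 * c_vis, c_vis, kernel_size=1)    # 512 -> 256 channels via 1x1 filters

    def forward(self, f_vis, f_fix):
        # f_vis: (B, C, H, W) frame features; f_fix: (B, L+1, C) sentence + word prompts
        B, C, H, W = f_vis.shape
        v = f_vis.flatten(2)                                      # (B, C, H*W)
        attn = torch.einsum("blc,bcn->bln", f_fix, v)             # similarity map F_sp, (B, L+1, H*W)
        attn = attn.softmax(dim=1)                                # normalization: our assumption
        f_sl = torch.einsum("bln,blc->bcn", attn, f_fix)          # spatial-language attention F_sl
        f_sl = f_sl.view(B, C, H, W)
        return self.fuse(torch.cat([f_sl, f_vis], dim=1))         # dense embeddings F_dense, (B, C, H, W)

# Toy shapes: one 64x64 feature map and a 7-word expression plus its sentence vector.
mlp, dense_attn = CrossModalMLP(), DenseAttention()
prompts = mlp(torch.randn(1, 8, 1024))
dense = dense_attn(torch.randn(1, 256, 64, 64), prompts)
print(dense.shape)    # torch.Size([1, 256, 64, 64])
```

The resulting sparse prompts and dense embeddings would then be passed, together with the visual embeddings, to SAM's Mask Decoder as described above.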
§.§.§ Training and Inference Details Following <cit.>, our experiments adopt a pre-training and fine-tuning pipeline. We perform pre-training on the Ref-COCO dataset <cit.> for 15 epochs. In practice, this does not reach the optimal number of epochs, but it is sufficient for practical use, demonstrating the efficiency of our method. We then fine-tune the model on Ref-Youtube-VOS for 4 epochs. The data augmentation includes random horizontal flip, random resize, random crop, and photometric distortion. We employ Dice loss and Focal loss as our primary loss functions. Our model is implemented in PyTorch. Training is conducted on 8 NVIDIA Tesla A100 GPUs. During inference, the video frames are down-scaled to 360p. We directly output the predicted segmentation masks without using any post-processing tricks. §.§ Main Results §.§.§ Ref-DAVIS17 We conduct experiments on the Ref-DAVIS17 dataset, and the results are shown in Table <ref>. Our method RefSAM achieves 66.1 in terms of 𝒥 & ℱ, which is 4.5 points higher than the state-of-the-art method VLT <cit.>, verifying the superiority of our method. We attribute this to its excellent capability for object segmentation and cross-modal understanding. Consequently, by taking advantage of the powerful foundation model SAM, our RefSAM can correctly identify and accurately segment the target object in the video clip. To have a closer look at the effectiveness of our method, we also compare it with some recent SAM-based methods. Since there are no off-the-shelf SAM-based models for RVOS, existing SAM-based methods, such as SAM-Track <cit.> and PerSAM <cit.>, can be adapted to the RVOS task by providing bounding boxes in the first frame. Specifically, simple SAM-based RVOS baselines can be derived by combining SAM-Track or PerSAM with Grounding DINO <cit.>, an object detector that can provide bounding boxes of objects in the first frame given referring expressions. As shown in Table <ref>, these naive solutions exhibit satisfactory performance, surpassing the CMSA and URVOS methods, yet significantly lagging behind our RefSAM model. Firstly, these methods employ a two-stage pipeline that can lead to sub-optimal performance on the downstream segmentation task due to inaccurate bounding boxes detected by the object detector in the initial frame. Secondly, the absence of specific fine-tuning on the RVOS dataset poses a challenge, primarily due to the substantial model size and the inherent two-stage design. It is also noteworthy that the involvement of individual models for each sub-task brings extra difficulties for model deployment. These findings underscore the efficacy of our RefSAM's end-to-end design and parameter-efficient tuning strategy. §.§.§ Ref-Youtube-VOS We also conduct experiments on the Ref-Youtube-VOS dataset, and the results are summarized in Table <ref>. Our RefSAM model achieves a notable performance of 55.1 in terms of 𝒥 & ℱ, surpassing the PerSAM and SAM-Track baselines. Additionally, it outperforms previous methods such as CMSA, URVOS, and PMINet, and demonstrates comparable performance to MTTR, albeit falling short of the state-of-the-art ReferFormer method on this dataset. It is important to note that our model solely utilizes the last-stage feature of the visual encoder as visual embeddings, while ReferFormer employs all features from the last three stages to form multi-scale visual embeddings, which proves advantageous in segmenting objects of varying scales.
In the future, we plan to improve our model by investigating more advanced designs. §.§ Ablation Study We conduct detailed ablation studies on the proposed RefSAM using ViT-B as the backbone, analyzing the impact of the key modules and the influence of hyper-parameters. §.§.§ Effect of the Key Modules RefSAM employs a Cross-Modal MLP and Dense Attention for referring video object segmentation. Table <ref> summarizes the results of our baseline model when excluding some of the key modules. In each experiment, we first pre-train the models on Ref-COCO for 15 epochs and then fine-tune them on Ref-Youtube-VOS for 4 epochs. We evaluate them on Ref-DAVIS17. Compared to our baseline, removing the Cross-Modal MLP or the Dense Attention leads to a performance drop of 1.56 J&F-Mean and 2.38 J&F-Mean, respectively, validating their importance in aligning and fusing visual and language features. §.§.§ Influence of Hyper-parameter Settings Figure <ref> presents the results of different learning rate settings for the learnable modules of RefSAM, including the Cross-Modal MLP, the Dense Attention, and the Mask Decoder. As can be seen, RefSAM converges faster and better when the learning rate is set to 1e-4 for the Cross-Modal MLP and Dense Attention, and 1e-6 for the Mask Decoder. Furthermore, we study the influence of the number of hidden layers in the Cross-Modal MLP, as shown in Figure <ref>. As can be seen, using a single hidden layer delivers faster and better convergence, which is the default setting. In Figure <ref>, we investigate the choice of linguistic feature in Dense Attention. As can be seen, only using the word feature leads to a slightly better result, which is the default setting. §.§ Influence of Model Size We then investigate the influence of model size, specifically the performance of using different sizes of the Visual Encoder, including ViT-B, ViT-L, and ViT-H. For each model, we pre-train it on Ref-COCO for 15 epochs, then fine-tune it on Ref-Youtube-VOS for 4 epochs, and evaluate it on Ref-DAVIS17. We do not use the data augmentation described in Sec. <ref>. The results are shown in Figure <ref> and Table <ref>. The findings indicate that employing a larger backbone for visual feature extraction yields superior performance compared to a smaller one, attributed to the enhanced representation capacity of larger vision transformers. Furthermore, the results exhibit a consistent improvement in performance with increasing model size, highlighting the good scalability of our RefSAM model. §.§ Visualization Results We show the visualization results of our RefSAM model in Figure <ref>. It can be seen that RefSAM is capable of effectively segmenting and tracking the referred object even in challenging scenarios, such as variations in person poses and occlusions between instances. Furthermore, we present the results of different models in Figure <ref>. It is clear that our RefSAM demonstrates significantly enhanced cross-modal understanding capability, particularly evident in handling vague language descriptions (Figure <ref>) and resolving appearance ambiguity among similar objects (Figure <ref>). §.§ Model Complexity Analysis In this section, we analyze the complexity of various models. As depicted in Table <ref>, our model has a notably smaller number of learnable parameters than all other models, highlighting the parameter-efficient nature of the tuning strategy employed when training our RefSAM model.
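The learnable-parameter comparison above follows directly from which components are frozen. The generic PyTorch sketch below illustrates this bookkeeping with stand-in modules; it is not the actual RefSAM code, and the layer choices are placeholders.

```python
import torch.nn as nn

def freeze(module: nn.Module):
    """Freeze a pretrained component so it contributes no learnable parameters."""
    for p in module.parameters():
        p.requires_grad = False

def count_params(model: nn.Module):
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable

# Hypothetical composition mirroring the text: frozen Visual/Text Encoders,
# trainable Cross-Modal MLP, Dense Attention, and Mask Decoder.
class ToyRefSAM(nn.Module):
    def __init__(self):
        super().__init__()
        self.visual_encoder = nn.Conv2d(3, 256, 16, stride=16)     # stand-in for SAM's ViT
        self.text_encoder = nn.Embedding(32000, 1024)              # stand-in for T5
        self.cross_modal_mlp = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 256))
        self.dense_attention = nn.Conv2d(512, 256, 1)
        self.mask_decoder = nn.Conv2d(256, 1, 1)

model = ToyRefSAM()
freeze(model.visual_encoder)
freeze(model.text_encoder)
total, trainable = count_params(model)
print(f"trainable {trainable:,} / total {total:,} parameters")
```

Only the parameters that remain trainable enter the optimizer, which is what keeps the fine-tuning cost low.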
Furthermore, Table <ref> presents the inference speed of various models. It is apparent that our RefSAM exhibits a marginally slower runtime compared to ReferFormer, yet outperforms SAM-Track and PerSAM in terms of inference speed. When employing ViT-B as the visual encoder, RefSAM achieves a faster inference speed of 9.2 FPS, outperforming all other methods. § CONCLUSION This study pioneers the adaptation of the foundational segmentation model, the Segment Anything Model, to the referring video object segmentation task. We propose the novel RefSAM model, which incorporates efficient designs to effectively bridge the semantic gap between the visual and language domains, significantly enhancing the cross-modal understanding capability of SAM. By employing a parameter-efficient tuning strategy, RefSAM achieves efficient training through adjustments to a small number of learnable parameters. Extensive experiments on the Ref-Youtube-VOS and Ref-DAVIS17 datasets validate the superior effectiveness of our RefSAM model compared to existing methods.
http://arxiv.org/abs/2307.01283v1
20230703181353
Wavefunction tomography of topological dimer chains with long-range couplings
[ "F. Pellerin", "R. Houvenaghel", "W. A. Coish", "I. Carusotto", "P. St-Jean" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall" ]
Author affiliations: Département de Physique, Université de Montréal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec, Canada H3C 3J7; Department of Physics, McGill University, 3600 rue University, Montreal, Qc H3A 2T8, Canada; Pitaevskii BEC Center, INO-CNR and Dipartimento di Fisica, Università di Trento, via Sommarive 14, I-38123 Trento, Italy.
The ability to tailor with high accuracy the inter-site connectivity in a lattice is a crucial tool for realizing novel topological phases of matter. Here, we report the experimental realization of photonic dimer chains with long-range hopping terms of arbitrary strength and phase, providing a rich generalization of the celebrated Su-Schrieffer-Heeger model. Our experiment is based on a synthetic dimension scheme involving the frequency modes of an optical fiber loop platform. This setup provides direct access to both the band dispersion and the geometry of the Bloch wavefunctions throughout the entire Brillouin zone, allowing us to extract the winding number for any possible configuration. Finally, we highlight a topological phase transition solely driven by a time-reversal-breaking synthetic gauge field associated with the phase of the long-range hopping, providing a route for engineering topological bands in photonic lattices belonging to the AIII symmetry class.
Wavefunction tomography of topological dimer chains with long-range couplings
P. St-Jean
August 1, 2023
=============================================================================
Introduction – Engineering materials with specific topological properties requires acute control over the hybridization of electronic orbitals <cit.>. As was pioneered by the Haldane model <cit.>, introducing next-nearest-neighbor coupling terms with arbitrary phases strongly enriches the variety of phenomena that can be observed in topological band models <cit.>. Furthermore, such long-range connectivity is expected to facilitate the stabilization of strongly correlated states of matter <cit.>. The experimental implementation and control of sizable hopping terms extending beyond nearest neighbors is typically not a straightforward task <cit.>. In usual realizations of lattice models based on condensed matter or ultracold atomic systems, hopping typically occurs via tunneling processes mediated by the spatial overlap of wavefunctions at different sites and is therefore dominated by short-range processes <cit.>. The situation is very different if we consider lattices extending along synthetic dimensions <cit.>. Here, one or more of the spatial coordinates are replaced by some other internal degrees of freedom such as spin or linear momentum in ultracold atomic gases <cit.>, or, in photonic systems, frequency <cit.>, angular momentum <cit.>, spatial <cit.>, or temporal modes <cit.>. In the specific case of synthetic photonic lattices, a wide variety of hopping terms can be engineered through suitable modulation of the relevant degrees of freedom <cit.>, which has led to pioneering achievements such as the observation of the four-dimensional quantum Hall effect <cit.>, the non-Hermitian skin effect <cit.>, non-abelian excitations <cit.>, and treelike photonic networks <cit.>.
In this Letter, we use the frequency of the photon modes in an optical fiber loop as a synthetic dimension <cit.> to experimentally engineer lattices with topological bands and tunable long-range hopping terms. In particular, we realize one-dimensional dimerized lattices where the two sites within each unit cell are encoded in the symmetric and antisymmetric combinations of clockwise and counter-clockwise eigenmodes of a single loop. Selective hopping processes between specific pairs of sites at arbitrary distance are then introduced through a dynamical modulation of the optical fiber at a frequency resonant with their frequency difference <cit.>. This offers full control over the magnitude and the phase of the hopping terms. When only nearest-neighbor couplings are present, our lattice realizes the well-known Su-Schrieffer-Heeger (SSH) model <cit.>, which displays two topologically distinct phases associated with the integer-valued winding number being 𝒲=0,1. A wider variety of phases with 𝒲 ranging from -1 to +2 is realized by adding 3^rd-nearest-neighbor hopping terms with specific amplitudes <cit.>. The entire phase diagram of this Hamiltonian is reconstructed in terms of the winding number that we experimentally extract using a wavefunction tomography technique for the Bloch modes. Finally, we report the generation of a time-reversal-breaking synthetic gauge field, which allows to change the band topology without modifying the strength of the couplings—a stringent prerequisite in the conventional SSH model. The topological model – We consider in this work one-dimensional dimerized chains that present chiral symmetry, i.e. where the two atoms in each unit cell are identical and hopping processes only connect atoms in different sublattices (see Fig. <ref> (a)). Under the tight-binding approximation, the generic Hamiltonian describing such lattices is given by: H(k)=([ 0 g(k); g^*(k) 0 ]), where the off-diagonal term g(k)=|g(k)|e^iϕ(k) describes the hopping terms in Fourier space. Diagonalization of this Hamiltonian gives a band dispersion E_±(k)=± |g(k)|, and a phase difference ϕ(k) for the components of the Bloch modes on the two sublattices: |k_±⟩ = 1/√(2)[ 1; ± e^-iϕ(k) ]. In this framework, the topological phases are characterized by plotting the trajectory of g(k) in the complex plane throughout the Brillouin zone. The number of times the trajectory winds around the origin as k spans the Brillouin zone is called the winding number 𝒲 and is linked to the number of edge states present at the boundaries of the lattice. Although the winding number depends on the definition of the unit cell and is thus ill-defined for infinite lattices, it can be identified unambiguously for finite lattices where the definition of the unit cell is imposed by how the chain is terminated. The simplest such configuration, the SSH model, has (potentially complex) nearest-neighbor hopping amplitudes only: g(k)=a+b^*e^+ikl with l the lattice constant. This model hosts a winding number 𝒲=0 and 1 for |a|>|b| and |a|<|b| respectively (Fig. <ref> (b)). With the addition of 3^rd-nearest-neighbor hopping terms to the SSH Hamiltonian, the off-diagonal term becomes: g(k) = a + b^*e^+ikl + ce^-ikl + d^*e^+2ikl. As a non-trivial consequence of these longer-range couplings, the winding number can now take values ranging from -1 to +2 upon varying the ratios between the different hopping amplitudes, see Fig. 
<ref> (c) for the cases where a=b^*, and the ratios c/a and d^*/a are real, equivalent to a model with real hopping parameters after a gauge transformation g(k)→ e^-iϕ_ag(k) (ϕ_a=arg(a)). This choice reproduces the phase diagram from Ref. Maffei2018. Our aim is to build a photonic platform that allows us to simulate arbitrary dimer-chain configurations, even including complex c/a and d^*/a ratios, and to extract 𝒲 by measuring g(k) throughout the Brillouin zone. The synthetic photonic lattice– In our experiment, we use the frequency of photons confined in an optical fiber loop as a synthetic dimension. The underlying principle of this approach, inspired by Refs. <cit.>, is to emulate the spatial periodicity of a lattice by exploiting the periodicity in frequency of the cavity spectrum with a period given by the free spectral range (FSR) l=Ω. Coupling between specific eigenmodes is realized by locally modulating the refractive index of the cavity material with electro-optic phase modulators (EOMs) driven at a frequency equal to the corresponding mode spacing. In order to create the alternating hopping terms of a dimer chain, it is necessary to engineer a cavity with two different frequency splittings and to drive them independently. To do this, we use a single fiber loop and couple the degenerate clockwise (CW) and counter-clockwise (CCW) eigenmodes using a 75:25 optical fiber coupler (see Fig. <ref> (d)). The resulting hybridized modes are symmetric and antisymmetric superpositions of the CW and CCW modes: |m, ±⟩ = 1/√(2)(|m, CCW⟩±|m, CW⟩), where m is the index of the uncoupled modes. In our setup, the splitting between |m, ±⟩ is δ/2π = 3.43 MHz and the FSR is Ω /2π = 10.03 MHz (Fig. <ref> (e)). In order to further optimize the coupling efficiency between these eigenmodes, we use a pair of circulators that spatially separate the CW and CCW modes, allowing for independent modulation of each of them. In particular, by driving the EOMs with the same electrical signal amplitude but with a π phase shift, V_cw(t)=-V_ccw(t), we maximize the coupling between states of opposite parity while suppressing all other couplings <cit.>. This choice enforces chiral symmetry by suppressing hopping processes between sites belonging to the same sublattice. SSH lattices – As a first application of our scheme, we realize an SSH Hamiltonian by driving the EOMs with a bichromatic signal V_ccw/cw(t) = ±i/2 (V_a e^-iδ t + V_b e^-i(Ω-δ) t) + c.c. where ± accounts for the π phase shift between the two modulators on the CCW and CW paths. This modulation gives rise to effective hopping terms a=iη V_a and b^*=iη V_b^*, where η is related to the electro-optical constant and has units of rad s^-1V^-1 <cit.>. To measure the band dispersion, we probe the time-resolved transmission of the cavity using a high-bandwidth photodiode while scanning the frequency of a continuous-wave excitation laser. When the laser is in resonance with a synthetic Bloch mode |k_±⟩ and the cavity decay rate γ is much smaller than the Bloch bandwidth, the transmitted field intensity consists of a train of narrow pulses with an overall intensity modulation of frequency δ: I_(±)(t)/|F|^2 = 1 - (κ/γ)(2π/Ω) D_T(t-k) [1 ±cos(δ t - ϕ(k))], where F is the input field amplitude, κ is the input-output coupling strength, and D_T(t)=∑_n δ(t-n T) is a Dirac comb with period T=2π/Ω.
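To make the readout implied by this expression concrete, the numpy sketch below (our own illustration, not the authors' acquisition code) samples the pulse-peak envelope for one Bloch mode and recovers ϕ(k) from the phase of its δ-frequency beat; the Dirac comb and the κ/γ prefactor are collapsed into a single schematic amplitude.

```python
import numpy as np

# Parameters quoted in the text: FSR Omega/2pi = 10.03 MHz, splitting delta/2pi = 3.43 MHz.
Omega = 2 * np.pi * 10.03e6
delta = 2 * np.pi * 3.43e6
T = 2 * np.pi / Omega                      # round-trip period, i.e. the size of one Brillouin zone

def pulse_peaks(k, phi_k, n_round_trips=400, amp=0.5):
    """Peak-intensity envelope sampled at the pulse arrival times t_n = k + n*T.
    The Dirac comb and the (kappa/gamma)(2pi/Omega) prefactor are absorbed into `amp` (schematic)."""
    t = k + T * np.arange(n_round_trips)
    return t, 1.0 - amp * (1.0 + np.cos(delta * t - phi_k))

def extract_phase(t, envelope):
    """Recover phi(k) as the phase of the delta-frequency Fourier component of the beat."""
    beat = (1.0 - envelope) - np.mean(1.0 - envelope)    # ~ amp * cos(delta*t - phi)
    comp = np.sum(beat * np.exp(-1j * delta * t))
    return float(-np.angle(comp)) % (2 * np.pi)

k0, phi_true = 0.3 * T, 1.1                # an arbitrary pulse arrival time and Bloch phase
t, env = pulse_peaks(k0, phi_true)
print(round(extract_phase(t, env), 3), "vs", phi_true)
```

Repeating this phase extraction for every arrival time k within one period reconstructs ϕ(k), and hence g(k), over the whole Brillouin zone.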
Interestingly, in our synthetic-dimension scheme, the effective crystal momentum k has units of time and corresponds to the pulse arrival time <cit.> and T is the size of the effective Brillouin zone. A detailed derivation of Eq. (<ref>) and a discussion of the underlying assumptions can be found in the supplement, Ref. <cit.>. The slow modulation arises from the fact that we measure light transmitted from the CCW mode and the Bloch eigenmodes consist of linear superpositions of symmetric and anti-symmetric combinations of CW and CCW modes that oscillate at different frequencies. Hence, the signal exhibits a beating at the frequency difference ω_-- ω_+=δ. Importantly, the phase of this modulation is exactly the relative phase of the sublattice amplitudes in the Bloch modes, allowing experimental access to the phase of the wavefunction in Eq. (<ref>) at every k point. Figures <ref> (a) and (b) show the transmitted intensity as a function of time (averaged over multiple Brillouin zones to erase the effect of the modulation) and as a function of the laser detuning for the cases |a|>|b| and |a|<|b|, corresponding to the trivial and topological phases of the SSH model, respectively. In both cases, we clearly observe the two bands of the SSH Hamiltonian with a well-defined gap, on the order of E_g∼ 200 kHz for the parameters of the experiment. The different topology of these two cases can be highlighted by probing the trajectory of g(k) in the complex plane: |g(k)| is extracted from the measurement of the band structure, and the phase ϕ(k)=arg(g(k)) is obtained by Fourier transforming the slow modulation of the output signal <cit.>. Figure <ref> (e) reports this slow modulation, as a function of time and k-vector, for both the trivial and topological phases. Extracting the phase of this modulation at every k, we can track the trajectory of g(k) throughout the entire Brillouin zone (Fig. <ref> (c)-(d)), which winds around the origin in the topological case (d) but not in the trivial case (c). Generalized SSH lattices– Having demonstrated the ability to extract both the band structure and the Bloch wavefunctions for SSH chains, we now examine the impact of 3^rd-nearest-neighbor hopping amplitudes. In our platform, long-range hopping processes are straightforwardly implemented by adding appropriate higher-frequency components to the signal sent to the EOMs: V_ccw,cw(t) = ±i/2(V_a e^-iδ t + V_b e^-i(Ω-δ) t + V_c e^-i(Ω + δ)t + V_d e^-i(2Ω - δ)t ) + c.c., where c=- iη V_c and d^*=-iη V_d^* <cit.>. Here we consider cases with a=b^* and only the ratios c/a and d^*/a are changed. Keeping these ratios real, this allows exploring the full phase diagram presented in Fig. <ref> (c). Figure <ref> presents the experimental data for different ratios of long-range to nearest-neighbor hopping amplitudes. Each panel presents a specific case corresponding to one of the possible values of the winding number in the topological phase diagram: 𝒲=-1 (panel a), 𝒲=0 (b), 𝒲=+1 (c) and 𝒲=+2 (d). On the right of each panel, we indicate with a red star the position in the phase diagram of Fig. <ref> (c) and we give the effective hopping amplitudes used in the experiment. The experimental measurements of the band structure (top left of each panel) and of the trajectory of g(k) (top center) are obtained in a similar fashion as for the previous SSH case. Finally, the bottom row presents tight-binding calculations of the band structure and the g(k) trajectory using the same hopping ratios as in the experiment. 
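Tight-binding curves of the kind shown in the bottom rows can be generated directly from the off-diagonal term g(k) introduced earlier. The sketch below is our own reference implementation, not the authors' analysis code; it returns the band dispersion ±|g(k)| and the winding number for arbitrary complex hoppings a, b, c, d.

```python
import numpy as np

def g_of_k(k, a, b, c=0.0, d=0.0, l=1.0):
    """Off-diagonal element of the chiral Hamiltonian:
    g(k) = a + b* exp(+i k l) + c exp(-i k l) + d* exp(+2 i k l)."""
    return (a + np.conj(b) * np.exp(1j * k * l)
              + c * np.exp(-1j * k * l) + np.conj(d) * np.exp(2j * k * l))

def bands_and_winding(a, b, c=0.0, d=0.0, l=1.0, nk=2001):
    k = np.linspace(0.0, 2.0 * np.pi / l, nk)     # one full Brillouin zone
    g = g_of_k(k, a, b, c, d, l)
    energies = np.abs(g)                           # E_pm(k) = +/- |g(k)|
    phase = np.unwrap(np.angle(g))                 # trajectory of g(k) in the complex plane
    winding = int(np.rint((phase[-1] - phase[0]) / (2.0 * np.pi)))
    return k, energies, winding

# Example: a = b* with a strong 3rd-nearest-neighbour term pushes the winding number to 2.
_, _, W = bands_and_winding(a=1.0, b=1.0, d=2.0)
print("winding number:", W)    # -> 2
```

Scanning the hopping ratios c/a and d*/a (and, for the time-reversal-breaking case discussed below, the phase of d) reproduces the different regions of the phase diagram.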
We observe an excellent agreement between measurements and calculations, which validates our approach to unambiguously identify the winding number, including the full trajectory of g(k). Time-reversal-breaking topological phases – Finally, we demonstrate how the phases of the 3^rd-nearest-neighbor couplings can be tuned to break time-reversal symmetry. Adjusting these phases can induce a topological phase transition without modifying the hopping strengths. This is in sharp contrast with the conventional SSH model where it is necessary to adjust the coupling strengths to change 𝒲, because any non-trivial phase of the hopping parameters can always be gauged away. This is a consequence of the fact that these dimer chains with broken time-reversal symmetry belong to the AIII topological class rather than the BDI class to which the SSH model and its extensions with time-reversal belong. The relative phase between the different hopping terms of the Hamiltonian is experimentally realized by adding a phase to V_a,b,c,d in Eq. (<ref>). The relative phase between one of the long-range couplings and the other couplings effectively induces a synthetic gauge field for photons <cit.>. This is best seen by reformulating the dimerized chain as a ladder-like lattice (see Fig. <ref> (a)). In this equivalent picture, one clearly sees how the phase of d=|d|e^-iθ gives rise to flux plaquettes similar to those in ladder models with a staggered magnetic field. This gauge field provides an additional degree of freedom to explore a 3-dimensional topological phase space that extends the phase diagram presented in Fig. <ref> (c) to the case of complex-valued d^*/a ratios. A similar argument can of course be used for the c couplings. An example of a trajectory in this extended phase space is schematically depicted in Fig. <ref> (b), where the orange line goes through points where the magnitude of all hopping amplitudes remains constant, but the phase of d^*/a evolves from 0 to π. Figs. <ref> (c)-(e) present the measured (top) and calculated (bottom) band structures (left) and trajectories of g(k) (right) along this path. Once again, except for the small deviations observed in Figs. <ref>(d,e) near gap closures due to the finite decay rate γ, an overall excellent agreement is found between the experiment and the theory. In particular, the breaking of time-reversal induced by the gauge field leads to a clear asymmetry of the band structure (i.e. E(k)≠ E(-k)). Moreover, as the gauge field is increased, we observe a closing and re-opening of the energy gap describing a topological phase transition. This transition is clearly seen through the measurement of the trajectory of g(k) whose winding number changes from 2 to 1. Conclusions– In this work, we have exploited a synthetic dimension scheme based on an optical fiber loop to realize a generalized dimer chain model including long-range and/or time-reversal breaking hopping terms. Experimental signatures of the non-trivial band topology have been obtained by reconstructing the geometry of the Bloch wavefunctions throughout the entire Brillouin zone. The natural next step will be to relate this microscopic characterization of the topology to macroscopic observables such as a driven-dissipative version of the mean chiral displacement <cit.> and edge states in the presence of frequency-space potentials, as proposed in <cit.> and pioneered in <cit.>. 
Setting up these tools will be instrumental in enabling investigations of more complex Hamiltonians in synthetic dimensions, involving non-Hermiticity <cit.>, higher dimensions <cit.>, quantum states of light <cit.>, non-Markovian dynamics <cit.>, and/or optical nonlinearities <cit.>. Note added– During the writing of this letter, this recent work <cit.> demonstrating the extraction of the Zak phase of SSH lattices in the synthetic frequency dimension came to our attention. PSJ acknowledges financial support from Québec's Minstère de l'Économie, de l'Innovation et de l'Énergie. PSJ and WAC acknowledge financial support from the Fonds de Recherche–Nature et Technologies (FRQNT) and from the Natural Sciences and Engineering Research Council (NSERC). FP acknowledges financial support from FRQNT and RH from MITACS. IC acknowledges continuous collaboration with Tomoki Ozawa and Greta Villa on this topic. IC acknowledges financial support from the Provincia Autonoma di Trento and from the Q@TN initiative. 43 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Bradlyn et al.(2017)Bradlyn, Elcoro, Cano, Vergniory, Wang, Felser, Aroyo, and Bernevig]Bradlyn2017 author author B. Bradlyn, author L. Elcoro, author J. Cano, author M. G. Vergniory, author Z. Wang, author C. Felser, author M. I. Aroyo, and author B. A. Bernevig, 10.1038/nature23268 journal journal Nature 2017 547:7663 volume 547, pages 298 (year 2017)NoStop [Haldane(1988)]Haldane1988 author author F. D. Haldane, 10.1103/PhysRevLett.61.2015 journal journal Physical Review Letters volume 61, pages 2015 (year 1988)NoStop [Chiu et al.(2016)Chiu, Teo, Schnyder, and Ryu]Chiu2016 author author C. K. Chiu, author J. C. Teo, author A. P. Schnyder, and author S. Ryu, 10.1103/RevModPhys.88.035005 journal journal Reviews of Modern Physics volume 88, pages 035005 (year 2016)NoStop [Bansil et al.(2016)Bansil, Lin, and Das]Bansil2016 author author A. Bansil, author H. Lin, and author T. Das, 10.1103/REVMODPHYS.88.021004/FIGURES/15/MEDIUM journal journal Reviews of Modern Physics volume 88, pages 021004 (year 2016)NoStop [Kapit and Mueller(2010)]Kapit2010 author author E. Kapit and author E. Mueller, 10.1103/PHYSREVLETT.105.215303/FIGURES/3/MEDIUM journal journal Physical Review Letters volume 105, pages 215303 (year 2010)NoStop [Landig et al.(2016)Landig, Hruby, Dogra, Landini, Mottl, Donner, and Esslinger]Landig2016 author author R. Landig, author L. Hruby, author N. Dogra, author M. Landini, author R. Mottl, author T. Donner, and author T. Esslinger, 10.1038/nature17409 journal journal Nature 2016 532:7600 volume 532, pages 476 (year 2016)NoStop [Jaksch et al.(1998)Jaksch, Bruder, Cirac, Gardiner, and Zoller]Jaksch1998 author author D. Jaksch, author C. Bruder, author J. I. Cirac, author C. W. Gardiner, and author P. Zoller, 10.1103/PhysRevLett.81.3108 journal journal Physical Review Letters volume 81, pages 3108 (year 1998)NoStop [Bloch et al.(2012)Bloch, Dalibard, and Nascimbène]Bloch2012 author author I. Bloch, author J. Dalibard, and author S. Nascimbène, 10.1038/nphys2259 journal journal Nature Physics 2012 8:4 volume 8, pages 267 (year 2012)NoStop [Ozawa and Price(2019)]Ozawa2019b author author T. Ozawa and author H. M. 
[10] A. Celi, P. Massignan, J. Ruseckas, N. Goldman, I. B. Spielman, G. Juzeliunas, and M. Lewenstein, Physical Review Letters 112, 043001 (2014).
[11] T. Chalopin, T. Satoor, A. Evrard, V. Makhalov, J. Dalibard, R. Lopes, and S. Nascimbene, Nature Physics 16, 1017 (2020).
[12] F. A. An, B. Sundar, J. Hou, X. W. Luo, E. J. Meier, C. Zhang, K. R. Hazzard, and B. Gadway, Physical Review Letters 127, 130401 (2021).
[13] T. Ozawa, H. M. Price, N. Goldman, O. Zilberberg, and I. Carusotto, Physical Review A 93, 043827 (2016).
[14] A. Dutt, M. Minkov, Q. Lin, L. Yuan, D. A. Miller, and S. Fan, Nature Communications 10, 1 (2019).
[15] X. W. Luo, X. Zhou, J. S. Xu, C. F. Li, G. C. Guo, C. Zhang, and Z. W. Zhou, Nature Communications 8, 1 (2017).
[16] F. Cardano, A. D'Errico, A. Dauphin, M. Maffei, B. Piccirillo, C. De Lisio, G. De Filippis, V. Cataudella, E. Santamato, L. Marrucci, M. Lewenstein, and P. Massignan, Nature Communications 8, 1 (2017).
[17] E. Lustig, S. Weimann, Y. Plotnik, Y. Lumer, M. A. Bandres, A. Szameit, and M. Segev, Nature 567, 356 (2019).
[18] A. Regensburger, C. Bersch, B. Hinrichs, G. Onishchukov, A. Schreiber, C. Silberhorn, and U. Peschel, Physical Review Letters 107, 233902 (2011).
[19] Q. Lin, L. Yuan, M. Xiao, and S. Fan, Optica 5, 1396 (2018).
[20] O. Zilberberg, S. Huang, J. Guglielmon, M. Wang, K. P. Chen, Y. E. Kraus, and M. C. Rechtsman, Nature 553, 59 (2018).
[21] S. Weidemann, M. Kremer, T. Helbig, T. Hofmann, A. Stegmaier, M. Greiter, R. Thomale, and A. Szameit, Science 368, 311 (2020).
[22] L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Nature Physics 16, 761 (2020).
[23] K. Wang, A. Dutt, C. C. Wojcik, and S. Fan, Nature 598, 59 (2021).
[24] A. Senanian, L. G. Wright, P. F. Wade, H. K. Doyle, and P. L. McMahon, arXiv:2208.05088 (2022).
[25] A. Dutt, Q. Lin, L. Yuan, M. Minkov, M. Xiao, and S. Fan, Science 367, 59 (2020).
[26] W. P. Su, J. R. Schrieffer, and A. J. Heeger, Physical Review Letters 42, 1698 (1979).
[27] M. Maffei, A. Dauphin, F. Cardano, M. Lewenstein, and P. Massignan, New Journal of Physics 20, 013023 (2018).
[28] A. D'Errico, F. Di Colandrea, R. Barboza, A. Dauphin, M. Lewenstein, P. Massignan, L. Marrucci, and F. Cardano, Physical Review Research 2, 023119 (2020).
[29] C. Vega, M. Bello, D. Porras, and A. González-Tudela, Physical Review A 104, 053522 (2021).
[30] A. Dutt, M. Minkov, I. A. Williamson, and S. Fan, Light: Science & Applications 9, 1 (2020).
[31] See Supplemental Material, which contains experimental details on the setup and measurement protocol, a detailed derivation of the effective Hamiltonian, fields amplitude and input-output relations.
[32] F. S. Piccioli, A. Szameit, and I. Carusotto, Physical Review A 105, 053519 (2022).
[33] J. Dalibard, F. Gerbier, G. Juzeliunas, and P. Öhberg, Reviews of Modern Physics 83, 1523 (2011).
[34] T. Ozawa and I. Carusotto, Physical Review Letters 118, 013601 (2017).
[35] G. Villa, Measuring the Topology of One-Dimensional Driven-Dissipative Chiral Lattices Through the Mean Chiral Displacement, Master's thesis, Trento University (2022), supervised by T. Ozawa and I. Carusotto.
[36] A. Dutt, L. Yuan, K. Y. Yang, K. Wang, S. Buddhiraju, J. Vučković, and S. Fan, Nature Communications 13, 1 (2022).
[37] Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Higashikawa, and M. Ueda, Physical Review X 8, 031079 (2018).
[38] D. Cheng, E. Lustig, K. Wang, and S. Fan, arXiv:2303.10545 (2023).
[39] M. Bello, G. Platero, J. I. Cirac, and A. González-Tudela, Science Advances 5 (2019).
[40] E. Kim, X. Zhang, V. S. Ferreira, J. Banker, J. K. Iverson, A. Sipahigil, M. Bello, A. González-Tudela, M. Mirhosseini, and O. Painter, Physical Review X 11, 011015 (2021).
[41] A. Ricottone, M. S. Rudner, and W. A. Coish, Physical Review A 102, 012215 (2020).
[42] N. Pernet, P. St-Jean, D. D. Solnyshkov, G. Malpuech, N. Carlon Zambon, Q. Fontaine, B. Real, O. Jamadi, A. Lemaître, M. Morassi, L. Le Gratiet, T. Baptiste, A. Harouri, I. Sagnes, A. Amo, S. Ravets, and J. Bloch, Nature Physics 18, 678 (2022).
[43] G. Li, L. Wang, R. Ye, Y. Zheng, D. W. Wang, X. J. Liu, A. Dutt, L. Yuan, and X. Chen, Light: Science & Applications 12, 1 (2023).

§ SUPPLEMENTAL MATERIAL: WAVEFUNCTION SPECTROSCOPY IN TOPOLOGICAL DIMER CHAINS WITH LONG-RANGE HOPPING

§.§ Description of the experimental method

§.§.§ Setup

The experimental setup is shown schematically in figure <ref>. All components are polarization maintaining, and an in-line polarizer is added inside the loop to further ensure proper alignment of the propagating electric field polarization with the optical axis of the electro-optic phase modulators (EOMs). The cavity free spectral range (FSR) is 10.03 MHz. The staggered spectrum of the cavity, necessary to emulate dimer chains, is obtained by coupling the originally degenerate clockwise and counter-clockwise modes of the loop with a 25:75 fiber coupler. The hybridization of these modes lifts their degeneracy, giving rise to symmetric and anti-symmetric supermodes split by 3.43 MHz. Each of these pairs of symmetric and anti-symmetric supermodes forms a unit cell (i.e. a dimer) in the synthetic frequency dimension. Since we are interested in Hamiltonians with chiral symmetry, i.e. where the hopping between sites on the same sublattice vanishes, we must impose a coupling mechanism that changes the parity of the supermodes. This is achieved by spatially separating the CW and CCW propagating fields using two circulators and modulating each path independently with a dedicated EOM. The two EOMs are driven with the same electrical signal, but with a π phase shift that ensures that symmetric supermodes only couple to anti-symmetric ones and vice-versa. Explicitly, to emulate dimer chains with long-range hoppings of amplitudes a, b, c and d (as specified in Fig. 1(a) of the main text), the voltages sent to the two EOMs are respectively:
V_ccw(t)=i/2[V_ae^-iδ t + V_be^-i(Ω-δ) t + V_ce^-i(Ω+δ)t + V_de^-i(2Ω-δ) t] + c.c.
V_cw(t)=-V_ccw(t)
where V_a,b,c,d are the voltage amplitudes associated with the couplings {a,b,c,d}, respectively (see the subsection below for a derivation of the exact relationships). Each amplitude can be complex, with the argument ϕ_a,b,c,d describing the phase of the corresponding hopping coefficient. In our setup, the different frequency components are f_a=ω_a/2π = 3.43 MHz, f_b=6.60 MHz, f_c=13.46 MHz, f_d=16.63 MHz. A 1:99 input-output fiber coupler is used for realizing transmission measurements. The transmitted intensity is measured using an InGaAs photodiode with a bandwidth of 10 connected to a 2 oscilloscope. The laser is a Grade 3 Rio Orion laser with a central wavelength of 1542.27 nm and a linewidth of 3.1. The frequency of the laser is modulated at 40, covering a range of more than 30 around the central emission frequency, so as to span several FSRs. The two lithium-niobate EOMs from iXblue have a bandwidth of 150 and low insertion losses. Two circulators ensure that the clockwise and counter-clockwise propagating fields are dispatched to EOM 1 and EOM 2, respectively.
A semiconductor optical amplifier from Thorlabs compensates losses from the EOMs and the other components in order to achieve a high quality factor. An optical filter with a bandwidth of 100 GHz is added to the loop to suppress amplification of modes far from the input laser frequency.

§.§.§ Technique

To obtain the band structure for a given set of parameters a, b, c, d, ϕ_c, and ϕ_d, the laser frequency is modulated by a staircase triangular waveform of frequency 40 and amplitude 0.25, in which each step has a duration of 10. On a given step, the laser frequency is constant. The electrical signal sent to the EOMs (Eq. (<ref>)) is divided into bursts of 10 that are synchronised with the rising edge of every step (see figure <ref>). For each step, the signal collected by the photodiode is sent to the oscilloscope: the first half of this signal (i.e. the first 5) is discarded to allow the system to stabilize, and the second half is sliced into fifty Brillouin zones (one Brillouin zone has a duration of 1/10.03 MHz ≈ 100 ns). Each horizontal slice of the band structures presented in the main text is obtained by averaging the signal from these fifty Brillouin zones at every step. This averaging process ensures that the band structures do not exhibit fringes associated with the beating modulation of the Dirac comb discussed in the main text. Furthermore, for each step, we extract the phase of g(k) at every k-point with the following protocol. Using the band structures measured above, we determine at what laser detuning a specific time bin (i.e. a specific column of the band structure corresponding to a given k) presents the maximum transmission. We take the photodiode signal measured at that specific laser detuning step and keep only the data at that time bin in each Brillouin zone. This gives a slowly oscillating signal at 3.43 MHz (examples of these signals are provided in Fig. 2(e) of the main text) that we can Fourier transform to extract the phase. This gives the phase of every point in the g(k) trajectories presented in the main text.

§.§.§ Calibration

Laser frequency characterisation: In our system, the frequency of the laser is tuned by applying a time-varying voltage to the laser. One then needs to perform a careful calibration to link the applied voltage and the output frequency of the laser. The general procedure is to consider the simplest possible configuration, i.e. a single loop with only one EOM and no CW-CCW fiber coupler; the spectrum in this case is a regular cavity spectrum with equidistant modes. The FSR of this simple loop is estimated by varying the driving frequency of the EOM until the measured transmission exhibits a well-defined and symmetric band structure. For such a simple 1D lattice, the band structure must follow a cosine function, and a small deviation of the driving frequency from the FSR results in an effective electric field and distorted bands (associated with Bloch oscillations). Once the driving frequency of the EOM leads to such symmetric bands, we have a precise estimation of the FSR and we can link the applied voltage to the laser frequency by performing a power-law fit on the many resonance peaks in the transmission that occur during half a period of the laser's voltage modulation. This calibration is then used to determine the frequency axis of all our band structures.

Quality factor: The quality factor of the cavity is estimated from the ratio between the resonance frequency ν_r and the resonance linewidth Δν. A Lorentzian fit yields a FWHM of Δν=0.075 MHz.
The quality factor is therefore Q = ν_r/Δν = 194.4 THz / 0.075 MHz ≈ 2.6×10^9 and the finesse is ℱ=133.7.

EOM signals used: We present below the amplitude and phase of the voltage signals sent to the CCW EOM for every experiment presented in this work. We use the formalism V_i = |V_i|e^iϕ_i with i={a,b,c,d}.

Figure 2 (SSH):
  𝒲=0:   |V_a|=1.33 (ϕ_a=0), |V_b|=0.91 (0), |V_c|=0.00 (0), |V_d|=0.00 (0)
  𝒲=1:   |V_a|=0.86 (0), |V_b|=1.37 (0), |V_c|=0.00 (0), |V_d|=0.00 (0)
Figure 3 (Extended):
  𝒲=-1:  |V_a|=0.48 (0), |V_b|=0.48 (0), |V_c|=1.47 (0), |V_d|=0.39 (0)
  𝒲=0:   |V_a|=0.86 (0), |V_b|=0.86 (0), |V_c|=0.37 (0), |V_d|=0.52 (π)
  𝒲=1:   |V_a|=0.95 (0), |V_b|=0.95 (0), |V_c|=0.41 (π), |V_d|=0.23 (0)
  𝒲=2:   |V_a|=0.57 (0), |V_b|=0.57 (0), |V_c|=0.00 (π), |V_d|=1.24 (π)
Figure 4 (Gauge-field):
  𝒲=2:        |V_a|=0.48 (0), |V_b|=0.48 (0), |V_c|=0.41 (π), |V_d|=1.09 (π)
  Transition: |V_a|=0.48 (0), |V_b|=0.48 (0), |V_c|=0.41 (π), |V_d|=1.09 (π/2)
  𝒲=1:        |V_a|=0.47 (0), |V_b|=0.47 (0), |V_c|=0.41 (π), |V_d|=1.09 (0)

§.§ Theoretical model

In this section we analytically derive the effective Hamiltonian of the system and its equations of motion.

§.§.§ Derivation of the dimer chains' Hamiltonian

Our system consists of an optical fiber loop acting as a cavity. The modes of this cavity are discrete and equally spaced by a free spectral range Ω, defined as Ω/2π = c/(n L), where n∼1.45 and L∼10 are the group index and length of our loop, and c is the speed of light in vacuum. These modes can propagate either in the clockwise (CW) or counterclockwise (CCW) direction and thus have two degrees of freedom: their frequency ω_m = mΩ and the direction of their propagation (either CW or CCW). Along the optical fiber the electric field is confined in one dimension, and can thus be decomposed as a linear combination of CW and CCW modes:
𝐄(x,t)= ∑_m α_m e^-ik_mx e^-iω_m t + β_m e^ik_mx e^-iω_m t + h.c.
where x is a periodic coordinate (x=x+L) defined along the optical fiber, α_m and β_m are respectively the amplitudes of the m-th CW and CCW modes, and k_m=mΩ/c'>0 with c'=c/n. Following the usual procedure of field quantization, we associate α_m and α^*_m with bosonic operators a_m and a^†_m, and similarly β_m and β^*_m with b_m and b^†_m. This quantum notation is useful to derive the equations of motion, but one should keep in mind that the modes considered in our experiment are classical coherent states. The degeneracy between the CW and CCW modes is lifted by coupling them with a 25:75 optical fiber coupler acting as a beam splitter (BS). The definition of the position coordinate x along the cavity used in the derivation of this beam-splitter Hamiltonian is schematically depicted in Fig. <ref>(a). The x axis is periodic, oriented along the CCW direction, and follows the outer fiber (i.e. the path followed by the fiber without the CW/CCW coupler). It first crosses d_1 at the bottom input-output ports of the CW/CCW coupler, then d_2 at the EOMs' position, and finally d_3 at the position of the top input-output ports of the CW/CCW coupler before reaching the input-output port of the cavity at x=L. The BS can destroy a photon in the CW m-th mode (respectively CCW m-th mode) entering the BS at the position x=d_3 and reinject it at the position x=d_1 in the CCW m-th mode (respectively CW m-th mode). The same holds for photons entering the BS at x=d_1 and exiting at x=d_3. All four possible processes are schematically depicted in Fig. <ref>(b), where the red and blue arrows respectively describe the CW (a_m) and CCW (b_m) modes. The momenta used in the phase factors of each mode are given by k_ccw = -k_cw = +mΩ/c'.
The Hamiltonian describing the effect of the beam splitter is given by:
H_BS = -∑_m[ g b_m e^+imΩd_1/c' a^†_m e^+imΩd_3/c' + g^* a_m e^-imΩd_3/c' b^†_m e^-imΩd_1/c' + g b_m e^+imΩd_3/c' a^†_m e^+imΩd_1/c' + g^* a_m e^-imΩd_1/c' b^†_m e^-imΩd_3/c']
with g the coupling strength of the coupler. Taking into account that e^iΩ L/c'=1 by definition of the free spectral range, we can rewrite this Hamiltonian as:
H_BS = -2 ∑_m[ g^* a_m b^†_m e^-imΩΔ/c' + g b_m a^†_m e^+imΩΔ/c']
with Δ = L-d_1-d_3 the difference between the total length of the loop L and d_1+d_3. If the CW/CCW coupler is located symmetrically with respect to the input-output coupler (i.e. at the same distance in the +x and -x directions), Δ=0. However, this is not true in general, either because of thermal fluctuations or due to the difference in fiber lengths. We absorb the influence of Δ in the definition of the coupling coefficient: g e^imΩΔ/c' = g̃. Hence, we can write H_BS in matrix form for the m-th pair of modes as:
H^(m)_BS = [ mΩ -2g̃; -2g̃ mΩ ]
where, with no loss of generality, we have assumed for simplicity that g̃ is real and positive. The eigenstates are symmetric (+) and antisymmetric (-) linear combinations of the CCW and CW modes: |m,±⟩ = 1/√(2)( |m,CCW⟩±|m, CW⟩ ). With this choice for g̃, the eigenfrequencies are ω_m,± = mΩ∓δ/2, with δ=4g̃ proportional to the coupling strength (in our case δ/2π=3.43 MHz), the lower (higher) frequency corresponding to the symmetric + (anti-symmetric -) combination of CW and CCW modes. We similarly define the associated bosonic operators c_m,±, c^†_m,±:
c_m,± = 1/√(2)[ b_m ± a_m].
Each of these supermodes acts as a lattice site along the synthetic frequency dimension. In order to emulate Hamiltonians with chiral symmetry, it is necessary to couple eigenmodes of opposite parity, i.e. symmetric with anti-symmetric and vice-versa. This requires modulating the CW and CCW modes independently; if they are driven with identical, in-phase electrical signals, only eigenmodes of the same parity will couple. As described in the main text, this sublattice symmetry is achieved by spatially separating CW and CCW modes with circulators and modulating them independently. The Hamiltonian describing the effect of the modulators is given by:
H_EOM = H_cw + H_ccw = 2η∑_m,n V_cw(t) a^†_m a_n e^2iπ(n-m) d_2/L + 2η∑_m,n V_ccw(t) b^†_m b_n e^2iπ(n-m) d_2/L,
where d_2 is the position of the modulators along the loop (in our case, d_2∼ L/2) and η is the electro-optical coupling coefficient, which can be approximated (for small eigenmode spacing m-n) <cit.> as η=Ω/(4 V_π), where V_π∼2 V is the switching voltage amplitude of our modulators. We impose V_ccw(t) = -V_cw(t)=V(t) to minimize (maximize) the coupling between states of the same (opposite) parity. This reduces the Hamiltonian to:
H_EOM = 2η V(t) ∑_m,n[ (-1)^n-m b^†_m b_n - (-1)^n-m a^†_m a_n].
We now express H_EOM in terms of the symmetric and anti-symmetric mode operators (c_m,±):
H_EOM = 2η V(t) ∑_m,n(-1)^n-m[ c^†_m,+c_n,- + c^†_m,-c_n,+].
Note that the dependence on Δ has vanished. The driving voltage V(t) sent to the EOM has a sinusoidal form as given in the main text:
V(t) = i/2[V_ae^-iδ t + V_be^-i(Ω-δ) t + V_ce^-i(Ω+δ)t + V_de^-i(2Ω-δ) t] + c.c.,
where V_a,b,c,d are complex. Injecting this expression of V(t) into H_EOM and neglecting non-resonant terms in the rotating-wave approximation (RWA), we obtain:
H_EOM = iη∑_m[ V_a c̃^†_m,-c̃_m,+ - V_b c̃^†_m,+c̃_m-1,- - V_c c̃^†_m+1,-c̃_m,+ + V_d c̃^†_m,+c̃_m-2,-] + h.c.,
where the operators are expressed in a rotating frame: c̃_m,± = c_m,± e^iω_m,±t.
In particular, note how in Eq. (<ref>) one clearly sees a π phase shift between the nearest-neighbor and long-range couplings that needs to be considered when defining V(t). Using the periodicity along the frequency axis, we can rewrite this Hamiltonian in reciprocal space by defining the Bloch modes:
d_k,± = ∑_m e^-ikmΩc̃_m,±,
where k describes the crystal momentum along this synthetic dimension. In this k-space, H_EOM becomes:
H_EOM = ∑_k H(k) = iη∑_k d^†_k,-d_k,+[ V_a + V_b^* e^ikΩ - V_c e^-ikΩ - V_d^* e^2ikΩ] + h.c.
This Hamiltonian can be expressed in matrix form as:
H_EOM = ∑_k ψ_k^†· h(k) ·ψ_k,    h(k) = [ 0 g(k); g^*(k) 0 ],    ψ_k = [ d_k-; d_k+ ],
where g(k)=|g(k)|e^iϕ(k)=iη[V_a + V_b^*e^ikΩ - V_ce^-ikΩ - V_d^*e^2ikΩ]. We can perform a gauge transformation corresponding to a π/2 phase rotation to remove the factor i (this gauge transformation is taken into account in the data analysis by phase shifting our modulated signal by π/2):
g(k)=η[V_a + V_b^*e^ikΩ - V_ce^-ikΩ - V_d^*e^2ikΩ].
This is the quantity plotted in the theory plots in Figs. 3 and 4 of the main text. Direct comparison with the function g(k) in the extended Hamiltonian presented in Eq. (3) of the main text yields the following relationships between the hopping coefficients (a,b,c,d) and the EOM voltages (V_a,b,c,d):
a = +iη V_a,   b^* = +iη V_b^*,   c = -iη V_c,   d^* = -iη V_d^*.
Note the π phase shift between the nearest-neighbor and long-range hopping terms, as in the main text. These are the relationships used for relating the voltage amplitudes to effective coupling coefficients, or vice-versa, as presented in all the figures of the main text. All the voltages used in this work are provided in Table 1.

§.§.§ Derivation of the electromagnetic field time evolution and time-resolved transmission

The evolution of the electromagnetic field confined in the optical fiber loop, when driven with a CCW input field, is given by the set of Langevin equations:
ȧ_m = i[H(t),a_m]-γ/2 a_m
ḃ_m = i[H(t),b_m]-γ/2 b_m - i√(κ) s_in(t)
where H(t) = H_BS+H_EOM(t), γ is the decay rate in the loop, κ is the input-output coupling strength and s_in=Fe^-iω_Lt is the input field. Going to the coupled basis, this set of differential equations becomes, in the rotating frame:
d c̃_m,±/dt = i[H_EOM(t),c̃_m,±] - γ/2 c̃_m,± - i√(κ/2) e^iω_m,±t s_in.
Then, transferring to k-space, we have:
ḋ_k,± = i[ H(k), d_k,±] - γ/2 d_k,± - i√(κ/2)∑_m e^-ikmΩ e^iω_m,±t s_in.
Finally, we can now go to the basis that diagonalizes H(k):
u_k,± = 1/√(2)[ d_k,-± e^iϕ(k) d_k,+]
with eigenenergies ω_k,± = ±|g(k)| and ϕ(k) defined as the phase of g(k). In this basis, the equations of motion become:
u̇_k,± = ∓ i|g(k)|u_k,± - γ/2 u_k,± - i√(κ)/2 s_in[ e^+iδ/2 t± e^-iδ/2 t + iϕ(k)] ∑_m e^imΩ(t-k)
Integrating this last equation over time yields:
u_k,±(t) = u_k,±(t=0) - i√(κ)/2∫_0^t dt' e^(± i|g(k)|+ γ/2)(t'-t) s_in(t') [ e^+iδ/2 t'± e^-iδ/2 t' e^iϕ(k)]∑_m e^imΩ(t'-k)
where we can assume u_k,±(t=0) = 0. Under the change of variable t''=t-t', we can write this integral as:
u_k,±(t) = -i√(κ)/2[ C_1^(±)(t) ± C_2^(±)(t)]
where
C_1^(±)(t) = Fe^-iω_Lt e^+iδ/2 t∑_m[e^imΩ(t-k)∫_0^t dt'' e^-γ/2 t'' e^i(ω_L - (mΩ +δ/2±|g(k)|))t'']
C_2^(±)(t) = Fe^-iω_Lt e^-iδ/2 t e^+iϕ(k)∑_m[e^imΩ(t-k)∫_0^t dt'' e^-γ/2 t'' e^i(ω_L - (mΩ -δ/2±|g(k)|))t'']
These integrals are straightforwardly solved. We can assume the upper bound to be much larger than the cavity's lifetime, i.e. t≫γ^-1.
In the experiment, this is realized by probing the field at times long enough to let the field reach a steady state (in our case, several microseconds after the start of each laser detuning step). This leads to:
C_1^(±)(t) = -Fe^-iω_Lt e^+iδ/2 t∑_m[e^imΩ(t-k) 1/(i(ω_L - (mΩ +δ/2±|g(k)|)) - γ/2)]
C_2^(±)(t) = -Fe^-iω_Lt e^-iδ/2 t e^iϕ(k)∑_m[e^imΩ(t-k) 1/(i(ω_L - (mΩ -δ/2±|g(k)|)) - γ/2)].
In this form, C_1^(±)(t) and C_2^(±)(t) exhibit clear resonances with a linewidth γ/2 every time the laser frequency reaches ω_L=mΩ +δ/2 ±|g(k)| and ω_L=mΩ -δ/2 ±|g(k)|, respectively. The interpretation of these periodic resonances is that the band structure of the dimer chains repeats itself along the synthetic frequency dimension at every eigenmode of the bare cavity. Since the linewidths of the laser and of the bare modes are much smaller than Ω and δ, these expressions can be further simplified. For instance, if we consider a drive frequency resonant with a band emerging from the m̅-th symmetric mode (i.e. ω_L≃m̅Ω - δ/2), we can simplify the equation for u_k,± by neglecting completely C_1^(±) and all terms in the summation in C_2^(±) except the one associated with m̅.

§.§.§ Input-output relations

Finally, in order to determine the temporal profile of the transmitted field, we derive the field amplitude at the photodiode with an input-output relation where only the CCW modes are coupled to the output port:
s_pd = s_in - i√(κ)∑_m b_m
= s_in - i√(κ/2)∑_m [c_m,+ e^-i(mΩ -δ/2)t + c_m,- e^-i(mΩ +δ/2)t]
= s_in - i√(κ/2)∑_m,k[d_k,+ e^-i(mΩ -δ/2)t e^ikmΩ + d_k,- e^-i(mΩ +δ/2)t e^ikmΩ]
= s_in - i√(κ)/2∑_m,k[(u_k,+ - u_k,-) e^-i(mΩ -δ/2)t e^ikmΩ e^-iϕ(k) + (u_k,+ + u_k,-) e^-i(mΩ +δ/2)t e^ikmΩ].
As mentioned above, we assume the laser to be in the vicinity of the m̅-th symmetric mode, so that C_1^(±)(t) can be neglected and only a single term is retained in the summation of C_2^(±)(t). On top of this, we assume that the laser is resonant with the ± Bloch band, so that all contributions from the other band can be neglected. Furthermore, we consider that the laser resonantly excites a single state in this band, whose wavevector k is determined by a resonance condition between the laser frequency and the band dispersion ω_k,±,
ω_L = m̅Ω - δ/2 ±|g(k)|.
Putting everything together, this gives the final expression for the field amplitude:
u_k,±(t) = ∓ i (√(κ)/γ) F e^-iω_Lt e^-ikm̅Ω e^i(m̅Ω -δ/2) t e^iϕ(k)
Note that this approximation is only accurate when the gap ω_g = |ω_k,+ - ω_k,-| is larger than the linewidth γ. When the gap is comparable to the linewidth, deviations appear in the extracted phase of the field and a small kink in the trajectory of g(k) (as observed in Fig. 4d of the main text). The field at the photodiode is given by:
s_pd^(±) = s_in ∓ i√(κ)/2∑_m u_k,± e^ikmΩ[e^-i(mΩ -δ/2)t e^-iϕ(k)± e^-i(mΩ +δ/2)t],
where we consider a single k in resonance with the laser. We now inject the u_k,± obtained above and obtain:
s_pd^(±) = s_in - κ/(2γ) Fe^-iω_Lt∑_m' e^im' Ω(t-k)[ 1 ± e^-iδ t e^iϕ(k)]
with m'=m - m̅. Then, using the identity
D_T(t-a) = 1/T∑_m e^i2π/T m(t-a)
where D_T(t) is a Dirac comb of period T,
D_T(t)=∑_n=-∞^∞δ(t-nT),
we obtain:
s_pd^(±) = s_in - κ/(2γ) F (2π/Ω) e^-iω_Lt D_T(t-k)[ 1 ± e^-iδ t e^iϕ(k)]
with T=2π/Ω. Hence the intensity measured on the photodiode for a mode |k,±⟩ is given by:
I_pd^(±) = |F|^2[1- (κ/γ)(2π/Ω) D_T(t-k)(1 ±cos(δ t - ϕ(k)))]+𝒪(u_k^2)
This clearly demonstrates that the transmission presents a pulse train, associated with the Dirac comb of period 2π/Ω, which is modulated in time with a period 2π/δ and a phase ϕ(k).
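To make this prediction concrete, the short Python sketch below (not part of the original analysis) synthesizes a finite-linewidth version of such a pulse train for the upper band and reads the phase back out from its slow modulation at δ, anticipating the extraction protocol described next. All numerical values here (the pulse contrast, the bin position, and the target phase) are illustrative placeholders rather than the experimental settings.

```python
import numpy as np

# Illustrative (placeholder) parameters -- not the experimental values.
Omega = 2 * np.pi * 10.03e6   # FSR (rad/s)
delta = 2 * np.pi * 3.43e6    # supermode splitting (rad/s)
gamma = 2 * np.pi * 75e3      # cavity linewidth (rad/s), sets the pulse width
depth = 0.4                   # overall pulse contrast, lumping kappa/gamma factors
phi_k = 0.7                   # assumed Bloch phase phi(k) to be recovered
k_bin = 20e-9                 # temporal position of the resonant k within a period (s)

T = 2 * np.pi / Omega         # round-trip period, ~100 ns
n_per = 50
t = np.arange(0, n_per * T, 1e-10)

# Dirac comb broadened into pulses of width ~1/gamma (finite cavity linewidth).
comb = sum(np.exp(-gamma * np.abs(t - (k_bin + n * T)) / 2) for n in range(n_per))

# Photodiode intensity for the upper (+) band: dips modulated as 1 + cos(delta*t - phi(k)).
I_pd = 1.0 - depth * comb * (1 + np.cos(delta * t - phi_k))

# Phase read-out: sample once per period at the k bin and Fourier transform the slow
# modulation at delta; the overall minus sign accounts for the dips of this band.
t_n = k_bin + T * np.arange(n_per)
I_n = np.interp(t_n, t, I_pd)
phase = np.angle(-np.sum(np.exp(1j * delta * t_n) * (I_n - I_n.mean())))
print(f"recovered phi(k) = {phase:.2f} rad (input {phi_k})")
```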
Measuring the position of the Dirac comb within each single period at every laser detuning allows one to extract the band structure, whereas the phase of its slow modulation across the different periods provides direct information on the relative phase of the two sublattice components of the Bloch mode wavefunctions. These features are used in this work for extracting experimentally the Bloch wavefunction for every k point and, then, the winding number. This is discussed in the next Section. For the sake of completeness, it is important to recall that this discussion assumes that a single k is on resonance with the laser. If more than one k is resonant with the laser (e.g. ±k in systems with time-reversal symmetry), s_pd will consist of several pulse trains corresponding to the different k's. All these pulse trains will have the same period T, but their temporal positions will be shifted in time according to their respective values of k. Note also that the shape of each pulse is exactly a δ-function in the limit of a small cavity linewidth γ. In the general case, the finite γ leads to a broadening of the pulses in time, as visible in Fig. <ref>(a,b).

§.§ Experimental extraction of ϕ(k)

In order to obtain ϕ(k) from the experimental data, we perform the following procedure. For the sake of concreteness, we focus on the lower Bloch band for these calculations. First, we determine from our experimental band structure (e.g. in Figs. 2(a)-(b) of the main text) the laser detuning at which each k mode is excited. Then, for each k point we perform the following discrete Fourier transform using the intensity measured on the photodiode for the laser detuning step corresponding to the chosen k value:
ℐ(k)=∑_n e^iδ t_n^(k) I_pd(t_n^(k))
with t_n^(k)=k+nT. In our experiment, δ/2π=3.43 MHz. The discrete Fourier transform is realized by considering only the time bins associated with the specific k-point considered, i.e. t=k+nT. The phase ϕ(k) is extracted by taking the argument of ℐ(k): ϕ(k)=arg[ℐ(k)]. The complete shape of ϕ(k) across the Brillouin zone is obtained by repeating this procedure for every k.

§.§.§ Derivation of the stationary and non-stationary transmission from input-output theory

This section gives a brief derivation of the output field dynamics from input-output theory for a quadratic Hamiltonian with a time-dependent drive. In addition to the stationary (time-local) contribution, we include the non-stationary (time non-local) contributions that are present due to the interaction picture. The resulting formalism can then be applied directly to the problem of the long-range SSH model simulation presented in the main text. We begin from the input-output relation for the field r_out(t), coupled to a system with coupling strength ∝√(κ) via a system observable 𝒪(t):
r_out(t)=r_in(t)-i√(κ)𝒪(t).
The system observable is assumed to be related to `bare' mode operators a_α via a unitary transformation:
𝒪(t)=∑_α U_α o^*<a_α(t)>.
For example, in the case of a single optical fiber loop of length L, we have α=mν, with m an integer and ν=CW(CCW) for clockwise (counterclockwise) modes with frequencies ω_m=mΩ (Ω=2π c/L is the free spectral range for a speed of light c). If we assume that 𝒪 describes the field in a narrow region of space, then we expect to couple approximately equally to N≫ 1 modes m. If we further assume coupling only to the clockwise modes, then we have U_α o^*=U_mν,o^*=1/√(N)δ_ν,CW.
The bare mode operators obey a quantum Langevin equation ȧ_α(t) = i[H_S(t),a_α(t)]-γ/2a_α(t)-i√(κ)U_α or_in(t), where, for simplicity, the decay rate γ is assumed equal for all modes and where the system Hamiltonian has a time-independent piece H_0, in addition to a contribution V(t) from time-dependent driving: H_S(t)=H_0+V(t). We diagonalize H_0 via a unitary transformation: H_0=∑_βω_β^0 b_β^† b_β; b_β = ∑_α<β|α>a_α. Transforming to the interaction picture allows us to write a simplified equation of motion for b̃_β(t)=e^iω^0_β tb_β(t): ḃ̃̇_β(t) = i[Ṽ(t),b̃_β(t)]-γ/2b_β(t)-i√(κ)e^iω_β^0 t∑_α<β|α>U_α or_in(t). The drive terms are chosen so that Ṽ(t) has a time-independent piece H describing a target Hamiltonian (to be used for simulation) and the remaining time-dependent terms are rapidly oscillating (counter-rotating). These terms are neglected in a rotating-wave approximation: Ṽ(t) = e^iH_0 tV(t)e^-iH_0 t=H+counter-rot. The Hamiltonian H is finally diagonalized via modes d_γ: H = ∑_γω_γ d_γ^† d_γ; d_γ = ∑_β<γ|β>b̃_β. Within the rotating-wave approximation, these modes obey a Langevin equation: ḋ_γ(t) = -iω_γ d_γ(t)-γ/2d_γ(t)-i√(κ)𝖴_γ o(t)r_in(t). The term 𝖴_γ o(t) accounts for the unitary transformations: 𝖴_γ o(t)=∑_αβ<γ|β>e^iω^0_β t<β|α>U_α o. The equation for d_γ [Eq. (<ref>)] can be readily integrated, giving d_γ(t) = e^-(iω_γ+γ/2)td_γ(0)-i/√(κ)∫_0^t dt'χ_γ(t-t')𝖴_γ o(t')r_in(t'), with χ_γ(t)=κ e^-(iω_γ+γ/2)t. The system observable 𝒪(t) is related to <d_γ(t)> by reversing the sequence of unitary transformations: 𝒪(t)=∑_γ𝖴_γ o^*(t)<d_γ(t)>. Now, using that <d_γ(0)>=0, and inserting the result from Eq. (<ref>) into Eq. (<ref>) and inserting this into the input-output relation [Eq. (<ref>)] gives r_out(t)=r_in(t)-∫_0^t dt'χ(t,t') r_in(t'), where the susceptibility is given by χ(t,t') = ∑_γ𝖴^*_γ o(t)χ_γ(t-t')𝖴_γ o(t'). Due to the time dependence of the interaction-picture transformation, the response is nonlocal in time, leading to non-stationary behavior (persistent oscillations). We can apply the formulas given above directly to the problem of simulating the long-range SSH model via a “figure-8" optical fiber loop, where the input and output fields couple directly to only the clockwise mode. To do this, we make the following associations for the quantum numbers: α → mν, ν=CW,CCW β → mσ, γ → kσ'. The modes α label the decoupled clockwise and counterclockwise modes of frequency ω_m=mΩ. The modes β describe the even-parity (σ=+) and odd-parity (σ=-) linear combinations of CW/CCW modes: b_β→ b_m±=1/√(2)(a_m,CW± a_m,CCW). Finally, the modes γ correspond to plane-waves that diagonalize the long-range SSH model Hamiltonian H: d_γ→ d_kσ'=1/√(2N)∑_m e^-ikm(b̃_m++σ'e^iϕ(k)b̃_m-). The relevant eigenfrequencies are ω_β^0 → ω_m±=mΩ±δ/2, ω_γ → ω_k±=±|g(k)|. Combining the results above, we can read off the unitaries (noting that the CCW mode does not couple): U_α o → 1/√(N)δ_ν,CW, <β|α> → <mσ|m,CW>=1/√(2), e^iω_β^0t → e^iω_mσt=e^iΩ mt+σδ/2t, <γ|β> → <kσ'|mσ>=1/√(2N)e^-ikm(δ_σ,++σ'e^iϕ(k)δ_σ,-). With these identifications, it is straightforward to evaluate 𝖴_γ o(t) from Eq. (<ref>), giving 𝖴_γ o(t)→𝖴_kσ(t)=1/2D(k-Ω t)(e^iδ/2t+σ e^i[ϕ(k)-δ/2t]), with D(k-Ω t) = 1/N∑_m e^-i(k-Ω t)m. This function produces a Dirac comb that selects out the k,t satisfying k=Ω t (alternatively, t=k/Ω+2nπ/Ω, with n an integer). By direct substitution, we can now evaluate the susceptibility given in Eq. (<ref>). This involves two Dirac combs and a sum over k modes. 
After performing the sum over k, the first Dirac comb selects out k=Ω t: ∑_k D(k-Ω t)F(k) ≃ F(Ω t). Subject to this constraint (k=Ω t), the second Dirac comb gives D(k-Ω t')=D[Ω(t-t')]→2π/Ω∑_nδ(t-t'-2nπ/Ω). This second Dirac comb has the effect of coarse-graining in time under the convolution: ∫_0^t dt' D(k-Ω t')G(t')→∑_n≥ 02π/ΩG(t-2nπ/Ω)θ(t-2nπ/Ω), where θ(x) is a Heaviside function. If G(t) is slowly varying on the time scale 2π/Ω, the Riemann sum above can be approximately restored to an integral: ∫_0^t dt' D(k-Ω t')G(t')≃∫_0^t dt' G(t-t')=∫_0^t dt' G(t'). This replacement is justified here, provided δ≪Ω, |g(k)|≪Ω. Applying the manipulations described above gives the susceptibility χ(t,t') = χ_1(t-t')+χ_2(t,t'), where the time-local and time-nonlocal contributions are χ_1(t) = .κ e^-γ t/2cos(|g(k)|t)cos(δ/2t)|_k=Ω t, χ_2(t,t') = .-iκ e^-γ (t-t')/2sin(|g(k)|(t-t'))cos(δ/2(t+t')-ϕ(k))|_k=Ω t. The term χ_2 leads to persistent oscillations in the transmitted field, while the term χ_1 will lead to a constant, stationary transmission after transients have died out. The stationary (time-averaged) transmission in this regime is found from: r_out(t)=lim_T→∞1/T∫_0^T d t_0 r_out(t; t_0), where r_out(t;t_0) = r_in(t-t_0)-∫_t_0^t dt'χ(t,t')r_in(t'). This gives a time-averaged transmission: T(ω) = r_out(ω)/r_in(ω) = 1-χ_1(ω) = 1-κ/4∑_σσ'1/i[ω-(σ|g(k)|+σ'δ/2)]+γ/2. For a resonant input field, e.g. r_in(t)=e^-iω tr_0 with ω=±|g(k)|+δ/2, there will generally be a contribution to the output intensity with persistent oscillations: I(t) = |r_out(t)|^2 ≃ |r_in(t)|^2+2Re[-i√(κ)𝒪(t) r_in^*(t)] = |r_0|^2+Δ I_1+Δ I_2(t), where Δ I_1 = -2|r_0|^2Re{∫_0^t dt'e^iω(t-t')χ_1(t-t')}≃ -|r_0|^2Reχ_1(ω). In Eq. (<ref>), we assume the contribution from the system observable is small compared to the input field, √(κ)|𝒪(t)|≪ |r_in|. The approximate result on the right applies for t≫γ^-1 and we have neglected a principal-value contribution that should be negligible relative to the contribution given above for γ≪ω. This gives the intensity I(t) ≃ |r_0|^2ReT(ω)+Δ I_2(t) (t≫γ^-1, r_in(t)=r_0e^-iω t). The non-stationary term is given by Δ I_2(t) = -2|r_0|^2Re{∫_0^t dt'e^iω(t-t')χ_2(t,t')}. Neglecting transients for t>2/γ, and neglecting off-resonant corrections for γ≪δ, we find Δ I_2(t) = ∓ |r_0|^2κ/γcos(δ t-ϕ(k)-θ_±(k))/√(1+(γ/4|g(k)|)^2); θ_± = ±arctan(γ/4|g(k)|). For γ≪ |g(k)|, we can extract ϕ(k) without any adjustment from the phase of the Fourier coefficient corresponding to frequency δ. However, for γ≳ |g(k)| (e.g., near a gap closure), the phase will be shifted by θ_±(k) and the visibility of non-stationary oscillations will be suppressed. Dividing this term into Fourier components gives: Δ I_2(t) = I^*_±(δ,k)e^iδ t+I_±(δ,k)e^-iδ t, σ=±. We can thus reconstruct an approximation for the complex function g(k): g̃_±(k) = |g(k)|e^iϕ̃_±(k) where |g(k)| can be determined from the gap in the stationary transmission spectrum, and where the phase can be determined from the phase of the Fourier coefficients I_±(δ,k): ϕ̃_±(k) = arg[∓ I_±(δ,k)] =ϕ(k)±arctan(γ/4|g(k)|). See Fig. <ref> for an example of the transmission and expected (theoretical) g̃_±(k) for the gauge-field model near a topological phase transition (θ=π/2). When |g(k)|→ 0, any non-stationary contributions will likely be due to corrections to the rotating-wave approximation. These corrections will dominate whenever |Δ I_2(t)|γ/(|r_0|^2κ)≲ |a|/δ (for example, near a gap closure). 
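As a concrete illustration of these last expressions, the minimal Python sketch below evaluates the stationary transmission 1-χ_1(ω) on resonance and the systematic shift θ_+ = arctan(γ/4|g(k)|) of the extracted phase for a few values of |g(k)|; it shows how the correction grows as the gap shrinks toward the linewidth. All parameter values are placeholders chosen for illustration, not the experimental ones.

```python
import numpy as np

# Placeholder parameters (angular frequencies in rad/s), for illustration only.
delta = 2 * np.pi * 3.43e6
gamma = 2 * np.pi * 75e3
kappa = 0.2 * gamma

def T_stationary(omega, abs_g):
    """Time-averaged transmission 1 - chi_1(omega) for a state with |g(k)| = abs_g."""
    return 1.0 - (kappa / 4) * sum(
        1.0 / (1j * (omega - (s1 * abs_g + s2 * delta / 2)) + gamma / 2)
        for s1 in (+1, -1) for s2 in (+1, -1))

def phase_shift(abs_g):
    """Systematic shift theta_+ = arctan(gamma / 4|g(k)|) of the extracted phase."""
    return np.arctan(gamma / (4 * abs_g))

for abs_g in 2 * np.pi * np.array([1.0e6, 0.2e6, 0.05e6]):
    dip = abs(T_stationary(abs_g + delta / 2, abs_g))   # depth of the on-resonance dip
    print(f"|g|/2pi = {abs_g / (2 * np.pi * 1e6):.2f} MHz: "
          f"|T|_res = {dip:.3f}, theta_+ = {phase_shift(abs_g):.3f} rad")
```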
§ LONG-RANGE SSH HAMILTONIAN

This appendix collects a re-derivation of the long-range SSH Hamiltonian and its eigenstates, fixing the conventions used in the input-output calculation above.
H_S(t)=H_0+V(t)
H_0=∑_m,ν=CW,CCW(ω_mν a_mν^† a_mν+δ/2 a_mν^† a_mν̅)
For an electric field E(x,t)∝∑_m e^ik_m x a_m,CW(t)+e^-ik_m x a_m,CCW(t)+h.c., with k_m=2π m/L, we model the EOM modulation at x=x_0=L/2 with
V(t) = ∑_m,n,ν v_ν(t)(-1)^m-n a_mν^† a_nν = ∑_m,n(-1)^m-n v(t)(a_m,CW^† a_n,CW-a_m,CCW^† a_n,CCW),
v_CW(t)=-v_CCW(t)=v(t).
New modes:
b_m± = 1/√(2)(a_m,CW± a_m,CCW)
H_0=∑_mσ ω_mσ b_mσ^† b_mσ;   ω_m±=mΩ±δ/2.
V(t) = ∑_mn(-1)^m-n v(t)(b_m+^† b_n-+b_m-^† b_n+)
Drive frequencies:
ω_a = δ = ω_m,+-ω_m,-,   ω_b = Ω-δ = ω_m+1,--ω_m,+,   ω_c = Ω+δ =ω_m+1,+-ω_m,-,   ω_d = 2Ω-δ=ω_m+2,--ω_m,+.
Define drive amplitudes:
v(t) = ae^-iω_a t-be^-iω_b t-ce^-iω_c t+de^-iω_d t+c.c.,
where a,b,c,d are complex numbers and where the "-" signs in front of the terms ∼ b,c compensate for the sign change in coupling states with m-n odd. In the interaction picture:
Ṽ(t) = e^iH_0 t V(t) e^-iH_0t = ∑_mn (-1)^m-n v(t) e^i(ω_m,+-ω_n,-)t b_m+^† b_n- + h.c.
Rotating-wave approximation:
Ṽ(t) = H + counter-rot.
We neglect the counter-rotating terms here for |v(t)|≪δ (RWA). The surviving terms give
H = ∑_m (a b_m+^† b_m- + b b_m+1,-^† b_m,+ + c b_m+1,+^† b_m,- + d b_m+2,-^† b_m,+ + h.c.)
Fourier transform b_mσ=1/√(N)∑_k e^ikm b_kσ, then
H = ∑_k ψ_k^†· h(k) ·ψ_k,   h(k) = [ 0 g(k); g^*(k) 0 ];   ψ_k = [ b_k+; b_k- ],
where g(k)=a+e^ik b^*+e^-ik c+e^i2k d^*=|g(k)|e^iϕ(k). The Hamiltonian matrix h(k) has eigenvectors (1, ± e^-iϕ(k))^T/√(2):
h(k)·1/√(2)[ 1; ± e^-iϕ(k) ]=± |g(k)|·1/√(2)[ 1; ± e^-iϕ(k) ]
The matrix can be diagonalized with a unitary:
U=1/√(2)[ 1 1; e^-iϕ(k) -e^-iϕ(k) ];   U^†· h(k)· U = [ |g(k)| 0; 0 -|g(k)| ],
and this allows us to write the Hamiltonian as diagonal in terms of operators d_kσ:
[ d_k+; d_k- ] = U^†·[ b_k+; b_k- ]=1/√(2)[ 1 e^iϕ(k); 1 -e^iϕ(k) ]·[ b_k+; b_k- ],
H = ∑_k,σ σ|g(k)| d_kσ^† d_kσ
In terms of the site operators:
d_kσ = 1/√(N)∑_m e^-ikm(b_m+ + σ e^iϕ(k) b_m-)
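For completeness, the following minimal Python sketch (not from the original analysis) evaluates g(k) directly from the complex EOM voltage amplitudes, using the off-diagonal element derived above, and extracts the winding number from the accumulated phase of g(k) across the Brillouin zone. The amplitudes used in the examples are taken from Table 1, and η is set to 1 since an overall real scale does not affect the winding.

```python
import numpy as np

def g_of_k(k, Va, Vb, Vc, Vd, eta=1.0):
    """g(k) = i*eta*(V_a + V_b* e^{ik} - V_c e^{-ik} - V_d* e^{2ik}) (pre-gauge form)."""
    return 1j * eta * (Va + np.conj(Vb) * np.exp(1j * k)
                       - Vc * np.exp(-1j * k) - np.conj(Vd) * np.exp(2j * k))

def winding_number(Va, Vb, Vc, Vd, nk=4001):
    """Winding of phi(k) = arg g(k) as k sweeps the Brillouin zone once."""
    k = np.linspace(-np.pi, np.pi, nk)
    phi = np.unwrap(np.angle(g_of_k(k, Va, Vb, Vc, Vd)))
    return int(np.round((phi[-1] - phi[0]) / (2 * np.pi)))

# Complex voltage amplitudes V_i = |V_i| e^{i phi_i} taken from Table 1.
print(winding_number(1.33, 0.91, 0.0, 0.0))                       # SSH row, expect W = 0
print(winding_number(0.86, 1.37, 0.0, 0.0))                       # SSH row, expect W = 1
print(winding_number(0.57, 0.57, 0.0, 1.24 * np.exp(1j * np.pi)))  # extended row, expect W = 2
```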
http://arxiv.org/abs/2307.02607v1
20230705190119
Role of Bay of Bengal Low Pressure Systems in the Formation of Mid-Tropospheric Cyclones over the Arabian Sea and Western India
[ "Pradeep Kushwaha", "Jai Sukhatme", "Ravi S. Nanjundiah" ]
physics.ao-ph
[ "physics.ao-ph" ]
Arabian Sea Mid-Tropospheric Cyclones (MTCs), which are responsible for extreme rainfall events in western India, often coincide with monsoon low-pressure systems (LPS) over the Bay of Bengal. However, the role of Bay of Bengal LPSs in the formation of Arabian Sea MTCs remains unclear. This study utilizes the Weather Research and Forecasting Model (WRF) to investigate the atmospheric connection between the two basins. By introducing a balanced bogus vortex over the Bay of Bengal, cyclonic systems are induced over the Arabian Sea in the majority of ensemble members, exhibiting characteristics consistent with observations. As the Bay of Bengal vortex moves westward, the middle tropospheric trough deepens, horizontal wind shear increases, the low-level Arabian Sea stable inversion layer weakens, and the middle troposphere moisture content increases over western India and the northeast Arabian Sea. Subsequently, MTC genesis occurs along the western edge of the trough within 2-4 days of model integration over the northeast Arabian Sea. Vorticity budget analysis highlights the critical role of vorticity advection and tilting during the initial 24 hours of MTC genesis, while vortex stretching becomes the dominant vorticity source during rapid intensification. To further substantiate these findings, a mechanism denial experiment is conducted using a real-world instance of a coexisting Arabian Sea MTC and Bay of Bengal LPS, which was replicated in the model. In the experiment, conditions unfavorable for LPS genesis are created by cooling and drying the Bay of Bengal. The results demonstrate that when the Bay of Bengal LPS does not develop or intensify, the Arabian Sea MTC fails to form. This study presents compelling evidence for the significant influence of Bay of Bengal low-pressure systems on the formation of severe weather-inducing MTCs over the Arabian Sea and western India.

§ INTRODUCTION

Many extreme rainfall events over western India are associated with Mid-Tropospheric Cyclones (MTCs) <cit.>. These MTCs are characterized by their maximum intensity in the middle troposphere and a relatively weak signature in the lower troposphere. Regions in western India, particularly Maharashtra and Gujarat, including densely populated cities like Mumbai, experience exceptionally heavy rain exceeding 500 mm during each monsoon season due to the influence of MTCs <cit.>. Despite their significant contribution to the seasonal rainfall in this area and their crucial role in extreme rainfall events, the precise mechanisms behind the genesis of MTCs over the Arabian Sea and western India remain poorly understood. Some preliminary insights into the prevailing conditions during MTC genesis over the Arabian Sea have been obtained from detailed observations of the July 1963 MTC during the International Indian Ocean Expedition. The U.S. Weather Bureau Research Flight Facility (RFF) aircraft, in collaboration with the Woods Hole Oceanographic Institution, extensively explored the Arabian Sea from the surface to about 14 km between June and July 1963, capturing the development of an MTC within this observational network <cit.>. Specifically, after the establishment of the monsoon circulation, a warm-core monsoon low formed on June 24 around 15^∘N, 97^∘E in the Bay of Bengal.
Over the next two days, this system moved northwestwards, crossed the coast of the East Indian state of Odisha on June 26, and remained stationary for the following three days over the east coast of India. During this time, Rawin reports from Mumbai and Ahmedabad showed an enhancement of cyclonic shear in the 700-500 hPa layer. The strengthening of westerlies south of 20^∘N and the extension of easterlies down to 850 hPa north of 20^∘N were also observed. Subsequently, the genesis of an MTC was observed within the shear zone on June 28 over the Konkan coast and the northeast Arabian Sea. The formation of the MTC led to the development of an east-west trough in the middle troposphere, extending from the Arabian Sea to the Bay of Bengal through peninsular India, which coincided with a significant active phase of the Indian monsoon from July 2 to 10. During the formation of the MTC, it was noted that desert air from the north and west was overriding the moist surface layer <cit.>. Ship and aircraft observations revealed the presence of extensive stratocumulus over the Arabian Sea west of 65^∘E, indicating the trapping of moisture in the lowest 2 km under a strong temperature inversion caused by the differential temperature advection of desert air over cold marine air <cit.>. A dropsonde released at 500 hPa during the RFF flight confirmed the presence of an inversion at 900 hPa. This low-level inversion over the northeast Arabian Sea disappeared as soon as the MTC intensified. Furthermore, a significant increase in moisture content within the 700 hPa to 500 hPa layer was observed during the genesis of the MTC <cit.>. Another crucial aspect noted during the genesis of the July 1963 MTC was the preexistence of a low-pressure system (LPS) over the Bay of Bengal and East India <cit.>. The intensification and westward motion of the Bay of Bengal LPS were accompanied by the formation of a shear zone and trough over the Arabian Sea, which extended up to the Bay of Bengal. This precedence and coexistence of the Bay of Bengal LPS during MTC genesis were further confirmed in three additional cases of MTCs <cit.>; specifically, a time series of vorticity over the Arabian Sea, central India, and the Bay of Bengal showed that vorticity and rainfall over the Bay of Bengal and Central India preceded rainfall over western India. Recent studies have analyzed heavily precipitating MTCs documented in India Meteorological Department monsoon summaries and have found that 90% of them formed in the presence of a Bay of Bengal LPS <cit.>. Additionally, detailed objective tracking and classification of MTCs over the Indian region revealed that 83% of in-situ Arabian Sea MTC formation occurred with a preexisting LPS over the Bay of Bengal <cit.>. Moreover, the Arabian Sea MTCs that coexisted with the Bay of Bengal cyclonic systems constituted the rainiest class (Type-2a) of synoptic systems over western India <cit.>. While efforts have been made to understand MTC genesis in the Arabian Sea from the barotropic and baroclinic instability of the mean flow <cit.>, the aforementioned observational work has highlighted the significant occurrence of rainy MTCs in conjunction with a preceding Bay of Bengal LPS <cit.>. However, it remains unclear whether the preexistence of cyclonic systems in the Bay of Bengal during the genesis of MTCs is merely coincidental or if there exists a dynamic linkage between the formation of systems in both basins.
To address this, our study investigates the dynamical connection between MTC formation in the Arabian Sea and LPS or cyclonic systems in the Bay of Bengal using reanalysis data and numerical experiments with the Weather Research and Forecasting Model (WRF). Gaining a comprehensive understanding of the role played by Bay of Bengal LPSs in the formation of Arabian Sea MTCs, as well as their impact on rainfall intensity and system maintenance, is of paramount importance for enhancing prediction capabilities and deepening our understanding of extreme rainfall events associated with MTCs. Such insights hold significant potential for advancing scientific knowledge and improving the accuracy of forecasts and warnings related to these meteorological phenomena. The outline of this paper is as follows: Section 2 presents the data used, while Section 3 describes the methods and model configuration. Section 4 presents the observational conditions and evolution of meteorological fields during the genesis of an MTC over the Arabian Sea and western India, occurring with a preceding Bay of Bengal LPS. In Section 5, we discuss numerical simulations that validate the role of Bay of Bengal LPSs in the formation of Arabian Sea MTCs. Finally, Section 6 concludes our study.

§ DATA

§.§ NCEP Final Analysis (FNL) Data

The WRF model is configured to ingest various atmospheric data sets to produce the initial and boundary conditions. We utilize the widely used National Centers for Environmental Prediction Final Analysis (NCEP-FNL) Global Analysis data set. FNL data are generated from the Global Data Assimilation System (GDAS), which uses observational data from the Global Telecommunications System (GTS) and other sources for assimilation. We utilize data at six-hourly intervals and one-degree horizontal resolution, available at <https://rda.ucar.edu/datasets/ds083.2/>. This is one of the most commonly used data sets for research and forecasting purposes <cit.>.

§.§ ERA-5 Data

The main reanalysis product used in this work as a proxy for observations is the ECMWF ERA-5 fifth-generation atmospheric reanalysis data set <cit.>, which is generated using cycle 41r2 of the Integrated Forecast System (IFS) model. The IFS system utilizes a four-dimensional variational data assimilation scheme and takes advantage of 137 vertical levels and a horizontal resolution of 0.28125^∘ (31 km, or TL639 triangular truncation). The data are stored at every hour of model integration. This study utilizes six-hourly winds, vorticity, divergence, temperature, and moisture fields on pressure levels from 1000 to 100 hPa at the native horizontal resolution. Apart from a high spatial and temporal resolution, ERA-5 has several important updates relative to its predecessor ERA-I, which was terminated in 2019. These include the assimilation of ozone, satellite radiance, aircraft, and surface pressure data. One of the critical changes in ERA-5 is the use of an all-sky approach instead of the clear-sky approach used in ERA-I, thus providing additional information about precipitation and cloud distribution. These updates and others have resulted in more consistent sea-surface temperature and sea-ice fields compared to ERA-I <cit.>.

§.§ Track data set

For constructing composites of instances of coexisting Arabian Sea MTCs and the Bay of Bengal LPS, we use 60 Type-2a MTCs of <cit.>. Type-2a MTCs are those which form in the presence of a preceding Bay of Bengal LPS.
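To indicate how these data sets feed into the composite analysis used later, the Python/xarray sketch below outlines the lag-compositing of six-hourly ERA-5 fields around the 60 Type-2a events. The file names and the structure of the event list are hypothetical placeholders, and in practice anomalies from a daily climatology, rather than raw fields, would be composited.

```python
import xarray as xr
import pandas as pd

# Hypothetical inputs: one row per Type-2a event with its Day 0 (heaviest rain) date,
# and 6-hourly ERA-5 pressure-level fields (vorticity, geopotential, humidity, winds).
events = pd.read_csv("type2a_events.csv", parse_dates=["day0"])
era5 = xr.open_dataset("era5_plev_6hourly.nc")

composites = {}
for lag in range(-6, 1):                       # Day -6 ... Day 0
    samples = []
    for day0 in events["day0"]:
        t = day0 + pd.Timedelta(days=lag)
        samples.append(era5.sel(time=t, method="nearest"))
    comp = xr.concat(samples, dim="event").mean("event")
    # 800-400 hPa vertical average used for the plan-view composite maps.
    mid = comp.where((comp.level >= 400) & (comp.level <= 800), drop=True)
    composites[lag] = mid.mean("level")
```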
§ METHODOLOGY §.§ Bogus Vortex Scheme To understand the effect of the Bay of Bengal LPS in the formation of Arabian Sea MTC, we superimposed a cyclonic vortex over the Bay of Bengal as a perturbation to the June-July climatology of NCEP Final Reanalysis. This modified data was used as initial conditions for the vortex initialization simulations. The construction of the vortex has been done using NCAR-AFWA tropical cyclone (TC) "bogussing" scheme <cit.>. This method is not computationally expensive and follows the vorticity removal and inversion method to remove any preexisting vortex and insert a prescribed vortex in nonlinear balance. This method has been widely used as a "bogussing" scheme in the fifth-generation National Center for Atmospheric Research–Pennsylvania State University (NCAR–Penn State) Mesoscale Model (MM5) system. An updated version has been implemented in WRF <cit.>. In particular, this method is widely utilized in enhancing the description and details of tropical cyclone initialization <cit.> and in correcting the location and intensity of the cyclone precursors <cit.>. The scheme runs in two primary steps. The first step aimed to find the background state by removing perturbed fields associated with the regional vorticity by solving a series of Poisson's equations. In the second step, a user-defined balanced vortex of prescribed winds and moisture profiles is placed in the specified location. Since our background state is already known as climatology, we only use the vortex addition part of the scheme. For further details of method please see Appendix A-1. §.§ Model Setup We utilize WRF Model version 4 to perform numerical experiments. Since we aim to understand mainly the large-scale interactions between MTCs and Bay of Bengal LPS, a relatively coarse horizontal grid resolution of 57 km has been used with 33 vertical hybrid sigma levels in the vertical. First, we calculate 16 years (2000-2015) of 6 hourly climatologies from 20 June to 10 July. The 2000-2015 time interval is chosen for climatology as most of the input variables are consistent in dimensions and resolutions in this time window, outside of which there are differences in the soil levels and vertical resolutions of the input data. The calculated six-hourly climatology serves as the lateral boundary conditions for the simulations. Sea surface temperature (SST) and soil moisture were kept to their climatological value. Vortices have been added to the climatology in different locations over the Bay of Bengal to evaluate the sensitivity and to reflect the differences in locations of the Bay of Bengal LPSs. Further, the FNL data specified lateral boundary conditions, and initial conditions were utilized for model initialization in dry and cold Bay of Bengal simulations. As far as physics is concerned, several combinations of physics schemes can be used in the WRF; however, not all combinations are tested and suitable for weather phenomena of different regions of the globe and for different cases. Since Arabian Sea MTCs share similar characteristics with topical systems <cit.>, we utilize well tested combination of physics for tropical systems known as the "TROPICAL SUITE" in vortex addition experiments. Specific summary of important model details is presented in Table <ref>. The simulations are initialized with a Dolph digital filter, which runs backward and forwards for 12 hours before the actual model runs to remove any initial imbalances in the initial conditions <cit.>. 
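The climatology step described above can be illustrated with a short xarray sketch. The file and variable layout below are hypothetical placeholders; this only illustrates the time-averaging over 2000-2015 for 20 June through 10 July, and in the actual workflow the FNL fields would be prepared through the WRF preprocessing chain rather than read directly as NetCDF.

```python
import xarray as xr

# Hypothetical NetCDF versions of the FNL analyses, 6-hourly, 2000-2015.
fnl = xr.open_mfdataset("fnl_2000-2015_*.nc", combine="by_coords")

# Keep only 20 June - 10 July of every year.
def in_window(time):
    md = time.dt.month * 100 + time.dt.day
    return (md >= 620) & (md <= 710)

fnl_win = fnl.sel(time=fnl.time[in_window(fnl.time)])

# Average each calendar date and synoptic hour (00/06/12/18 UTC) over the 16 years,
# yielding the 6-hourly climatology used for initial and lateral boundary conditions.
clim = fnl_win.groupby(fnl_win.time.dt.strftime("%m-%d %H")).mean("time")
clim.to_netcdf("fnl_clim_20jun-10jul_6hourly.nc")
```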
In the second set of experiments, the Bay of Bengal sea surface temperature was cooled by replacing the SST over the Bay of Bengal with a cold Gaussian temperature distribution (see Appendix A2 for details), and the atmosphere was dried in the initial conditions over the Bay of Bengal and East India (see Figure S4). After trial and error, a slightly different combination of physics schemes is used here, which satisfactorily reproduces the intensity, size, and location of the simulated July 2020 MTC compared to observations. In particular, the Kain-Fritsch scheme is used for cumulus parameterization and the Kessler scheme for microphysics; the other schemes are taken from the "CONUS SUITE", specifically RRTMG for radiation, the Mellor-Yamada-Janjic (MYJ) scheme for the planetary boundary layer, the Eta similarity scheme for the surface layer, and the unified Noah land surface model for land surface parameterization.

§.§ Vorticity Budget

To understand the intensification of Arabian Sea MTCs, we use a vorticity budget <cit.>. The inviscid vertical relative vorticity (ξ) budget equation in pressure coordinates reads
∂ξ/∂ t = -ξ∇·V - f∇·V - V·∇ξ - βv - ω∂ξ/∂ p + (-∂ω/∂ x ∂ v/∂ p + ∂ω/∂ y ∂ u/∂ p) + Res.
In this equation, ξ represents the relative vorticity, β the meridional gradient of the Coriolis parameter, ω the vertical pressure velocity, u the zonal wind component, v the meridional wind, and V the total horizontal wind vector. Here, the first two terms on the RHS represent the vorticity generation (destruction) by convergence (divergence) of horizontal winds when coupled with the relative and planetary vorticity, respectively. The third term represents vorticity advection by the horizontal wind (V), the fourth term is the coupling between differential planetary vorticity and the meridional wind (the so-called β-term), the fifth term represents the vertical advection of relative vorticity via upward or downward motion, and the sixth term is the tilting or twisting term, which is the change in vorticity due to horizontal gradients in the vertical velocity or vice versa. Finally, the last term is a residual that arises due to the parameterized physics and numerical approximations in post-processing the data. To understand the formation and intensification of the MTC, each term of the vorticity equation is calculated and compared to deduce the dominant contributions to the system's growth.

§ RESULTS AND DISCUSSIONS

§ EVOLUTION OF DYNAMIC FIELDS DURING TYPE 2A MTCS

We aim to investigate the role of Bay of Bengal systems in forming Arabian Sea MTCs, specifically those that form with a preceding and coexisting LPS over the Bay of Bengal, i.e., the so-called Type 2a MTCs <cit.>. We first discuss features of Type 2a systems and their connection to Bay of Bengal LPSs from reanalysis data. Plan views of Type 2a MTC lag composites of anomalies of total precipitable water, relative vorticity, potential vorticity (PV), geopotential height, and wind vectors are shown in Figure <ref> in rows 1, 2, 3, and 4, respectively. Where applicable, variables are vertically averaged between 800-400 hPa to ensure that composite anomalies have a coherent vertical structure and are robust through the depth of the middle troposphere. The horizontal extent and magnitude of precipitable water anomalies (row 1; Figure <ref>) increase over the northeast Arabian Sea from Day -6 to Day 0, i.e., till the day of heaviest precipitation. During this period, the wind anomaly (quivers in Figure <ref>) is easterly and slightly northeasterly over the northern Arabian Sea, where the maximum accumulation of water vapor is observed.
The easterly wind anomalies are not localized to the Arabian Sea region but appear as part of a large-scale circulation; indeed, wind anomalies originate from the Bay of Bengal and reach up to the Arabian Sea, enveloping the southern peninsula of India. Sequentially, on Day -6, a weak negative height anomaly envelops the southern tip of India (row 4; Figure <ref>), accompanying a slight positive vorticity anomaly over the southwest Bay of Bengal (around 15^∘N, 85^∘E; row 2, Figure <ref>). The height anomaly over the Arabian Sea has a large-scale east-west oriented structure which is restricted to the south of 15^∘N (row 4; Figure <ref>). The Bay of Bengal height anomaly centered near 15^∘N to 85^∘E deepens from Day -6 to Day -4 with the intensification of associated relative vorticity and potential vorticity anomalies (rows 2 & 3 of Figure <ref>). Following this, an east-west shear zone forms extending from the Bay of Bengal to the Arabian Sea (rows 2 & 3; Days -6,-2; Figure <ref>). Within this evolving shear zone, over the Arabian Sea, as seen on Day -4, two significant positive PV and relative vorticity anomaly centers form — one near the west coast of India near 15^∘N and 72^∘E and another around 65^∘E and 18^∘N. With the further intensification of the system over the Bay of Bengal (Days -4 to -2), the westward-situated Arabian Sea vorticity anomaly moves eastward, and by Day -2, it merges with the anomaly over the western coast of India. Note that as the height anomaly over the Bay of Bengal deepens and moves northward (Day -6 to Day -2), the trough over the Arabian Sea intensifies and becomes more localized off the coast of Maharashtra. This is reflected in the rapid increase of vorticity and PV anomalies over the northeast Arabian Sea from Day -2 to Day 0 (row 3; Figure <ref>). Overall, the evolution of height, velocity fields, and winds suggest that anomalies over the Arabian Sea and western India are indeed linked with the Bay of Bengal system — but, the anomalies first deepen over the Bay of Bengal, and this is followed by an increase in vorticity and PV over the Arabian Sea. The intensification of the Bay of Bengal system before that of the Arabian Sea region seems reasonable as, during the primary monsoon months, the Bay of Bengal SST and precipitable water are relatively high compared to the Arabian Sea <cit.>. These conditions are favorable for the development and intensification of cyclonic systems. On the contrary, the deep convection over the Arabian Sea is usually hampered by colder SSTs due to the upwelling of subsurface cold water and further by a dry middle troposphere and a low-level inversion <cit.>. The composite Type 2a MTC circulation in Figure <ref> bears a close resemblance with the flow during rainy days over western India <cit.>; specifically, both are characterized by the westward extending height depression and associated circulation which originates over Bay and ends over the Arabian Sea. Further, the structure of geopotential height at 600 hPa on rainy days <cit.> and a composite of geopotential height anomaly of Type 2a MTCs in the fourth row of Figure <ref> show a non-circular Bay of Bengal system with a westward extending trough. This trough is remarkably similar to the steady-state Rossby response to off-equatorial heating <cit.>. Thus, the apparent atmospheric link between the Arabian Sea and the Bay of Bengal basins may be mediated by the Gill response <cit.> to off-equatorial diabatic heating over the Bay of Bengal and East India. 
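For reference, the Gill-type response invoked here is the steady solution of the damped, linearized shallow-water equations on an equatorial β-plane forced by a prescribed heating Q. In a standard dimensional form (our paraphrase of the classical Matsuno-Gill model, with Rayleigh friction and Newtonian cooling rate ε and gravity-wave speed c), εu - βyv = -∂φ/∂ x, εv + βyu = -∂φ/∂ y, and εφ + c^2(∂ u/∂ x + ∂ v/∂ y) = -Q, where φ is the geopotential perturbation. For heating centred off the equator, as over the Bay of Bengal and East India, the response is dominated by a single cyclonic Rossby gyre to the northwest of the heating, with a low-pressure trough extending westward of the heat source, which is qualitatively the structure seen in the composites described above.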
Previous studies had noted the remote influence of Bay of Bengal heating on the western Indian circulation, wherein when heating was prescribed over the Bay of Bengal, cyclonic vorticity and precipitation were enhanced over western India and the northeast Arabian Sea, and this was argued to be due to a Rossby wave response to the heating <cit.>. More broadly, remote induction of synoptic scale systems downstream of a mature tropical cyclone has been observed in various ocean basins <cit.>, including cyclogenesis in Rossby waves radiated from a parent cyclone <cit.>. Up to now, we focused on the horizontal evolution of the system; to understand the vertical structure evolution dynamical fields during the Type 2a system formation, vertical-horizontal cross sections of composite anomalies of relative vorticity, PV, zonal winds of 60 Type-2a MTCs are shown in Figure <ref>. Before Day -4 (not shown), the relative and potential vorticity are disorganized and weak in much of the troposphere over western India. However, as the low-level PV and relative vorticity spin-up over the Bay of Bengal, the corresponding mid-level entities increase in an organized and statistically significant manner over the Arabian Sea (Figure <ref>; row 1 and row 2). At later stages (Day -1 to 0), the vorticity and PV intensify at the lower levels, too, reflecting the increasing presence of deep convection in the system <cit.>, or a transition of the MTC into a lower troposphere cyclone phase <cit.>. The zonal wind (third row; Figure <ref>) shows a significant increase of an easterly wind anomaly in the middle troposphere from Day -3 to Day -1 over western India, which extends up to 300 hPa with a maximum around 700 hPa. Thus, it appears that the anomalous easterlies in the middle troposphere are enhanced by the system in the Bay of Bengal. These enhanced easterlies help prevent dry air mixing from the north and west and decrease the strength of inversion by reducing elevated warm air advection <cit.>. In turn, these favorable conditions allow for the development of deep convection and help further moisten the free troposphere over the Arabian Sea. The increase in strength of the anomalous easterlies and their influence is more clearly evident in Figure <ref>. From Day -6 to Day 0, middle tropospheric easterlies become stronger (Figure <ref>a), and concomitantly, relative humidity increases over western India (Figure <ref>b). The expected reduction in low-level static stability or the reduction in strength of the low-level inversion is also observed (Figure <ref>c). A weaker inversion allows for an increase in the middle troposphere moisture which favors deep convection <cit.> and the further spin-up of an MTC and deepening the height anomaly as seen in Figure <ref>d. To a first approximation, we hypothesize that sufficient localized latent heating in the atmosphere above the Bay of Bengal and East India results in a off equatorial Gill-type response consisting of a westward extending zonal trough that reaches up to the Arabian Sea starting from Bay of Bengal <cit.>. Given its rotational character, this Rossby gyre is characterized by enhanced horizontal shear and cyclonic vorticity over western India. A similar response to realistic diabatic heating in the middle troposphere has been noted in recent general circulation model experiments <cit.>. 
Moreover, the enhanced horizontal shear associated with the Rossby gyre can be critical for the growth of the Arabian Sea system by barotropic instability <cit.>; in fact, moist barotropic instability has been identified as a possible source of energy for the formation of monsoon lows and MTCs <cit.>. In addition to the shear and vorticity, the enhanced easterlies north of the northeast Arabian Sea and western India prevent the inflow of desert air, thus weakening the climatological inversion and raising the possibility of convection that can moisten the free troposphere. The altered dynamic and thermodynamic conditions, forced by LPS-induced heating in the Bay of Bengal, in turn favor the formation of MTCs over the northeast Arabian Sea and western India. § BOGUS LPS OVER THE BAY OF BENGAL To verify the hypothesis that systems in the Bay of Bengal play a significant role in creating a favorable environment over the Arabian Sea, and subsequently contribute to the genesis of MTCs, we now conduct a series of numerical experiments utilizing the Weather Research and Forecasting (WRF) model. The details of the model, its configuration and setup, and the bogus vortex technique are described in the Methods section. The model domain is shown in Figure <ref>. The domain is chosen to be large enough to include several important components of the monsoon, such as the Somali Jet, the monsoon trough, the heat low, the Bay of Bengal, and the Arabian Sea. In the first set of experiments, twenty-one bogus vortices (or monsoon lows) are added at different locations in the Bay of Bengal (Figure <ref>). More attention has been given to region A1, with nine bogus vortices (their coordinates are: 14N, 81E; 15N, 81E; 16N, 81E; 14N, 82E; 15N, 82E; 16N, 82E; 14N, 83E; 15N, 83E; 16N, 83E), as this is where the first signs of an LPS appear on Day -6 in the composite plan views of Type 2a formation (Figure <ref>). The remaining 12 members are added at other locations in regions A2 to A4[ The locations of the other ensemble members in regions A2 to A4 have been chosen to check the sensitivity of MTC induction to different locations of lows in the Bay of Bengal. Note that the intensity and size of the vortex are similar to the IMD data and to the observed sizes of Bay of Bengal LPSs <cit.>, respectively. Weak and small vortices are advected by the mean flow and take longer to intensify; hence these particular values were selected by trial and error.]. Specifically, the coordinates in region A2 are: 20N, 81E; 20N, 83E; 20N, 85E; 20N, 87E; in region A3: 18N, 87E; 16N, 87E; 14N, 87E; 18N, 81E; and in region A4: 18N, 83E; 18N, 85E; 14N, 85E; 16N, 85E. Note that we use slightly reduced sizes (r_m =200 km against r_m =350 km) in A1 to overcome the difficulties of model instabilities due to the effects of the topography of the Himalayan regions at higher latitudes during dynamical adjustment. Further, v_max =14 ms^-1 or 27 kt has been chosen, which is the strength of a monsoon depression as defined by the IMD. The 600 hPa geopotential height after 24 hours of simulation for the nine members of group A1 is shown in Figure S1. As expected, all ensemble members show lows at the locations where bogus vortices were added, along with a trough extending from the Bay of Bengal up to the Arabian Sea and western India. Apart from the trough, there is no signature of MTC existence over the Arabian Sea.
However, the heights at 600 hPa after 60 hours of model integration (Figure <ref>) suggest the existence of MTCs, as evident from closed height contours centered around 70^∘E and 18^∘N in all the nine ensemble members region A1. Figure S3 depicts the entire formation cycle of an MTC in a single ensemble member, where a bogus vortex is introduced at 82^∘E and 15^∘N (center of region A1). It is important to note that similar evolution is observed in other ensemble members. The time evolution clearly illustrates that as the Bay of Bengal system progresses northwards, the associated height perturbations extend westward, covering parts of western India and the Arabian Sea as a westward-extending zonal trough from 24 to 36 hours. Within the western end of the trough, which extends over portions of the Arabian Sea, a closed vortex emerges after about 48 hours of simulation and reaches maturity as a closed vortex at approximately 60 hours of simulation. Further, this zonal trough connects the induced MTC with the bogus Bay of Bengal vortex. Both systems encircle each other similar to Type 2a composites <cit.> and with specific MTC observations <cit.>. Indeed, a close inspection of the location of the trough and MTC formation site in Figure S1, S3 and Figure <ref>, respectively, suggest that MTC formation occurred in the middle tropospheric trough, which was induced by the bogus Bay of Bengal vortex. To understand the sensitivity of MTC genesis to different locations of Bay of Bengal LPSs, other ensemble members were added over A2, A3, and A4 regions shown in Figure <ref>. The 600 hPa geopotential surface after 24 hours of model integration of group A2 to A4 is shown in Figure S2. During the first 24 hours of simulation, similar to group A1, we see only a westward extending trough up to the Arabian Sea and no closed vortex or existence of MTC. However, after 96 hours of model integration, the signature of MTC becomes evident over the Arabian Sea and western India in group A2 (first row; Figure <ref>) and A4 (third row; Figure <ref>). For the members of set A3 (second row; Figure <ref>), we only observe a trough formation, which does not develop into a closed-form vortex or MTC for up to 96 hours of model integration. This aligns with the findings of <cit.>, who demonstrated that the majority of in-situ MTCs are associated with Bay of Bengal LPSs that have a relatively southern track, while majority of LPSs taking relatively northern tracks are not always connected to rainy MTCs over the Arabian Sea. These results strongly support the hypothesis that a robust dynamic link exists between Bay of Bengal LPSs with relatively southern tracks and the genesis of Type 2a MTCs over the Arabian Sea and western India. Remarkably, when a bogus vortex is introduced to the climatological conditions in favorable locations over the Bay of Bengal, an MTC is triggered over western India or the Arabian Sea within a timeframe of 60-100 hours (2.5 to 4 days), clearly highlighting the significant influence of the Bay of Bengal system. §.§ Vorticity Budget The genesis of MTCs in all the ensemble members from region A1 immediately leads to the question of the dominant terms of the vorticity budget that contribute to the growth of the cyclonic system over the Arabian Sea and western India. The composite time series of the various terms in the vorticity budget (details of the budget are in the Methods section) averaged over the MTC region (15-20^∘N, 65-72^∘E) between 500-600 hPa is shown in Figure <ref>. 
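For concreteness, the following minimal numpy sketch shows how each right-hand-side term of the budget can be estimated from pressure-level model output on a uniform latitude-longitude grid; the variable names, the centred differencing, and the grid assumptions are ours, and the tendency and residual follow from differencing consecutive output times.

import numpy as np

OMEGA, R_E = 7.292e-5, 6.371e6

def vorticity_budget(u, v, w, lat, lon, plev):
    # u, v [m/s] and w (omega) [Pa/s] are arrays shaped (plev, lat, lon) for a
    # single time; plev in Pa, lat/lon in degrees on a uniform grid.
    phi = np.deg2rad(lat)[None, :, None]
    dx = R_E * np.cos(phi) * np.deg2rad(np.gradient(lon))[None, None, :]
    dy = R_E * np.deg2rad(np.gradient(lat))[None, :, None]

    ddx = lambda a: np.gradient(a, axis=2) / dx
    ddy = lambda a: np.gradient(a, axis=1) / dy
    ddp = lambda a: np.gradient(a, plev, axis=0)

    f = 2 * OMEGA * np.sin(phi)
    beta = 2 * OMEGA * np.cos(phi) / R_E
    xi = ddx(v) - ddy(u)
    div = ddx(u) + ddy(v)

    return {
        "rel_stretching": -xi * div,
        "planetary_stretching": -f * div,
        "horiz_advection": -(u * ddx(xi) + v * ddy(xi)),
        "beta_term": -beta * v,
        "vert_advection": -w * ddp(xi),
        "tilting": -ddx(w) * ddp(v) + ddy(w) * ddp(u),
    }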
The observed vorticity tendency (Figure <ref>a green curve) and the sum of all terms on the right side of the vorticity budget (Equation <ref>, Figure <ref>a blue curve) match quite well, suggesting a well-balanced budget. In particular, during the first 24 hours of simulation, the positive vorticity tendency is mainly accounted for by the advection (mainly β) and tilting terms (Figure <ref>a and b). Notably, the stretching term does not contribute during the initial 24 hours. The vorticity tendency increases rapidly after 24 hours of simulation, reaching its maximum of around 70 hours. In contrast to the first 24 hours, the vortex stretching term accounts for most of the intensification from 24 to 72 hours (Figure <ref>a, red). Further, the coupling of relative vorticity with divergence accounts for a slightly larger portion of the stretching than the coupling of planetary vorticity with divergence (not shown). On the other hand, during the same period, the horizontal advection of vorticity (Figure <ref>a black) acts as a major sink and opposes vortex stretching. Though the β-term (Figure <ref>b, green) dominates during the first 24 hours and continues to act acts as a mild source of vorticity throughout the formation of MTC, it is relatively small compared to the stretching term during the rapid intensification period (24-70 hours). The vertical advection of vorticity (Figure <ref>b, red) also acts as a source of vorticity; however, it peaks after the maximum intensification, suggesting that it does not contribute significantly during the growth phase of MTC; also, as it peaks after the intensification of the middle-level maximum, this suggests a shift of MTC vorticity maximum to the lower levels <cit.> leading to the positive vertical advection down the gradient of relative vorticity in an environment of upward velocity. To get a spatial feel for the various terms, a plan view of the vorticity budget during the initial 24 hours is shown in Figure <ref>; the vorticity tendency (∂ξ/∂ t; Figure <ref>g) matches the sum of all terms on the right side of the vorticity equation (Figure <ref>h), suggesting an approximate closure of the vorticity budget. Two centers of positive vorticity tendency are observed; a strong one over the Bay of Bengal and East India — related to the intensification of Bogus LPS, and another relatively weaker maximum over the northeast Arabian Sea off the coast of Mumbai linked to the incipient MTC. Consistent with the times series in Figure <ref>, during the first 24 hours, the total positive vorticity tendency over the Arabian Sea (Figure <ref>h) mainly results from the tilting term (Figure <ref>e), the β-term (Figure <ref>f) and advection of relative vorticity (Figure <ref>c). The β-term (Figure <ref>f) shows a broad positive tendency over the Arabian Sea in regions of southerly winds in the western sector of the vortex or LPS. The stretching term (Figure <ref>a,b) remains positive over western India; however, it contributes less relative to tilting (Figure <ref>e) and advection of absolute vorticity (Figure <ref>c+f). Notably, vertical advection (Figure <ref>d) does not appear to be a major vorticity source during the first 24 hours over the Arabian Sea. Thus, during the initial phase of the genesis of MTC, the advection of absolute vorticity and tilting of horizontal vorticity vector by the meridional gradients of the vertical motion explains almost the entire geographical distribution of the vorticity tendency. 
The plan view of the vorticity budget during 24-48 hours of model integration is shown in Figure <ref>. The pattern of observed vorticity tendency (Figure <ref>g) and the sum of the right-hand side of the vorticity equation (Figure <ref>h) show a similar magnitude and horizontal distribution, again suggesting a sufficiently balanced budget. In contrast to the budget during the first 24 hours, the total vorticity tendency during 24-48 hours is almost entirely explained by the total stretching term (Figure <ref> a,b). However, again, note that though the total tendency follows stretching, advection acts to cancel it and reduces its effectiveness in the intensification process. The opposing nature of stretching and advection, and their almost cancellation, has also been observed in the movement of monsoon depressions <cit.>. The β-term (Figure <ref>f) acts like a source; however, its magnitude is much lesser than the stretching term. In contrast to the first 24 hours, here, the advection of relative vorticity by horizontal winds (Figure <ref>c) and the tilting term (Figure <ref>e) act to damp the vorticity tendency. The total vorticity tendency (Figure <ref>f, lines) and regions of positive absolute vorticity maximum (Figure <ref>f, colors) almost overlap, suggesting that the vorticity tendency primarily contributes to the intensification, and not much to the motion of the incipient MTCs — in accord with the observations of quasi-stationary nature of Type 2a MTCs <cit.>. To a large extent, these characteristics are consistent with the vorticity budget of July 1963 MTC, which showed stretching as a major source, followed by vertical advection and β-term, while advection acted as the main sink of vorticity <cit.>. Overall, the vorticity budget suggests that the advection of absolute vorticity and tilting initially provide a positive vorticity environment for MTC growth; however, during the rapid intensification phase, the vorticity stretching dominates the MTC intensification. Further, as was shown in Figure <ref>, the anomalous easterly winds from the Bay of Bengal converge over the Arabian Sea. These middle-level anomalous easterlies reduce desert air intrusions from the west and north, weaken the inversion layer, create favorable conditions for convection, and help moisten the middle troposphere. This important role of the easterlies is confirmed in the model runs for group A1 in Figure <ref>. In particular, Figure <ref>a and b clearly show that from -40 hours to the rapid intensification phase (i.e., before hour 0), static stability (Figure <ref>a) and strength of inversion (Figure <ref>b) continue to decrease in the lower troposphere. At the same time, the relative humidity increases (Figure <ref>c), reaching near saturation just before the maximum intensification. Similar to the observations (Figure <ref>), both simulated static stability parameter and humidity follow trends of the middle troposphere easterlies over western India (Figure <ref>d). Moreover, the increase of middle troposphere humidity also suggests the contribution of convection in the vortex stretching. Essentially Figure <ref>, <ref> and Figure <ref> suggest that the middle troposphere easterlies enhanced by the Bay of Bengal system eventually alter the thermal profile over the Arabian Sea and western India such that it destabilizes the lower troposphere, reduces dry air mixing from west and northwest and allows the moistening of the middle troposphere. 
This provides fertile ground for deep convection, convergence, and vortex stretching, and leads to MTC genesis. These are precisely the features observed during the formation of the July 1963 MTC <cit.>. §.§ Mechanism Denial Experiment To further confirm the role of the Bay of Bengal systems in the formation of Arabian Sea MTCs, we now consider a real instance, the July 2020 MTC, as a test case in which the Bay of Bengal LPS preceded the formation of the Arabian Sea MTC (i.e., a Type 2a member of <cit.>). The time evolution of geopotential height at 600 hPa for four control ensemble members initialized at 00, 06, 12, and 18 UTC on 1 July 2020 is shown in Figure S5, from Row 1 to Row 4, respectively. After 72 hours of simulation (Figure S5, Column 1), height anomalies over the Bay of Bengal deepen, and a trough appears over the Arabian Sea and western India. With the further intensification of the Bay of Bengal LPS, the Arabian Sea trough deepens from 72 to 120 hours. After 120 hours of simulation, an MTC develops in all the ensemble members within the middle troposphere trough around 20^∘N and 72^∘E. This MTC intensifies and becomes an isolated vortex after about 120-130 hours of simulation. Thus, the model successfully replicates the July 2020 Type 2a MTC formation. We now consider a mechanism denial experiment, i.e., we check whether the MTC forms in the absence of the Bay of Bengal LPS. The suppression of the Bay of Bengal LPS is achieved by cooling and drying the Bay of Bengal, as shown in Figure S4. With these unfavorable conditions over the Bay of Bengal and East India, the simulated geopotential height at 600 hPa of the four ensemble members is shown in Figure S6. Apart from a slight deepening of the height field around 96-120 hours of model integration (Figure S6), none of the ensemble members shows a strong signature of either an LPS in the Bay of Bengal or an MTC over the Arabian Sea. This suggests that in the absence of the Bay of Bengal LPS, the Arabian Sea system does not form. Further, for robustness, the composite difference in geopotential height between the control and cold-dry Bay of Bengal simulations is shown in Figure <ref>. This clearly suggests that favorable conditions over the Bay of Bengal allow the intensification of Bay of Bengal LPSs, which in turn deepens the trough over the Arabian Sea and supports MTC formation over western India and the northeast Arabian Sea. Conversely, with unfavorable conditions over the Bay of Bengal, the LPS did not intensify, and consequently, the MTC did not form over the Arabian Sea either. In more detail, Figure S7 displays the time series of absolute vorticity at 600 hPa, averaged over the Arabian Sea MTC region, for each ensemble member in both the control and cold-dry Bay of Bengal simulations. Notably, in the cold-dry experiments the vorticity in the northeast Arabian Sea consistently remained lower than in the control run throughout the entire simulation period. Furthermore, the cold-dry Bay of Bengal experiments show a continuous decrease in absolute vorticity. This indicates unfavorable conditions over the Arabian Sea compared to the control run, even though only the conditions over the Bay of Bengal were forced to be unfavorable. These findings strongly suggest that the presence of an LPS in the Bay of Bengal, along with its remote effects, plays a crucial role in regulating the distribution of vorticity, horizontal shear, and middle troposphere moisture over the Arabian Sea. Consequently, the frequent coexistence of LPSs in the Bay of Bengal and MTCs over the Arabian Sea cannot be regarded as a mere coincidence.
Instead, it implies that the presence of Bay of Bengal LPSs, their associated convective heating, and the resulting remote response play a critical role in creating favorable conditions for the formation and, to some extent, the maintenance of Arabian Sea MTCs. However, it should be kept in mind that although the majority of in-situ Type 2a MTCs are influenced by the Bay of Bengal system, not all MTCs in the Arabian Sea necessarily require the involvement of the Bay of Bengal system. In fact, Type 2b MTCs (Kushwaha et al., 2023), which form during the early monsoon season, are generated by the monsoon vortex, while Type 2c MTCs (Kushwaha et al., 2023) also undergo in-situ genesis but originate from precursors in the southern Bay of Bengal. It is worth noting that Type 2c MTCs are the weakest category among all MTCs and form under unfavorable conditions over the Bay of Bengal <cit.>. This observation further emphasizes the significance of Bay of Bengal convection and the associated remote teleconnection for the majority of rainy in-situ MTCs in the Arabian Sea. § CONCLUSIONS A consistent theme observed in previous research, from the seminal work of <cit.> to more recent studies by <cit.> and the comprehensive objective classification of Mid-Troposphere Cyclones (MTCs) by <cit.>, is the close association between rainy MTCs over western India and the presence of low-pressure systems (LPS) over the Bay of Bengal. It has been observed that, in many instances, the LPS in the Bay of Bengal precedes the formation of MTCs in the Arabian Sea. Through a systematic set of numerical experiments presented in this study, we have demonstrated that this coexistence is not coincidental; rather, the Bay of Bengal systems play a crucial role in the genesis of Arabian Sea MTCs. We initiated the analysis by examining composites of MTC genesis events in which the LPS in the Bay of Bengal preceded and coexisted with the Arabian Sea MTC (referred to as Type 2a MTCs in <cit.>). A lag composite of dynamical and thermodynamic fields for Type 2a MTCs revealed that the formation and intensification of Bay of Bengal systems were accompanied by a westward-extending middle troposphere trough. This trough, akin to an off-equatorial heat-induced Gill-type response, extended from the Bay of Bengal to the Arabian Sea. The middle troposphere zonal trough enhanced the horizontal zonal shear over western India and established middle tropospheric easterlies over northwest India. These middle troposphere easterlies prevented the mixing of dry and hot air from the desert regions to the north and west, resulting in a weakening of the low-level inversion layer and destabilization of the lower troposphere. Consequently, moistening of the middle troposphere occurred over the Arabian Sea and western India. Given that this region is climatologically unfavorable for moist convective activity, the moistening of the middle troposphere creates favorable conditions for deep convection to occur. These alterations in the dynamic and thermodynamic environment over the Arabian Sea, triggered by the presence of Bay of Bengal systems, provide a fertile ground for the genesis and growth of mid-troposphere cyclones. The apparent link between Arabian Sea MTC formation and Bay of Bengal LPSs was further explored with numerical experiments.
Two sets of experiments were performed: first, with added "Bogus Vortices or LPS" over the Bay of Bengal on top of June-July climatology, and second, the effect of preventing the formation or weakening of Bay of Bengal LPSs on MTC formation was studied by cooling and drying the Bay of Bengal. Interestingly, in most of the ensemble members when bogus vortices are introduced over the specific locations of Bay of Bengal (where LPSs are most frequently seen before Type 2a MTC formation in observation), the genesis of MTC takes place over Arabian Sea in 2.5–4 days of model simulations within the induced middle troposphere trough. With the addition of a Bogus Vortex over the Bay of Bengal, a westward extending trough forms stretching from the Bay of Bengal up to the Arabian Sea; as a result, easterlies are established north of the trough axis (around 20^∘N) and westerlies to the south. This modified flow enhances the horizontal shear and background vorticity. Middle troposphere easterlies north of the trough prevent dry air intrusion from the west and north, reduce the low-level inversion, and destabilize the lower troposphere. These features in the numerical simulations agree with the reanalysis-based characterization of the role of Bay of Bengal LPS in the development of Type 2a MTCs. Furthermore, the vorticity budget of induced MTCs shows that during the first 24 hours of MTC formation, advection of absolute vorticity and tilting account for the intensification. However, during the rapid intensification phase, vortex stretching dominates. The β-term remains positive throughout; however, it contributes mainly in the early phase of intensification. Vertical advection is initially quite weak, but it does contribute to the vorticity tendency in the middle troposphere in the later stages of MTC development. A mechanism denial experiment involving the cooling and drying of the Bay of Bengal was conducted to confirm the link between Low-Pressure Systems (LPS) in the Bay of Bengal and the genesis of Mid-Troposphere Cyclones (MTCs) in the Arabian Sea. The experiment consisted of a control run where an actual Arabian Sea MTC was successfully simulated following the formation of an LPS over the Bay of Bengal. Subsequently, the Bay of Bengal was artificially cooled and dried to suppress and weaken the formation of Bay of Bengal LPSs. In the control experiment, the LPS over the Bay of Bengal intensified, leading to the development of a westward-extending trough. This resulted in the strengthening of horizontal shear and the establishment of easterly winds. Consequently, an MTC formed over the Arabian Sea as expected. However, in the experiment with the cooled and dried Bay of Bengal, LPS over the Bay of Bengal failed to intensify. As a result, the westward-extending trough did not develop, and the horizontal shear and easterly winds remained weak. Consequently, an MTC did not form over the Arabian Sea. These findings provide compelling evidence supporting the notion that the coexistence of the Bay of Bengal system during the formation of Arabian Sea MTCs is not a mere coincidence. Instead, the genesis and maintenance of the majority of Type 2a MTCs can be attributed directly to the dynamic influence of LPS over the Bay of Bengal. The mechanism denial experiment, by dampening the formation of Bay of Bengal LPSs through cooling and drying, clearly demonstrated the crucial role of these systems in the development of MTCs over the Arabian Sea. apalike
http://arxiv.org/abs/2307.02897v1
20230706100952
RefVSR++: Exploiting Reference Inputs for Reference-based Video Super-resolution
[ "Han Zou", "Masanori Suganuma", "Takayuki Okatani" ]
cs.CV
[ "cs.CV" ]
RefVSR++: Exploiting Reference Inputs for Reference-based Video Super-resolution Han Zou^1,2     Masanori Suganuma^1,2     Takayuki Okatani^1,2 ^1Graduate School of Information Sciences, Tohoku University     ^2RIKEN Center for AIP {hzou, suganuma, okatani}@vision.is.tohoku.ac.jp Received September 15, 1996; accepted March 16, 1997 ================================================================================================================================================================================================================== Smartphones equipped with a multi-camera system comprising multiple cameras with different field-of-view (FoVs) are becoming more prevalent. These camera configurations are compatible with reference-based SR and video SR, which can be executed simultaneously while recording video on the device. Thus, combining these two SR methods can improve image quality. Recently, Lee et al. have presented such a method, RefVSR. In this paper, we consider how to optimally utilize the observations obtained, including input low-resolution (LR) video and reference (Ref) video. RefVSR extends conventional video SR quite simply, aggregating the LR and Ref inputs over time in a single bi-directional stream. However, considering the content difference between LR and Ref images due to their FoVs, we can derive the maximum information from the two image sequences by aggregating them independently in the temporal direction. Then, we propose an improved method, RefVSR++, which can aggregate two features in parallel in the temporal direction, one for aggregating the fused LR and Ref inputs and the other for Ref inputs over time. Furthermore, we equip RefVSR++ with enhanced mechanisms to align image features over time, which is the key to the success of video SR. We experimentally show that RefVSR++ outperforms RefVSR by over 1dB in PSNR, achieving the new state-of-the-art. § INTRODUCTION Recent mobile phones are now equipped with multiple cameras, typically two or three, to enhance the overall photography experience. The equipped cameras have different focal lengths, allowing users to capture photos/videos with different fields of view (FoVs) simultaneously. For example, in a scenario where two cameras with different FoVs capture the same scene, the camera with a narrower FoV produces a higher-resolution image of the corresponding portion of the scene. Integrating it with the same scene image captured by the other camera with a larger FoV will allow us to increase its image resolution, at least in the region of overlap between the FoVs. This problem, known as reference-based image super-resolution (Ref-SR), has been studied recently in the community, resulting in the development of several methods <cit.>. It is formalized as predicting a high-resolution SR image of a low-resolution (LR) input of a scene, aided by a reference (Ref) image of a similar scene. Specifically, we consider a scenario where the Ref image has a narrower FoV than the LR image. Existing Ref-SR methods first find the correspondences between the LR and Ref images, followed by warping the Ref image to align with the LR image, and finally fusing them to predict higher quality SR images. When taking a video, not a single photograph, of a scene with mobile phone cameras, we can increase the image resolution in a different way by using the images contained in the video. It is to aggregate the images in the temporal axis and obtain a rich visual feature of the captured scene, from which we super-resolve each image. 
Note that this is doable even with a single camera. This is called video SR (VSR), which has also been studied for a while in the community. Now, with mobile phones equipped with multiple cameras, it is possible to integrate VSR with reference-based super-resolution (Ref-SR) to further enhance the quality of super-resolved images. Recently, Lee et al. <cit.> have proposed a method for this problem, reference-based video SR, named RefVSR. While RefVSR has shown promising results, we believe there is much room for further improvements. There are two key issues with reference-based video SR. The first pertains to how to make maximum use of all available observations. Specifically, given a sequence of LR images and a sequence of Ref images, conventional video SR methods can effectively handle the LR image sequence since their objective is to super-resolve them. However, this may not apply to the Ref images. An arbitrary number of Ref images at arbitrary time steps could provide valuable information to super-resolve an LR image at a particular time step. Then, it will be crucial to determine the most useful Ref image(s) and aggregate them for the LR image at each time step. The second is how to match and align images accurately. Reference-based video SR requires aligning images over time and aligning the LR and Ref images at each time step. The former is a fundamental requirement of a well-established video SR approach, which propagates image features over time to generate high-quality SR images. However, the propagation is susceptible to the accumulation of errors, which causes a misalignment of images, leading to blurry SR outputs. We point out that existing methods for image alignment are error-prone. Based on these considerations, we propose a novel method for reference-based video SR dubbed RefVSR++. It extends RefVSR, resolving the aforementioned critical issues. First, RefVSR++ is designed to maximize the use of both the Ref and LR image sequences. Specifically, RefVSR++ utilizes two streams to propagate different image features, in contrast to RefVSR which uses only one. One stream propagates the features of Ref images, referred to as the Ref feature stream, and the other propagates aggregated Ref and LR image features, referred to as the SR feature stream. Information flows unidirectionally from the Ref feature stream to the SR feature stream. Both streams propagate features in a recurrent and bi-directional manner as with RefVSR. Second, RefVSR++ addresses the issue of image alignment by utilizing a more accurate and robust method than the confidence-guided method used in RefVSR. To achieve this, we combine deformable convolution with the standard image alignment method based on optical flow estimation. Additionally, We propose a learning-free method for propagating more reliable Ref features. Specifically, it replaces low-confidence Ref features with high-confidence ones in the Ref image sequence. These component methods extend RefVSR and enable more effective feature aggregation across all input images over an extended period of time, leading to superior SR outputs. The resultant method, RefVSR++, has established the new state-of-the-art on the RealMCVSR dataset, i.e., PSNR/SSIM = 35.90/0.965 (RefVSR++) vs. 34.86/0.959 (RefVSR). § RELATED WORK §.§ Video Super-Resolution The goal of video super-resolution (VSR) is to recover a high-resolution (HR) video from a low-resolution (LR) input video. VSR has been studied for a long time. 
Recent methods are classified into two categories: methods based on a sliding window and those based on recurrent computation. The sliding window is widely used in CNN-based VSR methods <cit.>, which receive several consecutive frames as input, traverse them with a sliding window, and then predict an SR image of their center frame. These methods must process each frame multiple times to handle a long video. Thus, they tend to suffer from a high computational cost, and they can deal with a limited number of input frames, which makes it hard to deal with long-term dependencies. Methods based on recurrent computation <cit.> utilize reconstructed high-quality images at previous time steps or their features to generate high-quality images at the current time step. Huang et al. <cit.> propose recurrent bi-directional networks to better utilize temporal information. Chan et al.<cit.> propose using high-order grid connections and flow-guided alignment for the recurrent computation. §.§ Reference-based Super-Resolution Reference-based super-resolution (RefSR) uses additional reference image(s) to super-resolve an input low-resolution image. Previous studies have shown the effectiveness of transferring information from a high-resolution reference image to generate SR images. A critical problem is accurately aligning the Ref image with the LR image, which is important for fusing their image features in a subsequent step to generate high-quality SR images. Zheng et al. <cit.> estimate optical flows between them for their alignment. Zhang et al. <cit.> propose using patch matching <cit.>, and Yang et al. <cit.> improve it by adopting attention mechanisms for feature fusion. Wang et al. <cit.> propose an aligned attention method for better feature fusion, which preserves high-frequency features via spatial alignment operations well. Huang et al. <cit.> decouple ref-based SR task into two sub-tasks: single image SR task and texture transfer task, and train them independently. It reduces misuse and underuse of the Ref feature, which often happens in Ref feature transfer. Lee et al. <cit.> propose RefVSR, which integrates RefSR with VSR; we will discuss their method in detail below. § IMPROVED USE OF REFERENCES FOR VSR §.§ Problem Formulation and Notation Let I^LR_t(∈ℝ^H× W× C) be an input sequence of low-resolution images, and I^Ref_t(∈ℝ^H× W× C) be an input sequence of reference images, with t=1,…,T. Here, H, W, C are the size/number of height, width, and channels of the images, respectively. Note that I^LR_t is a low-resolution image of a scene with a large FoV; I^Ref is an image of the same scene with a narrow FoV, making it a higher-resolution image of a part of the scene; see Fig. <ref>. The goal is to generate a SR version I^SR_t(∈ℝ^sH× sW× C) of I^LR_t with a upscaling factor s. §.§ Revisiting RefVSR RefVSR <cit.> integrates reference-based SR and video SR to solve the above problem. For the former, RefVSR uses I^Ref_t by warping and fusing its feature f^Ref_t with the feature f^LR_t of I^LR_t <cit.>. For the latter, RefVSR propagates scene image feature h_t in a recurrent fashion with t=1,…,T <cit.>, where h_t-1 from the previous time step is aligned with the current feature f^LR_t to compensate for the motion from t-1 to t, and then fused with f^LR_t to obtain an enriched feature h_t generating a high-quality SR image I^SR_t; see Fig. <ref>(a). 
RefVSR adopts bidirectional recurrent propagation in the forward (i.e., t-1 to t) and backward (i.e., t+1 to t) directions, following recent studies <cit.>. The computation in each recurrent cell is as follows. It first warps the Ref image feature f^Ref_t and the propagated feature h_t-1 from the previous step into f̃^Ref_t and h̃_t-1 to align them with f^LR_t. It computes a confidence map for the alignment of f̃^Ref_t with f^LR_t. It also propagates and updates another confidence map for the alignment of h̃_t-1 with f^LR_t, and then fuses f̃^Ref_t and h̃_t-1 with f^LR_t to produce h_t. Here, it employs a confidence-guided fusion to select well-matched features from f̃^Ref_t using the above confidence map, and well-matched features from h̃_t-1 using the propagated confidence map. There exist two drawbacks to this approach. First, it does not derive all the information from the inputs. That is, we have two observations I^Ref_t and I^LR_t at each time step t. RefVSR propagates only one type feature h_t in the temporal domain, which fuses and accumulates I^Ref_t and I^LR_t; see Fig. <ref>(a). However, I^Ref_t and I^LR_t contain different information due to their FoVs. For example, since the warped Ref feature f̃^Ref_t is aligned with f^LR_t, its feature outside the overlapped FoV can be error-prone. It is ideal that we propagate two features independently for I^Ref_t and I^LR_t. Second, RefVSR propagates a confidence map along with h_t over time, which is not well-founded compared to other components. Although the motivation is to select well-matched features for feature fusion, the method updating the confidence map appears heuristic; it takes the maximum of the flow warped propagated confidence map and the confidence map of the alignment of f^Ref_t and f^LR_t. However, the estimated optical flow is suboptimal and errors accumulate during repeated spatial warping, thereby jeopardizing the accuracy of Ref feature fusion. We show an example of a long-term backward propagation of Ref-based VSR in Fig <ref>, which shows the input LR sequences and the warped propagated confidence maps. We can see that confidence maps gradually become blurry over time, which will deteriorate the quality of feature fusion; see also Sec. <ref>. §.§ Proposed Approach Based on the above considerations, we propose to independently propagate features of I^Ref_t and I^LR_t, as shown in Fig. <ref>(b). As the goal is to generate I^SR_t from the feature propagation, we choose to propagate, not their features themselves, but one feature for I^Ref_t and the fused Ref and LR feature, from which we generate I^SR_t. We also cease to propagate a confidence map. Specifically, we propagate the fused Ref and LR feature, denoted by h^SR_t since it yields I^SR_t, and also the Ref feature, denoted by h^Ref_t. Strictly, we propagate the residual of the learned feature, i.e., h^Ref_t≡ĥ^Ref_t-f^LR_t, where ĥ^Ref_t is the Ref feature. Following the RefVSR and recent VSR methods, we employ bidirectional streams for the recurrent updates of the feature. § DETAILS OF THE PROPOSED METHOD §.§ Overview Figure <ref> shows the overview of the proposed network. As also shown in Figs. <ref>(b), it has two recurrent cells that propagate and update h^Ref and h^SR in each of the forward and backward streams and a single module that fuses the two h^SR's from the two streams and generates I^SR. The latter is merely an upsampling module consisting of ResBlocks and pixel shuffle as in RefVSR. 
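As a structural summary of this overview, the following sketch shows how one propagation direction can be organized with the two parallel streams; the recurrent cells are treated as black boxes, and the names and this particular decomposition are our paraphrase of the text rather than the authors' implementation.

import torch

def propagate_one_direction(feats_lr, feats_ref, flows, ref_cell, sr_cell):
    # feats_lr / feats_ref: lists of per-frame features f^LR_t and f^Ref_t.
    # flows[t]: optical flow used to warp features from step t-1 to t
    #           (a zero tensor can be passed for t = 0).
    # ref_cell / sr_cell: the recurrent cells of the Ref and SR streams.
    h_ref = torch.zeros_like(feats_lr[0])   # residual Ref-stream state h^Ref
    h_sr = torch.zeros_like(feats_lr[0])    # SR-stream state h^SR
    out = []
    for t in range(len(feats_lr)):
        # Ref stream: align the previous state, match/warp the current Ref
        # feature, and fuse both with f^LR_t.
        h_ref_full = ref_cell(h_ref, feats_ref[t], feats_lr[t], flows[t])
        h_ref = h_ref_full - feats_lr[t]    # propagate the residual Ref feature
        # SR stream: align the previous SR state and refine it with the fused
        # Ref feature.
        h_sr = sr_cell(h_sr, h_ref_full, feats_lr[t], flows[t])
        out.append(h_sr)                    # later fused with the backward pass
    return out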
We use forward propagation for the explanation below and omit the backward propagation, since it proceeds in the same way in the opposite direction. In each recurrent cell, we first extract the features f^LR_t and f^Ref_t from I^LR_t and I^Ref_t using two encoders, respectively. §.§ Updating Ref Features Figure <ref> shows the recurrent cell for the Ref feature stream. The cell updates h^Ref_t as follows: h^Ref_t = ĥ^Ref_t - f^LR_t, ĥ^Ref_t = F^Ref(h^Ref_t-1; f^Ref_t, f^LR_t, s_t-1,t), where s_t-1,t is the optical flow from I^LR_t-1 to I^LR_t estimated by a flow estimator. It first warps h^Ref_t-1 to compensate for the motion of the frame from t-1 to t. Following RefVSR, we estimate and use the optical flow s_t-1,t to warp h^Ref_t-1 so that it aligns with I^LR_t. Let h̆^Ref_t-1 be the resulting feature. However, the warped feature h̆^Ref_t-1 tends to suffer from misalignment due to errors in the estimated flow s_t-1,t. As we want to propagate sharp textures from the Ref frames, we need more fine-grained alignment. Huang et al.<cit.> show the effectiveness of the combination of optical flow and deformable convolution (DCN) <cit.> for Ref feature alignment. Thus, adopting their approach, we employ DCN as follows: h̅^Ref_t-1 = 𝒟^Ref(h̆^Ref_t-1, o^Ref ,m^Ref), where o^Ref is the offset (the estimated optical flow plus a learned residual) and m^Ref is a modulation mask, which are computed as o^Ref = C^Ref_o([f^LR_t, h̆^Ref_t-1]) + s_t-1,t , m^Ref = σ (C^Ref_m([f^LR_t, h̆^Ref_t-1])), where C^Ref_o and C^Ref_m are convolutional layers and σ is a sigmoid function. We then align f^Ref_t with f^LR_t to compensate for the FoV difference in an adaptive fashion depending on the images' contents. We employ patch matching, following RefVSR and other earlier studies <cit.>. It has been shown to be better for warping Ref features than optical flow-based warping. Specifically, it computes an index map M_t with its confidence map C_t for alignment as {M_t, C_t} = Match(I^LR_t, I^Ref_t). Specifically, we first embed I^Ref_t and I^LR_t into feature maps and extract 3×3 patches with a stride 1 using a shared encoder. Then we calculate the cosine similarity S_i,j between pairs of the LR feature patch i and the Ref feature patch j. The matching index and its confidence for patch i are given by M_t,i = argmax_j S_i,j and C_t,i = max_j S_i,j, respectively. Finally, we warp f^Ref_t to f̂^Ref_t using the index map M_t. We then update the propagated Ref feature h̅^Ref_t-1 with the warped Ref feature f̂^Ref_t. There are two potential issues with fusing such Ref features from different time steps. First, they may have different sharp textures at the same position due to changes in depth of field or illumination. Second, they contain different reliable regions. Thus, their direct fusion may result in mismatched or blurred features. To obtain a sharp fused Ref feature and preserve information from the current and previous Ref frames, we selectively transfer textures from the two Ref features h̅^Ref_t-1 and f̂^Ref_t to the LR feature. Specifically, we fuse each of h̅^Ref_t-1 and f̂^Ref_t with the current frame's LR feature f^LR_t separately, aiming to preserve textures from the Ref features from different sources: h^Ref_t = Conv([f^LR_t, h̅^Ref_t-1]), ĥ^Ref_t = Conv(C_t) · Conv([f^LR_t, f̂^Ref_t]), where Conv is a convolutional layer. We adopt an adaptive fusion guided by the confidence map C_t to fuse f̂^Ref_t. We then obtain the Ref feature by fusing the two Ref features computed above: ĥ^Ref_t= ℛ(ĥ^Ref_t, h^Ref_t, f^LR_t), where ℛ denotes a stack of ResBlocks.
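The patch matching and index-based warping just described can be written compactly. The sketch below computes the index and confidence maps from 3×3 patches of same-sized feature maps and rearranges the Ref features accordingly; it is an illustrative implementation under our own naming, and the dense (H*W x H*W) similarity matrix is only practical for small feature maps (real implementations restrict or tile the search).

import torch
import torch.nn.functional as F

def patch_match(feat_lr, feat_ref, patch=3):
    # Index map M_t and confidence map C_t between same-sized LR and Ref
    # feature maps of shape (1, C, H, W) from a shared encoder.
    unfold = lambda f: F.normalize(F.unfold(f, patch, padding=patch // 2), dim=1)
    q = unfold(feat_lr)                      # (1, C*p*p, H*W) LR patches
    k = unfold(feat_ref)                     # (1, C*p*p, H*W) Ref patches
    sim = torch.bmm(q.transpose(1, 2), k)    # (1, H*W, H*W) cosine similarities
    conf, idx = sim.max(dim=2)               # best-matching Ref patch per LR location
    h, w = feat_lr.shape[-2:]
    return idx, conf.view(1, h, w)

def warp_by_index(feat_ref, idx):
    # Rearrange Ref features so each LR location receives its matched feature.
    b, c, h, w = feat_ref.shape
    flat = feat_ref.view(b, c, h * w)
    gathered = torch.gather(flat, 2, idx.unsqueeze(1).expand(b, c, -1))
    return gathered.view(b, c, h, w)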
We use ĥ^Ref_t to refine the propagated SR feature in the SR feature cell. Finally, we obtain the residual Ref feature as in (<ref>a), which will be propagated to the next time step. §.§ Updating SR Features Figure <ref> shows the recurrent cell for the SR feature stream. The cell updates the propagated SR feature using the Ref feature ĥ^Ref_t computed above and the current LR feature f^LR_t as h^SR_t = F^SR(h^SR_t-1, ĥ^Ref_t, f^LR_t, s_t-1,t). First, we align the propagated LR feature h^SR_t-1 to the LR feature of the current time step t. Similarly to the Ref feature cell, we first warp it with the estimated flow s_t-1,t and then apply deformable convolution to cope with possible misalignment. Letting h̆^SR_t-1 be the warped feature by s_t,t-1, we apply DCN as h̅^SR_t-1 = 𝒟^SR(h̆^SR_t-1, o^SR, m^SR), with offset o^SR of flows and a modulation mask m^SR, which are given by o^SR = C^SR_o(ĥ^Ref_t, h̆^SR_t-1)+ s_t-1,t, m^SR = σ (C^SR_m(ĥ^Ref_t, h̆^SR_t-1)), where C^SR_o and C^SR_m are convolutional layers and σ is a sigmoid function. Next, we fuse the aligned feature h̆^SR_t-1 with the updated Ref feature ĥ^Ref_t from the other cell using several ResBlocks as h^SR_t = ℛ(ĥ^Ref_t, h̆^SR_t-1 ) +h̆^SR_t-1. The resulting h^SR_t will be used to generate SR output at the current time step t and also propagated to the next time step. §.§ Training We use the RealMCVSR dataset <cit.> in our experiments. It consists of ultra-wide, wide-angle, and telephoto videos from multiple scenes. The three videos have the same size but different FoVs. Note that the wide and telephoto videos have twice (2×) and four times (4×) the magnification of the ultra-wide video, respectively. Following the paper<cit.>, we consider the 4× super-resolution of an ultra-wide video using a wide-angle video as a Ref video. This yields an 8K video of the scene. We use the two-stage training strategy used in previous studies <cit.>. Specifically, we first train the proposed model in a supervised fashion. In this stage, 4× down-sampled ultra-wide and wide-angle videos are treated as LR and Ref inputs, respectively, and the original ultra-wide videos are treated as ground-truth. Subsequently, the pre-trained model is fine-tuned to adapt it to the original-sized videos. In this stage, the original ultra-wide and wide-angle videos serve as LR and Ref inputs, respectively. Due to the absence of ground-truth 8K ultra-wide videos, we use an approximate loss by using the original wide-angle and telephoto videos as pseudo ground-truths. The details of the first stage are as follows. We train the model to use the 4× downsampled versions of I^UW_t and I^Wide_t as I^LR_t and I^Ref_t, respectively, to predict as close I^SR_t to the original high-resolution I^UW_t as possible. We use the weighted sum of two loss functions, the reconstruction loss and the reference fidelity loss: ℓ_1st = ℓ_rec + βℓ_fid, where β is a weighting constant. For ℓ_rec, we use the same loss as RefVSR, which evaluates the low-frequency and high-frequency components of I^SR in two terms, respectively, as ℓ_rec = ||I^SR_t,blur-I^UW_t,blur|| + α∑_iδ_i(I^SR_t, I^UW_t), where I_t,blur indicates I_t is filtered by 3×3 Gaussian kernels with σ=0.5. The second term is called the contextual loss; δ_i(X, Y)=min_j𝔻(x_i,y_j) measures the distance between pixel x_i and the most similar pixel y_j at a perceptual distance 𝔻. 
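To make the pixel term of ℓ_rec concrete, the sketch below implements only the low-frequency part, i.e., a penalty between Gaussian-blurred SR and ground-truth frames, using a Charbonnier penalty (the pixel-based loss our models use, as noted in the experiments section); the contextual term δ_i is omitted, and the exact kernel construction and all names are our assumptions.

import torch
import torch.nn.functional as F

def charbonnier(x, y, eps=1e-3):
    # Charbonnier penalty, a differentiable approximation of the L1 loss
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def gaussian_kernel3x3(sigma=0.5):
    ax = torch.tensor([-1.0, 0.0, 1.0])
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def rec_loss_lowfreq(sr, hr, sigma=0.5):
    # Low-frequency term: penalize the difference between Gaussian-blurred
    # SR and ground-truth frames of shape (B, C, H, W).
    c = sr.shape[1]
    k = gaussian_kernel3x3(sigma).to(sr).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    blur = lambda t: F.conv2d(t, k, padding=1, groups=c)  # depthwise blur
    return charbonnier(blur(sr), blur(hr))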
For ℓ_fid, we employ the fidelity loss proposed in <cit.> that uses only a single Ref frame as ℓ_fid(I^SR_t,I^Wide_t) = ∑_iδ_i (I^SR_t,I^Wide_t)· c_i/∑_i c_i, where c_i is matching confidence. The details of the second training stage are as follows. To overcome the performance drop due to the fact that the model is trained on the down-sampled input, we follow the fine-tuning strategy proposed in <cit.>. Specifically, we use the original I^UW_t and I^Wide_t as the LR and Ref inputs, respectively. As there is no ground truth in this case, we 4× down-sample the predicted I^SR_t and measure its difference from I^Wide_t. In addition, we also use the fidelity loss between I^SR_t and the corresponding image I^Tele_t of the telephoto video. Therefore, the loss is given as follows: ℓ_2nd = ||I^SR_t↓,blur-I^UW_t,blur|| + γℓ_fid(I^SR_t,I^Tele_t), where γ is a weighting constant. §.§ Test-time Feature Replacement Huang et al. <cit.> point out an issue with reference-based SR methods: Ref images tend to be misused or underused due to suboptimal feature matching. Lower confidence with high similarity will lead to underuse, whereas high confidence with low similarity will generate wrong textures (i.e., overuse). Furthermore, inaccurate Ref textures will accumulate during the propagation, causing detrimental effects. We propose a method to cope with these problems. Figure <ref> illustrates the background. The images on the left represent the original Ref features (upper row) and their aligned features at three different time steps. Here, we use RGB images for an explanation; we applied the same image warp that applied to the Ref features to their original images. The three Ref features are aligned with the same target image and their reliable regions are shown as boxes. Thus, the region outside the boxes will have erroneous image features due to mismatch. It is hard to train the network to correct these errors in the second training stage for 8K video generation since we do have no ground truths for supervision. To cope with the difficulty, we propose directly replacing features with low confidence at a time step with features with higher confidence at other time steps. Our Ref feature stream computes the aligned Ref feature f̂^Ref_t and its confidence map C_t for alignment, given an LR image I^LR_t and a Ref image I^Ref_t, as explained in Sec. <ref>. Then we select a Ref image I^Ref_t' at different t' in a range [t-τ,t+τ] and perform patch matching/alignment with the current LR frame I^LR_t, where τ is a hyper-parameter. We denote the resulting features and confidence map by f̂^Ref_t' → t and C_t' → t. We then replace features with low confidence in f̂^Ref_t as f̂^Ref_t,ij = {[ f̂^Ref_t' → t, ij if C_t' → t,ij > C_t,ij; f̂^Ref_t, ij otherwise, ]. where ij indicates the image coordinates (i,j). An example result is shown in the right of Fig. <ref>; it is obtained by replacing features in the central frame with the other two Ref images. We can see that the generated feature map contains features that are sharper and thus will be more reliable. This method merely reuses the patch-matching module during the test time and does not require additional training. § EXPERIMENTAL RESULTS We evaluate our method using the RealMCVSR dataset <cit.>. It contains 161 triplets (i.e., ultra-wide, wide-angle, and telephoto, as explained in Sec. <ref>) of HD-resolution (1080× 1920) videos captured by iPhone 12 Pro Max equipped with triple cameras. 
The dataset is divided into training, validation, and testing sets, with 137, 8, and 16 triplets, respectively. We follow <cit.> for the experimental settings and thus omit the details here; see them in the supplementary. Following previous studies <cit.>, we set the loss weights α, β, and γ to 0.01, 0.05, and 0.1, respectively. §.§ Quantitative Evaluation We first quantitatively evaluate the proposed method on the RealMCVSR testset. Following <cit.>, we evaluate methods on the task of the first training stage, i.e., using 4× down-sampled ultra-wide and wide-angle videos as the LR and Ref inputs, respectively. We evaluated the models trained in the first stage. Table <ref> shows the results. We select several SR methods, i.e., RCAN <cit.>, IconVSR <cit.>, TTSR<cit.>, DCSR <cit.>, and RefVSR <cit.>. RCAN and IconVSR do not utilize a reference, while others are reference-based methods, which is indicated by `R-' in the type column. Only RefVSR and ours in the reference-based methods are video SR methods. We use two variants of our method with different channel numbers for comparison. The smaller one (with `-small') is 32, and the other is 64. The methods with `-pix' in the method column of Table <ref> indicate that they are trained with pixel-based loss alone (i.e., the first term in (<ref>) and (<ref>).) As with other image restoration/synthesis tasks, SR is affected by the perception-distortion trade-off <cit.>. Specifically, the models trained with pixel-based loss alone tend to yield better quantitative performance, whereas those trained with additional conceptual loss tend to yield better visual quality. For a fair comparison, we show two results for each of the compared methods. To be specific, RCAN, TTSR, DCSR, and RefVSR employ ℓ_1 loss, whereas IconVSR and ours use the Charbonnier loss <cit.>, for the pixel-based loss. We can see that our method outperforms all the previous methods in each category, even with fewer parameters (i.e., those with `-small'). Both types of models are trained and evaluated with 4× downsampled ultra-wide and wide-angle frames. The evaluation is done without test-time replacement method. As explained above, we use the wide-angle video as Ref inputs and ultra-wide videos as LR inputs. The wide-angle video frame shares only 50% of its FoV with the ultra-wide video frame. Previous studies of reference-based SR found that using Ref frames contributes to improving SR quality inside and outside the overlapped FoV. We compute PSNR and SSIM over the pixels belonging to different FoVs to analyze the dependency in image positions, following <cit.>. Table <ref> shows the results; 0%-50% indicates the overlapped FoV, and 50%-r% indicates the centered rectangular FoV having r% area of the frame minus the overlapped (0%-50%) FoV. Like other reference-based methods, our method also suffers from a performance drop in the non-overlapped FoV. Nevertheless, it still yields better performance than any other method. §.§ Qualitative Evaluation We use generated 8K videos for qualitative comparison. Figure <ref> shows selected examples of SR outputs generated by different methods. These are the results of 4× super-resolution from the ultra-wide video inputs in the dataset's training split. We select one method from each of the different categories in addition to ours, i.e., RCAN <cit.> (single-image reference SR), BasicVSR++<cit.> (video SR), and RefVSR<cit.> and ours (reference video SR). 
For RCAN and BasicVSR++, we train each using the same training setting as ours on the RealMCVSR dataset. For RefVSR, we used the pre-trained model released by the authors[ <https://github.com/codeslake/RefVSR>]. We can see that the proposed method generates the clearest textures in the overlapped FoV, including correctly reconstructed digits and letters. In image regions outside the overlapped FoV, it yields the smoothest textures with fewer unnatural artifacts. §.§ Ablation Study We conduct an ablation study to assess the effectiveness of each component of our method. We experimentally evaluate several variants of the proposed network with different components ablated. Here, we use the Charbonnier loss for simplicity. The results are shown in Table <ref>. The first variant is a baseline model, shown in the first row of Table <ref>. It has only our SR feature stream and performs basic feature aggregation: it employs only the optical-flow-based alignment to align the neighboring frames (i.e., t-1 and t) and omits the subsequent DCN-based feature alignment/refinement. Its performance (PSNR) is 35.19dB. The variant in the second row propagates a confidence map, and its feature propagation mechanism is identical to that of RefVSR <cit.>; it yields inferior performance. The next variant, in the third row, is the same network with a single feature stream but with the full feature alignment mechanism. It improves PSNR by 0.53dB, showing the effectiveness of the DCN-based feature alignment component. The fourth variant is the baseline plus the Ref feature stream. The added Ref feature stream is equipped with the full feature alignment mechanism, unlike the SR stream. Owing to the fusion of the features propagating in the two streams, it improves the baseline by 0.43dB in PSNR. This shows the effectiveness of the Ref feature stream and its fusion with the SR stream. The last variant, in the fifth row, is our full network. The difference from the fourth variant is that the SR feature stream is also equipped with the full feature alignment mechanism. It achieves the highest PSNR of 35.90dB. Figure <ref> shows qualitative comparisons of the proposed components, which agree with the above quantitative comparisons. More ablation studies are shown in the supplementary. § SUMMARY AND CONCLUSION We have proposed a new method for reference-based video super-resolution (SR) aimed at super-resolving videos captured by a smartphone's multi-camera system. The problem is formulated as super-resolving an input low-resolution (LR) video with the help of an input reference (Ref) video. We assume that the Ref video has a narrower FoV and thus contains higher-resolution images of a part of the scene. The proposed method, RefVSR++, extends RefVSR, the existing method for the problem, in two aspects. One is to add a parallel stream for aggregating Ref image features over time, in addition to the standard stream aggregating the LR and Ref features over time. This mechanism can derive richer information from the two inputs. The outputs of the two streams are integrated to produce the super-resolved image of the LR input at each time step. The other is an improved mechanism for aligning the LR and Ref features across neighboring time steps. To further improve the accuracy of feature alignment between LR and Ref images, we propose a novel method that selects better-matched local Ref features according to the alignment confidence and uses them to yield a Ref feature that is better aligned with the LR image.
This feature replacement method is designed for 8K SR generation, for which the ground truths for supervision are unavailable at training time. We showed through experiments the effectiveness of RefVSR++. ieee_fullname 10=-1pt barnes2009patchmatch Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph., 28(3):24, 2009. blau2018perception Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6228–6237, 2018. chan2021basicvsr Kelvin CK Chan, Xintao Wang, Ke Yu, Chao Dong, and Chen Change Loy. Basicvsr: The search for essential components in video super-resolution and beyond. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021. chan2022basicvsr++ Kelvin CK Chan, Shangchen Zhou, Xiangyu Xu, and Chen Change Loy. Basicvsr++: Improving video super-resolution with enhanced propagation and alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5972–5981, 2022. charbonnier1994two Pierre Charbonnier, Laure Blanc-Feraud, Gilles Aubert, and Michel Barlaud. Two deterministic half-quadratic regularization algorithms for computed imaging. In Proceedings of International Conference on Image Processing, pages 168–172, 1994. haris2019recurrent Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Recurrent back-projection network for video super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3897–3906, 2019. huang2015bidirectional Yan Huang, Wei Wang, and Liang Wang. Bidirectional recurrent convolutional networks for multi-frame super-resolution. Advances in Neural Information Processing Systems, 28, 2015. huang2017video Yan Huang, Wei Wang, and Liang Wang. Video super-resolution via bidirectional recurrent convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):1015–1028, 2017. huang2022task Yixuan Huang, Xiaoyun Zhang, Yu Fu, Siheng Chen, Ya Zhang, Yan-Feng Wang, and Dazhi He. Task decoupled framework for reference-based super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5931–5940, 2022. isobe2020video Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, and Qi Tian. Video super-resolution with recurrent structure-detail network. In Proceedings of European Conference on Computer Vision, pages 645–660, 2020. jo2018deep Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3224–3232, 2018. lee2022reference Junyong Lee, Myeonghee Lee, Sunghyun Cho, and Seungyong Lee. Reference-based video super-resolution using multi-camera video triplets. arXiv preprint arXiv:2203.14537, 2022. li2020mucan Wenbo Li, Xin Tao, Taian Guo, Lu Qi, Jiangbo Lu, and Jiaya Jia. Mucan: Multi-correspondence aggregation network for video super-resolution. In European Conference on Computer Vision, pages 335–351, 2020. sajjadi2018frame Mehdi SM Sajjadi, Raviteja Vemulapalli, and Matthew Brown. Frame-recurrent video super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6626–6634, 2018. 
shi2016real Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874–1883, 2016. tian2020tdan Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. Tdan: Temporally-deformable alignment network for video super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3360–3369, 2020. wang2021dual Tengfei Wang, Jiaxin Xie, Wenxiu Sun, Qiong Yan, and Qifeng Chen. Dual-camera super-resolution with aligned attention modules. In Proceedings of the IEEE International Conference on Computer Vision, pages 2001–2010, 2021. wang2019edvr Xintao Wang, Kelvin CK Chan, Ke Yu, Chao Dong, and Chen Change Loy. Edvr: Video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019. xie2020feature Yanchun Xie, Jimin Xiao, Mingjie Sun, Chao Yao, and Kaizhu Huang. Feature representation matters: End-to-end learning for reference-based image super-resolution. In Proceedings of European Conference on Computer Vision, pages 230–245, 2020. yang2020learning Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, and Baining Guo. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5791–5800, 2020. zhang2018rcan Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of European Conference on Computer Vision, 2018. zhang2019image Zhifei Zhang, Zhaowen Wang, Zhe Lin, and Hairong Qi. Image super-resolution by neural texture transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7982–7991, 2019. zheng2017learning Haitian Zheng, Mengqi Ji, Lei Han, Ziwei Xu, Haoqian Wang, Yebin Liu, and Lu Fang. Learning cross-scale correspondence and patch-based synthesis for reference-based super-resolution. In Proceedings of British Machine Vision Conference, 2017. zheng2018crossnet Haitian Zheng, Mengqi Ji, Haoqian Wang, Yebin Liu, and Lu Fang. Crossnet: An end-to-end reference-based super resolution network using cross-scale warping. In Proceedings of the European Conference on Computer Vision, pages 88–104, 2018. zhu2019residual Xiaobin Zhu, Zhuangzi Li, Xiao-Yu Zhang, Changsheng Li, Yaqi Liu, and Ziyu Xue. Residual invertible spatio-temporal network for video super-resolution. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5981–5988, 2019.
http://arxiv.org/abs/2307.02138v1
20230705092825
Prompting Diffusion Representations for Cross-Domain Semantic Segmentation
[ "Rui Gong", "Martin Danelljan", "Han Sun", "Julio Delgado Mangas", "Luc Van Gool" ]
cs.CV
[ "cs.CV" ]
While originally designed for image generation, diffusion models have recently been shown to provide excellent pretrained feature representations for semantic segmentation. Intrigued by this result, we set out to explore how well diffusion-pretrained representations generalize to new domains, a crucial ability for any representation. We find that diffusion-pretraining achieves extraordinary domain generalization results for semantic segmentation, outperforming both supervised and self-supervised backbone networks. Motivated by this, we investigate how to utilize the model's unique ability to take an input prompt, in order to further enhance its cross-domain performance. We introduce a scene prompt and a prompt randomization strategy to help further disentangle the domain-invariant information when training the segmentation head. Moreover, we propose a simple but highly effective approach for test-time domain adaptation, based on learning a scene prompt on the target domain in an unsupervised manner. Extensive experiments conducted on four synthetic-to-real and clear-to-adverse weather benchmarks demonstrate the effectiveness of our approaches. Without resorting to any complex techniques, such as image translation, augmentation, or rare-class sampling, we set a new state-of-the-art on all benchmarks. Our implementation will be publicly available at <https://github.com/ETHRuiGong/PTDiffSeg>. § INTRODUCTION Deep neural networks for semantic segmentation have achieved remarkable performance when trained and tested on data from the same distribution <cit.>. However, their ability to generalize to new and diverse data remains limited <cit.>. Deep semantic segmentation models are sensitive to domain shifts, which occur when the distribution of the testing (target) data differs from that of the training (source) data. This often leads to drastic performance degradation. To enhance the generalization ability of deep models to unseen scenarios, domain generalization (DG) methods employ specialized training strategies that improve the model's robustness. Additionally, as an alternative to DG, test-time domain adaptation (TTDA) aims to adapt a model trained on the source domain by utilizing only unlabelled target domain data. Diffusion models have recently achieved extraordinary results for image generation and synthesis tasks <cit.>. At the heart of the diffusion model lies the idea of training a denoising autoencoder to learn the reverse of a Markovian diffusion process. Trained on large-scale paired image-text datasets such as LAION5B <cit.>, diffusion models such as Stable-Diffusion <cit.> have demonstrated remarkable performance on image synthesis controlled by natural language. The ability of large-scale text-to-image diffusion models to produce visually stunning images with intricate details, varied content, and coherent structures, while retaining the ability to modify and compose semantics, is a remarkable breakthrough. It implies that diffusion models implicitly learn both high-level and low-level visual representations from vast collections of image-text pairs.
Recently, frozen diffusion models have therefore been shown to provide excellent feature representations for semantic segmentation <cit.>, offering an alternative to standard supervised ImageNet <cit.> or self-supervised pretraining <cit.>. In light of the success of diffusion models for supervised semantic segmentation, we are led to contemplate: How well do diffusion-pretrained semantic segmentation models generalize to unseen domains? In this paper, we first investigate this question by comparing the generalization performance of diffusion-pretraining with other popular backbones and pretraining approaches. We find that the vanilla diffusion models show exceptional generalization ability, surpassing that of all other backbones. We attribute this to the natural disentanglement of concepts that occurs in diffusion models. As illustrated in Fig. <ref>(b), these models can generate images of the same content, such as a car, under a variety of different styles and environments, e.g. real, cartoon, and night-time. Due to this disentangled representation, the segmentation head learns more domain-invariant relations between the underlying features and the scene semantics, such as `car'. When given an image from a different domain, the diffusion-based segmentation model (Fig. <ref>(b), right) is therefore able to more robustly identify and segment the object, compared to utilizing a standard backbone with a more entangled representation (Fig. <ref>(a)). These observations motivate us to explore the use of diffusion representations for DG and TTDA. One key feature that distinguishes diffusion models from other backbones for semantic segmentation is the ability to manipulate the backbone's behavior using prompt conditioning. This unique feature grants us direct control over domains, enabling us to generalize and adapt to new domains directly with parameter-efficient prompts. In this work, we aim to design simple yet effective methods for boosting DG and TTDA performance, without resorting to intricate techniques such as image translation, augmentation, or rare class sampling <cit.>. To this end, we explore how to utilize the prompt in order to achieve even better generalization, or to adapt to new domains. Domain Generalization: To improve the domain generalization ability of diffusion pretraining semantic segmentation models, we introduce category prompts and scene prompts as conditioning inputs to distinguish domain-invariant features from domain-variant ones. In addition, we propose a prompt randomization strategy to further improve the extraction and disentanglement of domain-invariant representations. This strategy ensures prediction consistency on the same image under different scene prompts, thereby enhancing the robustness of the model to domain shifts. Test-Time Domain Adaptation: In order to facilitate adaptation of diffusion pretraining semantic segmentation models to the target domain during test time, we propose utilizing the scene prompt as the modulation parameter, which can be optimized via a loss function based on pseudo-labels during inference. Prompt tuning opens a new avenue for TTDA, which is parameter-efficient and mitigates the risk of overfitting. To summarize, our contributions are four-fold: * We conduct the first analysis of the generalization performance of diffusion pretrained models for semantic segmentation, demonstrating their superior performance.
* We introduce prompt-based methods, namely scene prompt and prompt randomization, to further improve the model's domain generalization capability. * We propose a prompt tuning method to perform test-time domain adaptation of the model. * Extensive experiments on four benchmarks demonstrate the effectiveness of our proposed approach. Notably, our DG and TTDA methods achieve 61.2% and 62.0% on Cityscapes → ACDC, even surpassing the SOTA unsupervised domain adaptation (UDA) DAFormer <cit.> by over 5.8 points. § RELATED WORK Domain Generalization. Previous approaches for domain generalization can be categorized into two main strategies: 1) image augmentation and 2) feature normalization and whitening. The first strategy involves randomly stylizing or augmenting images from the source domain, a technique known as domain randomization <cit.>, to learn domain-invariant representations. The second strategy focuses on normalizing and whitening the features <cit.> to ensure robustness across different domains. In contrast to these previous methods, our approach differs by not relying on stylized or translated images or perturbed features. Instead, we solely regulate the behavior of the model backbone through the use of prompts. This new approach allows us to achieve domain generalization without the need for extensive image transformations or feature manipulations. Test-Time Domain Adaptation. Previous TTDA methods, also known as source-free domain adaptation <cit.>, often focus on tuning the parameters of batch normalization layers, which are parameter-efficient. However, this approach has limitations in terms of adaptability and compatibility with network architectures other than convolutional neural networks, such as transformers <cit.>. Alternatively, some methods optimize the entire model or its main components, such as the feature backbone <cit.>. However, such approaches tend to be parameter-heavy, making them prone to catastrophic overfitting to the noisy unsupervised learning objective, especially when the quantity of target domain data is limited. In contrast, our prompt-based method not only offers greater parameter efficiency compared to tuning batch normalization layers, but it also effectively modulates the behavior of the model's backbone. § METHOD §.§ Preliminaries Problem Statement. Test-time domain adaptation (TTDA): The objective of TTDA is to adapt a model f_θ^s, with pre-trained with parameters θ^s, on a labeled source domain dataset {^s, ^s} in order to improve the performance on the unlabeled target domain {^t}. The adaptation θ^s→θ^t is performed post-training, without access to the source domain data. Domain generalization (DG): DG aims to generalize the model f_θ^s, trained on the labeled source domain data {^s, ^s}, to the unseen target domain {^t}, but without updating the model parameters θ^s. Diffusion Models. Diffusion models learn the reverse diffusion process to effectively convert a pure noise into a sample of the learned distribution. The forward diffusion process <cit.> gradually adds Gaussian noise to the input data _0 until it follows a simple Gaussian prior distribution, _p = √(α̅_p)_0 + √(1-α̅_p)ϵ , ϵ∼(0, ) , α̅_p = ∏_q=0^pα_q . Here, _p represents the latent feature variable at the p-th timestep and {α_p} are fixed coefficients that dictate the noise schedule. Then, diffusion models employ a network ϵ_θ that reverses the forward process by training it to estimate the noise ϵ which has been added to _p in Eq. (<ref>). 
This is done by minimizing a loss of the form, 𝔼_p∼[1, P]||ϵ-ϵ_θ(_p, p; )||^2 , where represents an additional conditioning input to the network. In <cit.>, the conditioning input consists of M tokens, derived from a text or an image prompt. §.§ Diffusion-Pretraining for Semantic Segmentation A diffusion model trained for large-scale high-resolution image synthesis needs to learn both low-level (object color and texture) and high-level knowledge (object interactions and scene layout). Furthermore, text-guided diffusion models need to capture and relate the semantics conveyed in both the prompt and generated image. These observations led to the recent exploration of frozen diffusion models as the underlying pre-trained representation for semantic segmentation tasks <cit.>. The aforementioned work aims to extract and transfer the semantic relevant knowledge from the large-scale text-to-image synthesis pretraining to the downstream semantic segmentation task. The basic idea is to 1) utilize the pretrained diffusion model as the backbone network, 2) extract the visual internal representations {_i(ϵ_θ, ^s)}, and cross-attention maps {_i(_i, )} between the conditioning input and the internal visual representations, and 3) feed the extracted {_i(ϵ_θ, ^s)} and {_i(_i, )} into a learned semantic projection head , to obtain the predicted semantic segmentation map ^s, ^s = (_i(ϵ_θ, ^s), _i(_i, )) Then, the semantic projection head is trained with the standard cross entropy loss, _s = CE(^s, ^s). During the training, the diffusion model ϵ_θ is frozen and only the semantic projection head is optimized, min__s. §.§ Prompting Diffusion Representations for Domain Generalization §.§.§ Generalization Capabilities of Diffusion-Pretraining Given the remarkable success of diffusion-pretraining for semantic segmentation, a natural inquiry arises: To what extent do diffusion-pretrained segmentation models maintain their effectiveness under severe domain shift? Motivated by this question, we first set out to investigate the diffusion-pretrained segmentation model's generalization performance in face of significant domain shift. In Table <ref>, we compare popular pretraining strategies: (1) supervised pretraining for image classification on ImageNet-22k <cit.>; (2) self-supervised pretraining for pixel reconstruction on ImageNet-1k <cit.>; (3) CLIP pretraining, consisting of contrastive visual-language pairing <cit.>; and (4) diffusion-driven pretraining for text-to-image synthesis on LAION-5B <cit.>. We analyze their generalization performance on GTA→Cityscapes by comparing it to the oracle performance for each network. The oracle is the same network trained in a fully supervised manner on the target Cityscapes dataset. Note that the reported oracle mIoU values may appear lower than those in the literature on supervised learning <cit.>, as we follow the established convention of prior DG and TTDA works that downsample the Cityscapes images by a factor of two, to ensure fair comparison. In all cases, we assess the model's performance on the Cityscapes validation set by reporting the mean intersection-over-union (mIoU). To evaluate the generalization ability of each model, we present the relative mIoU compared to the oracle, following <cit.>. Interestingly, higher oracle performance does not necessarily equate to better generalization on unseen target domains. This indicates that certain backbone models struggle with overfitting and do not effectively capture domain-invariant knowledge. 
However, we observe that the model using diffusion pretraining achieves superior performance in both absolute (49.2 mIoU) and relative (65.9% of the oracle) generalization metrics. This demonstrates an exceptional generalization ability compared to other pretraining strategies. This remarkable generalization performance reached by the vanilla diffusion pretrained segmentation model, encourages us to further investigate their potential benefits in domain adaptation and generalization. The purpose of this work is to develop simple yet effective method for domain adaptation and generalization problems, without any complex tricks, such as image translation, data augmentation and class sampling. Building upon the characteristics of diffusion models discussed in Sec. <ref> and Sec. <ref>, we note that these models are distinguished by their capacity to be finely controlled by the conditioning input , derived from image or text prompts. Different from previous methods, that change the backbone behaviors by modulating specific networks layers, stylizing images or introducing additional networks, prompts tuning opens a new avenue of manipulating the backbones representation in a effective and efficient way. In the next sections, we propose novel prompt tuning methods for domain generalization and test-time domain adaptation, based on diffusion-pretrained semantic segmentation models. An overview of our prompt-based approach is depicted in Fig. <ref>. §.§.§ Category and Scene Prompt To improve the generalization ability of diffusion-pretrained segmentation models, we first introduce the category prompt _c and the scene prompt _s as the conditioning inputs = [_c; _s]. These are used to disentangle the domain-invariant features, such as object classes and scene layout, and the domain-variant features, such as object color and texture, scene style and lighting. To further harvest and utilize the domain-invariant knowledge to enhance the generalization ability, we propose the prompt randomization strategy during training, to enforce consistency of predictions on the same image and category prompt amidst varying scene prompts. Category Prompt. The category prompt is typically defined as a template of "", where "" are category names (road, sidewalk and sky for the street scene image). I.e., the category prompt only provides the class names as the guidance, to extract the domain-invariant knowledge. For instance, using the "car" class as an example, diffusion models can synthesize car images with varying attributes by providing different prompts. However, despite the diverse attribute inputs, the fundamental identity of the object as a car remains unchanged as long the prompts include "". This highlights the ability of category prompts to effectively capture knowledge related to the object's core identity, i.e. “what is a car?”, from other attributes, such as color or body type. The core identity of the object is domain-invariant and exactly what the domain generalization needs. For the C-class semantic segmentation, the category prompts are C tokens, each of which is N-dim vector. Scene Prompt. The category prompt can capture the main features of an object that stay the same across different scenarios, domain-invariant knowledge. To better extract domain invariant representations, we further condition the network on an introduced scene prompt. Our hypothesis is that the diffusion network can better extract domain-invariant representations if it is aware of the image domain. Consider e.g. 
a night photo of a street scene. It might be difficult to recognize objects, such as cars, pedestrians, and buildings in such conditions. However, by making the diffusion representation explicitly aware of the conditions through a style prompt “A dark night photo”, we believe that it can partly revoke the domain-specific effect as it will explicitly consider a night-time view of a car, pedestrian, or building. Thus, to further facilitate the extraction of domain-invariant knowledge across different domains, we introduce the scene prompt, _s. One example of scene prompt is a template "", "a GTA5 photo" or "a night photo". By combining the category prompt _c and the scene prompt _s as the conditional inputs, the predicted semantic segmentation map in Eq. (<ref>) is rewritten as, ^s = (_i(ϵ_θ, ^s), _i(_i, [_c;_s])) Note that the scene prompt _s can not only be defined as the aforementioned text template, but also be designated as a N-dim learnable prompt, or an image prompt obtained by feeding an example image into pretrained language-image encoder (CLIP <cit.>). With the category and scene prompts employed, there are M=C+1 tokens of N-dim vector in total as the conditioning inputs . §.§.§ Prompt Randomization for Domain Generalization By incorporating the category and scene prompts as conditional inputs, the diffusion-pretraining segmentation model is able to extract domain-invariant knowledge, leading to enhanced generalization ability. To further capture the domain-invariant knowledge and boost the domain generalization capabilities, we propose a prompt randomization strategy. Our idea is to enforce consistency between the semantic predictions under different scene prompts {_s^k}_k=1^K. The intuition is that a model capable of generalizing well would make similar predictions for images containing the same content, irrespective of their domain-variant attributes, such as weather or style. By feeding various scene prompts {_s^k}_k=1^K into the diffusion backbone in Sec. <ref>, the corresponding semantic segmentation maps are obtained as {^s_k}_k=1^K, where ^s_k = (_i(ϵ_θ, ^s), _i(_i, [;_k])). Then, the consistency loss, _c, between different scene prompts are formulated as, _c = ∑_p,q∈{1,...,K},q≠ p KL(^s_p||^s_q) = -∑_p,q∈{1,...,K},q≠ p^s_plog^s_p/^s_q, where KL(·||·) represents the Kullback–Leibler (KL) divergence <cit.>, which aligns the semantic prediction under different scene prompts. The complete learning objective for prompt randomization is the combination of the semantic segmentation loss _s and the consistency loss _c, written as, _total = ∑_k=1^KCE(^s_k, ^s) + λ_c. Here, λ is the hyper-parameter used to balance the semantic segmentation loss and the consistency loss, which is set to 0.1 in this work. §.§ Prompting Diffusion Representations for Test-Time Domain Adaptation §.§.§ Test-Time Domain Adaptation In Sec. <ref>, we introduce category and scene prompts to extract domain-invariant knowledge and improve the generalization ability of the diffusion-pretraining semantic segmentation model across different domains. The domain generalization strategy utilizes only the labeled source domain data ^s, ^s without accessing the unlabeled target domain data ^t. A natural next question is thus: Can the model be effectively adapted given the unlabeled target domain data during test-time, test-time domain adaptation? Following that, we examine how well the diffusion-pretraining semantic segmentation models can adapt to new domains at test-time, by leveraging ^t. 
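Before examining test-time adaptation in detail, the prompt-randomization objective of the previous subsection can be made concrete with a short sketch. This is only an illustrative reading of the loss equations, not the authors' implementation: the tensor shapes, the use of softmax probabilities for the per-prompt predictions, and the summation over ordered prompt pairs are assumptions.

import torch
import torch.nn.functional as F

def prompt_randomization_loss(per_prompt_logits, labels, lam=0.1):
    # per_prompt_logits: list of K tensors of shape (B, C, H, W), one per scene prompt.
    # labels: (B, H, W) integer class map from the labeled source domain.
    K = len(per_prompt_logits)
    # Segmentation term: cross entropy summed over the K scene prompts.
    seg_loss = sum(F.cross_entropy(logits, labels) for logits in per_prompt_logits)
    # Consistency term: KL divergence between predictions under different scene prompts.
    log_probs = [F.log_softmax(logits, dim=1) for logits in per_prompt_logits]
    probs = [lp.exp() for lp in log_probs]
    cons_loss = 0.0
    for p in range(K):
        for q in range(K):
            if p != q:
                # KL(Y_p || Y_q), summed over classes and averaged over pixels.
                kl = (probs[p] * (log_probs[p] - log_probs[q])).sum(dim=1).mean()
                cons_loss = cons_loss + kl
    return seg_loss + lam * cons_loss

# Toy usage with random tensors standing in for the model's per-prompt outputs.
B, C, H, W, K = 2, 19, 32, 32, 2
logits = [torch.randn(B, C, H, W, requires_grad=True) for _ in range(K)]
labels = torch.randint(0, C, (B, H, W))
prompt_randomization_loss(logits, labels).backward()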
Test-time domain adaptation (TTDA) presents two main challenges that must be addressed: (1) how can the source-domain initialized model be modulated effectively in light of unsupervised learning objective fraught with noise? (2) What learning objectives should be adopted to enable optimization if only unlabeled data from the target domain is provided? Our work primarily addresses challenge (1), and employs a pseudo-label based optimization objective for challenge (2) as it is proven simple yet effective in the test-time domain adaptation field. Next, we propose the prompt tuning based method for test-time domain adaptation, with diffusion pretraining semantic segmentation models. §.§.§ Prompt Tuning for Test-Time Domain Adaptation Modulation Parameters. To effectively tackle the aforementioned challenge (1) in TTDA, the main focus is on identifying the relevant parameters that need to be updated in order to control the behavior of the backbone in a desirable manner. Our diffusion pretraining models, described in Sec. <ref>, leverage the category and scene prompts to effectively control the behavior of the backbone. More specifically, the category prompt, _c, captures domain-invariant knowledge on the object core identity, shared by the source and target domains. The scene prompt, _s, introduces the domain-specific information to further help disentangeling the representation. Test-time domain adaptation involves a domain shift from the source to the target domain. Therefore, the scene prompts needs to be updated to accommodate this shift in domains. The basic idea of our prompt tuning for TTDA is to learn the scene prompt _s to facilitate adaptation from the source to the target domain. That is, the scene prompt serves as the modulation parameter, which can be updated by θ^t←θ^s :←_s+Δ_s. Learning Objective. Our test-time optimization objective _t is to tune the scene prompt supervised by the pseudo-label ^t=(_i(ϵ_θ, ^t), _i(_i, [_c;_s])), formulated as, _t = CE(^t, ^t) , _s←_s+∂_t/∂_s. The only optimized parameters during test-time is the scene prompt _s, which is a N-dim vector and set as 768-dim in this work. Thus, our prompt-tuning method for TTDA is parameter-efficient, enabling fast adaptation and helping to mitigate the risk of overfitting during the TTDA process. § EXPERIMENTS §.§ Experimental Setup and Implementation Details Datasets. We evaluate the effectiveness of our proposed prompt-based method for DG and TTDA under different scenarios, including synthetic-to-real and clear-to-adverse benchmarks. We use the conventional notation A→B to describe the DG and TTDA task, where A and B are source and target domain, respectively. Synthetic-to-Real: There are two settings, GTA <cit.> → Cityscapes <cit.> and SYNTHIA <cit.> → Cityscapes <cit.>. Clear-to-Adverse: There are also two tasks, Cityscapes <cit.> → ACDC <cit.> and Cityscapes <cit.> → Dark Zurich <cit.>. For ease of reference, we use the following abbreviations throughout the text: G→C, S→C, C→A, and C→D, respectively. More detailed description about different datasets is put in the supplementary. Implementation Details. Backbone and Semantic Segmentation Head: We use the released v1-5 version of Stable Diffusion <cit.> as the backbone, and the Semantic FPN <cit.> decoder as the segmentation head. Prompts: We leverage the publicly available pretrained ViT-L/14 CLIP <cit.> model to map text/image prompts into the feature space that can be utilized by Stable Diffusion. 
Scene Prompt: The scene prompt for prompt randomization is by default composed of two components: (1) the text description of the source domain, such as "a GTA5 photo" for the GTA dataset, and (2) the text description of the target domain, such as "a night photo" for the Dark Zurich dataset, called the text prompt version. As an alternative to (2), we also experiment with an image from the target domain, the image prompt version. The specific prompts used for each experiment are provided in the supplementary. Test-Time Domain Adaptation: The model is first pre-trained on the source domain with both the category prompt and the scene prompt (see Sec. <ref> and Source(_s) in Table <ref>), and then updated at test time with our proposed prompt tuning strategy of Sec. <ref>. §.§ Comparison with state-of-the-art Domain Generalization. In Sec. <ref>, we propose the prompt randomization strategy for DG with diffusion pretraining models. As shown in Table <ref>, our prompt randomization method outperforms previous SOTA DG methods by a significant margin. Notably, scene prompts for prompt randomization can be obtained flexibly in different forms, including text (DG-T) and image (DG-I) prompts. Both forms prove effective, significantly improving the performance of the vanilla diffusion model (Van.). Test-time domain adaptation. In Sec. <ref>, we propose the prompt tuning strategy to adapt the diffusion representation during test time. The results in Table <ref> demonstrate the superior performance of our prompt tuning method for TTDA, achieving a remarkable improvement of 5.9%, 6.5%, 2.7%, and 4.9% over existing TTDA methods on the different benchmarks. Unsupervised domain adaptation. As shown in Table <ref>, our proposed DG methods (DG-T and DG-I) and TTDA method exhibit exceptional performance, outperforming even strong unsupervised domain adaptation (UDA) methods that have access to both the source and target domains at the same time during training. Notably, our DG method achieves a performance gain of 5.8% over the strong UDA method, DAFormer, on the Cityscapes→ACDC benchmark, despite not utilizing any data from ACDC. Fig. <ref> presents qualitative comparisons among our DG, TTDA, and DAFormer results, illustrating the effectiveness of our prompt-based method. For instance, when examining the "road" class, we observe that without prompt conditioning, DAFormer segments the "road" into the upper sky region. In contrast, our category prompt conditioning, specifically "a photo of a road," helps the model learn that the "road" should appear in the lower part of the image rather than the upper sky region, thus preventing this error from occurring. §.§ Ablation Study and Prompts Analysis Different Scene Prompts Comparison. We conduct ablation experiments to evaluate the effectiveness of our proposed scene prompt in improving the domain generalization performance of diffusion pretraining models. With and Without Scene Prompt: First, we compare the performance of models with and without the scene prompt to verify its effectiveness. As shown in Table <ref>, the models with any of the different scene prompts (target, learned, and source) all outperform the baseline model without a scene prompt, achieving mIoU scores of 50.9%, 51.4%, and 51.4%, respectively, compared to 49.2% for the baseline on the GTA→Cityscapes benchmark.
Different Scene Prompts: Next, we investigate the impact of different scene prompt choices by comparing the prompts obtained from (1) a text description of the target domain, referred to as "Target"; (2) a text description of the source domain, referred to as "Source"; and (3) a learnable parameter, referred to as "Learned". Our results show that the "Source" scene prompt outperforms the "Target" and "Learned" prompts on the different benchmarks. This finding confirms our statement in Sec. <ref> that the scene prompt is used to disentangle domain-invariant knowledge and revoke the effect of domain-variant factors in the source domain. Hence, the "Source" prompt, which captures domain-variant factors in the source domain, works best. Increased Number of Class Prompts. To assess the impact of the classes in the category prompt, we conduct an experiment in which more classes are utilized in the category prompt. More specifically, in all GTA→Cityscapes experiments in this work, a category prompt consisting of 19 classes is used. To obtain a category prompt with more classes, we utilize 150 classes from the ADE20K dataset <cit.>. This expanded set of classes not only includes the 19 object classes used in our standard category prompt, but also encompasses a variety of additional classes. Our analysis in Table <ref> demonstrates that increasing the number of classes in the category prompt leads to an improvement in the generalization ability of diffusion pretraining semantic segmentation models (50.1% vs. 49.2%). This improvement can be attributed to the fact that providing a greater number of classes in the category prompt enables the diffusion representations to more accurately distinguish between a broader range of object classes, thereby mitigating the issue of mis-classification. This finding suggests a promising direction for future work to further improve the generalization ability of diffusion pretraining semantic segmentation models by incorporating more auxiliary classes into the category prompt.

Figure: Qualitative comparisons between our DG, TTDA and DAFormer <cit.> methods. DAFormer is a UDA method, which uses the full target domain data during training.

Table: Ablation study and comparisons between different prompt types, under synthetic-to-real and clear-to-adverse benchmarks. Results are evaluated on the val-set of the target domain; _s denotes the scene prompt.

Setting | w/o _s | Target (_s) | Learned (_s) | Source (_s) | TTDA | DG-T | DG-I
G→C     | 49.2   | 50.9        | 51.4         | 51.4        | 52.2 | 52.0 | 52.0
S→C     | 47.8   | 48.4        | 48.2         | 48.8        | 49.5 | 49.1 | 49.3
C→D     | 31.2   | 32.2        | 30.4         | 32.8        | 37.0 | 34.0 | 34.0
C→A     | 57.0   | 57.0        | 58.0         | 58.0        | 58.5 | 58.6 | 58.4

Prompt Randomization with Irrelevant Prompts. In our primary experiments, the scene prompt is generated from text/image prompts that are relevant to the target domain. However, to evaluate the flexibility and scalability of the scene prompt, we conduct an experiment in which the scene prompt is generated from a random, unrelated scene description. For instance, in the GTA→Cityscapes experiment, we use scene prompts such as "a sand photo," "a grass photo," "a painting photo," and "a water photo." The results presented in Table <ref> demonstrate that the prompt randomization method using irrelevant prompts performs favorably in comparison to the method using target-relevant prompts, achieving 51.7% and 52.0%, respectively.
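The TTDA results above are obtained by updating only the scene prompt on the unlabeled target data, as described in the prompt tuning subsection. The sketch below illustrates one way such an update loop could look; the stand-in segmentation module, the choice of optimizer and learning rate, and the median-confidence pseudo-label filtering are assumptions made for illustration and are not taken from the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StandInSegmenter(nn.Module):
    # Placeholder for the frozen diffusion backbone plus semantic projection head.
    def __init__(self, num_classes=19, prompt_dim=768):
        super().__init__()
        self.head = nn.Conv2d(3, num_classes, 3, padding=1)
        self.prompt_proj = nn.Linear(prompt_dim, num_classes)
    def forward(self, image, category_prompts, scene_prompt):
        # The real model conditions internal features on the prompts via cross-attention;
        # here the scene prompt simply contributes a per-class bias.
        bias = self.prompt_proj(scene_prompt).view(1, -1, 1, 1)
        return self.head(image) + bias

model = StandInSegmenter().eval()
for p in model.parameters():
    p.requires_grad_(False)                          # backbone and head stay frozen

category_prompts = torch.randn(19, 768)              # frozen prompt embeddings (stand-in)
scene_prompt = nn.Parameter(torch.zeros(768))        # the only parameter adapted at test time
optimizer = torch.optim.Adam([scene_prompt], lr=1e-3)

target_frames = [torch.randn(1, 3, 64, 64) for _ in range(4)]   # unlabeled target images
for image in target_frames:
    logits = model(image, category_prompts, scene_prompt)
    pseudo = logits.argmax(dim=1)                    # pseudo-labels from the current prediction
    conf = logits.softmax(dim=1).amax(dim=1)
    pseudo[conf < conf.median()] = 255               # keep the more confident pixels (assumed heuristic)
    loss = F.cross_entropy(logits, pseudo, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()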
§ CONCLUSION In this work, we conduct the first study on the generalization performance of a semantic segmentation model utilizing pretrained diffusion representations, demonstrating their superior performance compared to other pretraining backbones. To further enhance the model's domain generalization capability, we introduce novel prompt-based methods: the scene prompt and prompt randomization. Additionally, we propose a prompt tuning method that enables efficient and effective test-time domain adaptation of the model. Through extensive experiments conducted on four benchmarks, we validate the effectiveness of our proposed simple yet powerful approach. Limitations. The current approach employs hand-designed prompts. An interesting future direction is to leverage other large language models to automatically generate accurate prompts for our method.
http://arxiv.org/abs/2307.04761v1
20230701205401
Understanding Counterspeech for Online Harm Mitigation
[ "Yi-Ling Chung", "Gavin Abercrombie", "Florence Enock", "Jonathan Bright", "Verena Rieser" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CY" ]
Counterspeech offers direct rebuttals to hateful speech by challenging perpetrators of hate and showing support to targets of abuse. It provides a promising alternative to more contentious measures, such as content moderation and deplatforming, by contributing a greater amount of positive online speech rather than attempting to mitigate harmful content through removal. Advances in the development of large language models mean that the process of producing counterspeech could be made more efficient by automating its generation, which would enable large-scale online campaigns. However, we currently lack a systematic understanding of several important factors relating to the efficacy of counterspeech for hate mitigation, such as which types of counterspeech are most effective, what the optimal conditions for implementation are, and which specific effects of hate it can best ameliorate. This paper aims to fill this gap by systematically reviewing counterspeech research in the social sciences and comparing methodologies and findings with computer science efforts in automatic counterspeech generation. By taking this multi-disciplinary view, we identify promising future directions in both fields. § INTRODUCTION The exposure of social media users to online hate and abuse continues to be a cause for public concern. Volumes of abuse on social media continue to be significant in absolute terms <cit.>, and some claim they are rising on platforms such as Twitter, where content moderation appears at the same time to be becoming less of a priority <cit.>. Receiving abuse can have negative effects on the mental health of targets, and also on others witnessing it <cit.>. In the context of public figures, the impact on the witnesses (bystanders) is arguably even more important, as the abuse is potentially witnessed by a large number of people. In addition, politicians and other prominent actors are driven out of the public sphere precisely because of the vitriol they receive on a daily basis <cit.>, raising concerns for the overall health of democracy. Within this context, research on mechanisms for combating online abuse is becoming ever more important. One such research angle is the area of “counterspeech” (or counter-narratives): content that is designed to resist or contradict abusive or hateful content <cit.>; see also Figure <ref>. Such counterspeech (as we will elaborate more fully below) is an important potential tool in the fight against online hate and abuse, as it does not require any interventions from the platform or from law enforcement, and may contribute to mitigating the effects of abuse <cit.> without impinging on free speech.
Several civil organisations have used counterspeech to directly challenge hate, and Facebook has launched campaigns with local communities and policymakers to promote accessibility to counterspeech tools.[<https://counterspeech.fb.com/en/>] Similarly, Moonshot and Jigsaw implemented The Redirect Method, presenting alternative counterspeech or counter videos when users search queries that may suggest an inclination towards extremist content or groups.[<https://moonshotteam.com/the-redirect-method/>] The detection and generation of counterspeech is important because it underpins the promise of AI-powered assistive tools for hate mitigation. Identifying counterspeech is vital also for analytical research in the area: for instance, to disentangle the dynamics of perpetrators, victims and bystanders <cit.>, as well as determining which responses are most effective in combating hate speech <cit.>. Automatically producing counterspeech is a timely and important task for two reasons. First, composing counterspeech is time-consuming and requires considerable expertise to be effective <cit.>. Recently, large language models have been able to produce fluent and personalised arguments tailored to user expectations addressing various topics and tasks. Thus, developing counterspeech tools is feasible and can provide support to civil organisations, practitioners and stakeholders in hate intervention at scale. Second, by partially automating counterspeech writing, such assistive tools can lessen practitioners' psychological strain resulting from prolonged exposure to harmful content <cit.>. However, despite the potential for counterspeech, and the growing body of work in this area, the research agenda remains a relatively new one, which also suffers from the fact that it is divided into a number of disciplinary silos. In methodological terms, meanwhile, social scientists studying the dynamics and impacts of counterspeech <cit.> often do not engage with computer scientists developing models to detect and generate such speech <cit.> (or vice versa). The aim of this review article is to fill this gap, by providing a comprehensive, multi-disciplinary overview of the field of counterspeech covering computer science and the social sciences over the last ten years. We make a number of contributions in particular. Firstly, we outline a definition of counterspeech and a framework for understanding its use and impact, as well as a detailed taxonomy. We review research on the effectiveness of counterspeech, bringing together perspectives on the impact it makes when it is experienced. We also analyse technical work on counterspeech, looking specifically at the task of counterspeech generation, scalability, and the availability and methodology behind different datasets. Importantly, across all studies, we focus on commonalities and differences between computer science and the social sciences, including how the impact of counterspeech is evaluated and which specific effect of hate speech it best ameliorates. We draw on our findings to discuss the challenges and directions of open science (and safe AI) for online hate mitigation. We provide evidence-based recommendations for automatic approaches to counterspeech tools using Natural Language Processing (NLP). Similarly, for social scientists, we set out future perspectives on interdisciplinary collaborations with AI researchers on mitigating online harms, including conducting large-scale analyses and evaluating the impact of automated interventions. 
Taken together, our work offers researchers, policy-makers and practitioners the tools to further understand the potentials of automated counterspeech for online hate mitigation. § BACKGROUND Interest in investigating the social and computational aspects of counterspeech has grown considerably in the past five years. However, while extant work reviews the impact of counterspeech on hate mitigation <cit.>, none have systematically addressed this issue in combination with computational studies in order to synthesise social scientific insights and discuss the potential role of automated methods in reducing harms. <cit.> present a focused (2016-2018) systematic review of research into the impact of counter-narratives on prevention of violent radicalisation. They categorise the techniques employed in counter-narratives into four groups: (1) counter-stereotypical exemplars (challenging stereotypes, social schema or moral exemplars), (2) persuasion (e.g., through role-playing and emotion inducement), (3) inoculation (proactively reinforcing resistance to attitude change or persuasion), and (4) alternative accounts (disrupting false beliefs by offering different perspectives of events). The measurements of counter-narrative interventions are based on (1) intent of violent behaviour, (2) perceived symbolic/realistic group threat (e.g., perception of an out-group as dangerous), and (3) in‐group favouritism/out‐group hostility (e.g., level of trust, confidence, discomfort and forgiveness towards out-groups). They argue that counter-narratives show promise in reducing violent radicalisation, while its effects vary across techniques, with counter-stereotypical exemplars, inoculation and alternative accounts demonstrating the most noticeable outcomes. <cit.> reviews the research into the effectiveness of counterspeech, attempting to categorise different forms of counterspeech, summarise the source of influences in abusive/positive behaviour change, and elucidate the reasons which drive strangers to intervene in cyberbullying. Here, the impact of counterspeech is mostly evaluated by the people involved in hateful discussions, including hateful speakers, audiences, and counterspeakers. In comparison, we focus on what makes counterspeech effective by comprehensively examining its use based on aspects such as strategies, audience and evaluation. On the computational side, some work reviews the use of counterspeech in social media using natural language processing, including work outlining counterspeech datasets <cit.>, discussing automated approaches to counterspeech classification <cit.> and generation <cit.>, and work focusing on system evaluation <cit.>. However, work from computer sciences is not typically informed by important insights from the social sciences, including the key roles of intergroup dynamics, the social context in which counterspeech is employed, and the mode of persuasion by which counterspeech operates. Taking an interdisciplinary approach, we join work from the computer and social sciences. § REVIEW METHODOLOGY Taking a multi-disciplinary perspective, we systematically review work on counterspeech from computer science and the social sciences published in the past ten years. To ensure broad coverage and to conduct a reproducible review, we follow the systematic methodology of <cit.>. The search and inclusion process is shown in Figure <ref>. 
We used keyword terms related to counterspeech to search three key databases (ACL Anthology, ArXiv, and Scopus) that together offer a broad coverage of our target literature. We included the search terms `counter-speech', `counter-narratives', `counter-terrorism', `counter-aggression', `counter-hate', `counter speech', `counter narrative', `countering online hate speech', `counter hate speech', and `counter-hate speech'. We also included 34 publications that we had identified previously from other sources, but that were not returned by keyword search due to not including relevant keywords or not being indexed in the target search repositories. The search was conducted in December 2022. Of the returned results, we include all publications that concern (1) analysis of the use and effectiveness of interventions against hateful or abusive language online, (2) characteristics of counterspeech users or recipients, or (3) data and/or implementation designed for counterspeech (e.g., counterspeech classification or generation). These inclusion criteria were applied by two of the authors. Following this process, we include 90 papers for analysis in this review. Each of the papers was read by at least one of the co-authors of the article. Our review is divided into several sections (the results of which are presented sequentially below). First, we examine definitional characteristics of counterspeech, looking at how the term itself is defined, how different taxonomies have been created to classify different types of counterspeech, and the different potential purposes attributed to it. Then, we examine studies that have looked at the impact of counterspeech, discussing the different analytical designs employed and analysing evidence of the results. Following this, we discuss computational approaches to counterspeech, focusing in particular on both detection and generation. Finally, we examine ethical issues in the domain of counterspeech, and also speculate about future perspectives and directions in the field. § DEFINING COUNTERSPEECH Counterspeech is multifaceted and can be characterised in several different ways. In Table <ref> we outline a framework for describing and designing counterspeech, covering who (speaker) sends what kinds of messages (strategies) to whom (recipients), and for what purpose (purpose). Using this structure, we summarise how counterspeech has typically been categorised in past studies. Most studies in the field use one of three main terms: counterspeech, counter-narratives <cit.> and hope speech <cit.>. These three terms broadly refer to a similar concept: content that challenges and rebuts hateful discourse and propaganda <cit.> using non-aggressive speech <cit.>. There are some differences between the terms. <cit.> considers counter-narratives as intentional strategic communication within a political, policy, or military context. Additionally, the term counter-narrative also refers to narratives that challenge a much broader view or category such as forms of education, propaganda, and public information <cit.>. Such counter-narratives are often discussed in the context of the prevention of violent extremism. Hope speech, meanwhile, could be seen as a particular type of counterspeech: it promotes positive engagement in online discourse to lessen the consequences of abuse, and places a particular emphasis on delivering optimism, resilience, and the values of equality, diversity and inclusion <cit.>. 
In this paper, we review work that relates to all of these three concepts, and largely make use of the catch-all term counterspeech, while acknowledging the slight differences between the concepts. §.§ Classifying counterspeech Researchers have identified a variety of different types of counterspeech. Here, we outline four main ways in which counterspeech can vary, in terms of the identity of the counterspeaker, the strategies employed, the recipient of the counterspeech and the purpose of counterspeech. Counterspeakers (who) Psychological studies show that the identity of a speaker plays a key role in how large an audience their message reaches and how persuasive the message is. Common crucial factors include group identity (such as race, religion, and nationality), level of influence, and socioeconomic status. For instance, counterspeech provided by users with large numbers of followers and from an in-group member is more likely to lead to changes in the behaviour of perpetrators of hate <cit.>. Some studies characterise individuals who use counterspeech and suggest that these users exhibit different characteristics and interests than users who spread hate <cit.>. Through lexical, linguistic and psycholinguistic analysis of users who generate hate speech or counterspeech on Twitter, <cit.> find that counterspeakers are higher in agreeableness, displaying traits such as altruism, modesty, and sympathy, and display higher levels of self-discipline and conscientiousness. Possibly driven by a motive to help combat hate speech, counterspeakers tend to use words related to government, law, leadership, pride, and religion. Regarding the impact of being a counterspeaker, in an ethnographic study, members of a counterspeech campaign reported feeling more courageous and keen to engage in challenging discussions after expressing opinions publicly <cit.>. Strategies (how) Counterspeech can take many forms. <cit.> first identify eight types of counterspeech used on Twitter: (1) presentation of facts, (2) pointing out hypocrisy or contradiction, (3) warning of consequences, (4) affiliation [i.e. establishing an emotional bond with the perpetrators or targets of hate], (5) denouncing, (6) humour/sarcasm, (7) tone [a tendency or style adopted for communication, e.g., empathetic and hostile], and (8) use of media. Based on this taxonomy, follow-up studies on counterspeech make minor modifications to cover strategies in a broader scope. <cit.> analyzed and classified counterspeech on Twitter, taking <cit.>'s taxonomy but dropping the use of media and adding hostile language and positive tone, which replaces general strategy tone. Similarly, <cit.> collected and annotated counterspeech comments from Youtube, adopting <cit.>'s taxonomy but excluding tone and adding positive tone, hostile language and miscellaneous. <cit.> collaborated with NGOs to collect manually written counterspeech. For data annotation, they followed the taxonomies provided by <cit.> and <cit.>, while adding counter question and discarding the use of media. Counterspeech examples for each strategy are provided in Table <ref>. Counterspeech recipients (whom) Depending on the purpose of the counterspeech, the target audience may be perpetrators, victims or bystanders (see Figure <ref>). Identifying the appropriate target audience or ‘Movable Middle’ is crucial to maximise the efficacy of counterspeech. Movable middle refers to individuals who do not yet hold firm opinions on a topic and can hence be potentially open to persuasion. 
They are also receptive to arguments and more willing to listen. These individuals often serve as ideal recipients of messages addressing social issues such as vaccination hesitancy <cit.>. In the context of counterspeech, previous studies show that a small group of counterspeakers can shape online discussion when the audience holds moderate views <cit.>. <cit.> group counterspeech acts into four categories based on the number of people involved in the discussion: one-to-one, one-to-many, many-to-one, or many-to-many. Some successful cases where counterspeech induces favourable changes in the discourse happen in a one-to-one discussion. This allows for dedicated opinion exchange over an ideology, which in some cases even yields long-lasting changes in beliefs. The use of hashtags is a good example of one-to-many and many-to-many interaction where conversations surge quickly <cit.>. For instance, Twitter users often include hashtags to express support (e.g., #BlackLivesMatter) or disagreement with haters (e.g., #StopHate) to demonstrate their perspective. The purpose of counterspeech Hateful language online can serve to reinforce prejudice <cit.>, encourage further division, promote power of the ingroup, sway political votes, provoke or justify offline violence, and psychologically damage targets of hate <cit.>. Just as the effects of hate are wide-ranging, counterspeech may be used to fulfil a variety of purposes. ∙ Changing the attitudes and behaviours of perpetrators In directly challenging hateful language, one key aim of counterspeech can be to change the attitudes of the perpetrators of hate themselves. The strategy here is often to persuade the perpetrator that their attitudes are mistaken or unacceptable, and to deconstruct, discredit or delegitimise extremist narratives and propaganda <cit.>. Counterspeech aimed at changing the attitudes of spreaders of hate may address the hate speaker directly, countering claims with facts or by employing empathy and affiliation. Challenging attitudes is often seen as a stepping stone to altering behaviours <cit.>. In attempting to change the minds of perpetrators, counterspeakers ultimately hope to discourage associated behaviours such as sharing such content again in the future or showing support for other hateful content (i.e., stopping the spread of hate). In changing the minds of perpetrators, counterspeakers may also hope to prevent them from engaging in more extreme behaviours such as offline violence. ∙ Changing the attitudes and behaviours of bystanders More commonly, counterspeech is initiated with the intention of reaching the wider audience of bystanders rather than perpetrators of hate themselves <cit.>. These bystanders are not (at least yet) generating hateful language themselves, but rather are people exposed to hateful content either incidentally or by active engagement. Here, counterspeakers hope to persuade bystanders that the hateful content is wrong or unacceptable, again by deconstructing and delegitimising the hateful narrative. The strategy here may be to offer facts, point out hypocrisy, denounce the content, or use humour to discredit the speaker. Additionally, counterspeakers will often invoke empathy for targets of hate. In preventing bystanders from forming attitudes and opinions in line with the hateful narrative, counterspeakers hope to mitigate further intergroup division and related behaviours such as support for or engagement with additional abuse or physical violence. 
Counterspeakers may also hope to encourage others to generate rebuttals and rally support for victims <cit.>, bringing positive changes in online discourse. ∙ Showing support for targets of hate A third key way in which counterspeech functions is to show support directly to targets of hate. Online abuse can psychologically damage the wellbeing of targets and leave them feeling fearful, threatened, and even in doubt of their physical safety <cit.>. By challenging such abuse, counterspeakers can offer support to targets and encourage bystanders to do the same <cit.>. This support aims to alleviate negative emotions brought on by hate by demonstrating to targets that they are not alone and that many people do not hold the attitudes of the perpetrator. Here the particular strategies may be to denounce the hate and express positive sentiment towards the target group. Intergroup solidarity may in turn reduce retaliatory antagonism. § THE IMPACT OF COUNTERSPEECH The concrete effects of using counterspeech remain debated. The methods applied for evaluating the effectiveness of counterspeech vary considerably across studies in the field. In this section, we outline eight aspects that can help to better understand the impact of counterspeech. Research design A wide range of methodologies have been adopted to assess the impact of counterspeech on hate mitigation, including observational studies <cit.>, experimental <cit.> and quasi-experimental designs <cit.>. In observational studies, investigators typically assess the relationship between exposure to counterspeech and outcome variables of interest without any experimental manipulation. For instance, a longitudinal study of German political conversations on Twitter examined the interplay between organized hate and counterspeech groups <cit.>. There is also an ethnographic study interviewing counterspeakers on Facebook to understand external and internal practices for collectively intervening in hateful comments, such as how to build effective counterspeech action and keep counterspeakers engaged <cit.>. Experimental and quasi-experimental designs both aim at estimating the causal effects of exposure to different kinds of counterspeech on outcome variables in comparison with controls (no exposure to counterspeech).
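To make the quasi-experimental logic concrete, the sketch below illustrates one common analysis pattern: comparing the change in an outcome (here, the rate of hateful posts) for users exposed to counterspeech against the change for an unexposed control group. This is a minimal illustration only; the data structure, field names, and the difference-in-differences estimator are our own illustrative assumptions, not the design of any specific study reviewed here.

```python
# Minimal sketch (not from any reviewed study) of a pre/post, treated-vs-control
# comparison of hateful posting rates around a counterspeech intervention.

from dataclasses import dataclass
from typing import List


@dataclass
class UserRecord:
    hateful_before: int   # hateful posts in the pre-intervention window
    total_before: int     # all posts in the pre-intervention window
    hateful_after: int    # hateful posts in the post-intervention window
    total_after: int      # all posts in the post-intervention window


def hate_rate(hateful: int, total: int) -> float:
    """Proportion of a user's posts labelled hateful; 0 if the user posted nothing."""
    return hateful / total if total else 0.0


def mean_change(users: List[UserRecord]) -> float:
    """Average within-user change in hate rate (after minus before)."""
    changes = [hate_rate(u.hateful_after, u.total_after)
               - hate_rate(u.hateful_before, u.total_before) for u in users]
    return sum(changes) / len(changes)


def difference_in_differences(treated: List[UserRecord],
                              control: List[UserRecord]) -> float:
    """DiD estimate: change in the treated group minus change in the control group.
    A negative value is consistent with counterspeech reducing hateful posting."""
    return mean_change(treated) - mean_change(control)
```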
For Twitter, this can be explained by its easily accessible API (even if at the time of writing continued research access to the API was in doubt). Similarly, because of difficulties in gathering data, <cit.> resort to developing an agent-based computational model for simulating hate mitigation with counterspeech on Facebook. It is worth highlighting that none of the studies we reviewed had investigated recently popular mainstream platforms, such as Tiktok, Weibo, Telegram, and Discord. The target of hate speech Abusive speech can be addressed towards many different potential targets, and each individual hate phenomenon may require different response strategies for maximum effectiveness. Existing studies have evaluated the effectiveness of counterspeech on several hate phenomena, with Islamophobia, Islamic extremism, and racism being the most commonly addressed, while hate against LGBTQ+ community and immigrants being the least studied. In these studies, abusive content is typically identified based on two strategies - hateful keyword matches <cit.>, or user accounts (e.g., content produced by known hate speakers) <cit.>. Types of interventions A wide range of methods are exploited to design and surface counterspeech messages to a target audience. We broadly categorise these methods based on modality and approach to creation. Counter speech is generally conveyed in text <cit.> or video mode <cit.>. In both cases, counterspeech materials can be created in three different ways: written by experimenters as stimuli <cit.>, as well as written by individuals or campaigns that are collected from social media platforms <cit.>. We also found one study integrating counterspeech messages in media such as films, TV dramas and movies <cit.>. Counterspeech strategies Following the strategies summarised in Section <ref>, commonly used counterspeech strategies include facts <cit.>, denouncing <cit.>, counter-questions <cit.>, and a specific tone (humour or empathy) <cit.>. There are more fine-grained tactics for designing counterspeech in social science experiments. According to psychological studies, the use of social norms can reduce aggression and is closely related to legal regulation in society <cit.>. This tactic was tested in an intervention study where participants were exposed to counterspeech with one of the inducements of empathy, descriptive norms (e.g., Let's try to express our points without hurtful language) and prescriptive norms (e.g., Hey, this discussion could be more enjoyable for all if we would treat each other with respect.) <cit.>. <cit.> designed counterspeech based on substances rather than tactics, varying three different narratives: (1) social (seeking to establish a better society), (2) political (bringing a new world order through a global caliphate), and (3) religious (legitimising violence based on religious purposes). Considering broader counterspeech components, a few organisations further focus on challenging ideology (e.g., far-right and Islamist extremist recruitment narratives), rather than deradicalising individuals <cit.>. Counterspeech drawing from personal stories in a reflective or sentimental tone is also considered as it can resonate better with target audiences <cit.>. In addition to neutral or positive counterspeech, radical approaches are taken by counter-objecting, degrading or shaming perpetrators in public for unsolicited harmful content <cit.>. 
Types of evaluation metrics Based on <cit.>'s counterspeech Handbook, we identified the following three types of metrics used by the authors of the papers to evaluate the effectiveness of counterspeech interventions: social impact, behavioural change, and attitude change measures. ∙ Social impact metrics are (usually automated) measurements of how subjects interact with counterspeech online. Such measures include, bounce rate, exit rate,[Bounce rate is the number of users who leave a website without clicking past the landing page; exit rate measures how many people leave the site from a given section <cit.>.] geo-location analysis and the numbers of likes, views, and shares that posts receive <cit.>. For example, for one of their experiments, <cit.> measure the `click-through rates' of Facebook users redirected from hateful to counterspeech materials, while <cit.> measure retweets and deletions (in addition to behavioural change measures). Social impact measures are also applied to synthetic data by <cit.>, who measure the `likes' of their (simulated) participants as hate and counterspeech propagate through a network (as well as applying behavioural metrics). Taking a more distant, long-term view, <cit.> cite Egypt's overall success at countering radicalisation with counterspeech campaigns by comparing its position on the Global Terrorism Index with that of Pakistan. While the majority of these measurements are automated, <cit.> use survey questions to examine participants willingness to intervene against hate speech depending on the severity of the hate, the number of bystanders, and the reactions of others. Unlike the survey-based approaches described below, they do not consider changes in attitude. In addition, <cit.> assess the success of the #jagärhär counterspeech campaign (#iamhere in English, a Sweden-based collective effort that has been applied in more than 16 countries) based on the extent to which it has facilitated the emergence of alternative perspectives. ∙ Behavioural change measures reveal whether subjects change their observable behaviour towards victims before and after exposure to counterspeech, for example in the tone of their language as measured with sentiment analysis. For instance, <cit.> conduct sentiment analysis to determine the behaviour of previously xenophobic accounts after treatment with counterspeech, <cit.> measure levels of verbal aggression before and after interventions, and <cit.> assess the proportion of hate speech in online discourse before and after the intervention of an organised counterspeech group. Other such measures are those of <cit.>, who compare the number of times users violate Facebook policies before and after exposure to counterspeech, and <cit.>, who examine the likelihood of Twitter users continuing to use racial slurs following sanctions by counterspeakers of varying status and demographics. And in a network simulation experiment, <cit.> measure the effect of positive or negative (synthetic) posts on (synthetic) user behaviour. ∙ Attitude change measures are used to assess whether people (hate/counter speakers or bystanders) change their underlying attitudes or intentions through non-automated methods such as interviews, surveys, focus groups, or qualitative content analysis. 
For potential hate speech perpetrators, <cit.> use psychological testing to measure the extent to which participants legitimized violence after exposure to differing counterspeech strategies; <cit.> compare support for ISIS and other factors in participants exposed to differing counterspeech strategies and a control group; and <cit.> code user comments on hate and counterspeech videos to perform qualitative content analysis of users' attitudes. For bystanders that may be potential counterspeakers, <cit.> use a survey to examine whether counterspeech leads to increased intentions to intervene. And for those already engaged in counterspeech, <cit.> conduct interviews with members of an organised group to reveal their perceptions of the efficacy of their interventions. Effectiveness Owing to the variation in experimental setups, aims, and evaluation methods of the counterspeech efforts we review, it is not straightforward to compare their levels of success. Indeed, several of the studies concern broad long-term goals that cannot be easily evaluated at all <cit.> or provide only anecdotal evidence <cit.>. Beyond this, evidence of successful counterspeech forms a complex picture. For example, <cit.> show that organised counterspeech is effective, but can produce backfire effects and actually attract more hate speech in some circumstances. They also show that these dynamics can alter surrounding societal events, although they do not make causal claims for this. Similarly, <cit.> find mixed results, with counterspeech encouraging discussion about hate phenomena and targets in some cases, but also leading to increases in hateful comments. However, <cit.> suggest that even such confrontational exchanges can be viewed as positive signs of engagement. There is some evidence for the comparative efficacy of different counterspeech strategies. <cit.> find that three of their intervention types (`disapproval', `abstract norm', `empathy') are effective in reducing verbal violence when compared with no intervention at all. Here, empathy had the weakest effect, which they put down to the empathetic messages being specific to particular behaviours, limiting their capacity to modify aggression towards wider targets. <cit.> also found that empathy-based counterspeech can consistently reduce hate speech, although this effect is small. And <cit.> found that counterspeech that seeks to correct false information in the hate speech actually leads to higher levels of violence legitimisation, while having participants actively counter terrorist rhetoric themselves (`Tailored Counter-Narrative') was the most effective strategy to reduce this. They found counterspeech to be more effective on participants who are already predisposed to cognitive reflection. However, focusing on the effect of factual correction on the victims rather than perpetrators of hate speech, <cit.> found it to be effective in providing support and preventing them from hating back and thereby further widening the gap between groups. There is also some evidence that the numbers of the different actors involved in a counterspeech exchange can affect an intervention's success. <cit.> find that counterspeech can impact the online behaviour of (simulated) bystanders, with the effectiveness strongly influenced by the proportions of hate and counter speakers and neutral bystanders.
According to their model, a small number of counterspeakers can be effective against smaller numbers of hate speakers in the presence of larger numbers of people lacking strong opinions. <cit.> found their counterspeech strategies to be effective only for higher risk individuals within the target populations, although they did not see any of the potential negative effects of counterspeech (such as increased radicalisation) reported elsewhere. Focusing on who in particular delivers counterspeech, <cit.> finds that success of counterspeech depends on the identity and status of the speaker. However, with only a small positive effect, <cit.> found that the content of counterspeech was more important than the source. And <cit.> found that, while organised counterspeech can be effective, the efforts of individuals can lead to increases in hate speech. In <cit.>, members of #jagärhär claim that their counterspeech interventions were successful in making space for alternative viewpoints to hate speech. § COMPUTATIONAL APPROACHES TO COUNTERSPEECH In this section, we switch the focus to look at literature on counterspeech emerging from the field of computer science. We tackle three subjects in particular: the datasets being used in these studies, approaches to counterspeech detection, and approaches to counterspeech generation. §.§ Counterspeech Datasets Approaches for counterspeech collection focus on gathering two different kinds of datasets: spontaneously produced comments crawled from social media platforms, and deliberately created responses aiming to contrast hate speech. In the first case, content is retrieved based on keywords/hashtags related to targets of interest <cit.> or from pre-defined counterspeech accounts <cit.>. In principle, due to the easily accessible API required for data retrieval, the majority of datasets are collected from social media platforms including Twitter <cit.>, and only a few are retrieved from Youtube <cit.> and Reddit <cit.>, respectively (though again it is worth noting that at the time of writing the Twitter API was starting to become a lot less accessible). In the second category, counterspeech is written by crowd workers <cit.> or operators expert in counterspeech writing <cit.>. While such an approach is expected to offer relatively controlled and tailored responses, writing counterspeech from scratch is time-consuming and requires human effort. To address this issue, advanced generative language models are adopted to automatically produce counterspeech <cit.>, as we will discuss further below. Regarding granularity of taxonomies, most existing datasets provide binary annotation (counterspeech/non-counterspeech) <cit.>, while three datasets feature annotations of the types of counterspeech <cit.>. In terms of hate incidents, datasets are available for several hate phenomena such as islamophobia <cit.> and East Asian prejudice during COVID-19 pandemic <cit.>. The aforementioned datasets are mostly collected and analyzed at the level of individual text, not at discourse or conversations (e.g., multi-turn dialogues <cit.>). Most of the datasets are in English, while only a few target multilinguality, including Italian <cit.>, French <cit.>, German <cit.>, and Tamil <cit.>. §.§ Approaches to Counterspeech Detection Previous work on counterspeech detection has focused on binary classification (i.e. whether a text is counterspeech or not) <cit.> or identifying the types of counterspeech as a multi-label task <cit.>. 
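As a concrete illustration of the binary detection setup described above, the sketch below fine-tunes a generic pretrained encoder as a counterspeech classifier. The model name, the two toy examples, and the training settings are placeholders rather than choices made in any of the reviewed systems; in practice an annotated counterspeech corpus would be used.

```python
# Minimal sketch of binary counterspeech detection with a pretrained encoder.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

MODEL_NAME = "bert-base-uncased"  # placeholder encoder

texts = [
    "That claim about this community is simply false; here are the facts.",  # counterspeech
    "They should all be banned from this country.",                          # not counterspeech
]
labels = [1, 0]  # 1 = counterspeech, 0 = not counterspeech

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)


class PairDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels for the Trainer API."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cs-detector", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=PairDataset(texts, labels),
)
trainer.train()  # in practice, train and evaluate on a full annotated corpus
```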
Automated classifiers are developed to analyse large-scale social interactions of abuse and counterspeech addressing topics such as political discourse <cit.> and multi-hate targets <cit.>. Moving beyond monolingual study, <cit.> evaluate the performance of pre-trained language models for categorising counterspeech strategy for English, Italian and French in monolingual, multilingual and cross-lingual scenarios. §.§ Approaches to Counterspeech Generation Various methodologies have been put forward for the automation of counterspeech generation <cit.>, addressing aspects including the efficacy of a hate countering platform <cit.>, informativeness <cit.>, multilinguality <cit.>, politeness <cit.>, and grammaticality and diversity <cit.>. These methods are generally centred on transformer-based large language models (e.g., GPT-2 <cit.>). By testing various decoding mechanisms using multiple language models, <cit.> find that autoregressive models combined with stochastic decoding yield the best counterspeech generation. In addition to tackling hate speech, there are studies investigating automatic counterspeech generation to respond to trolls <cit.> and microaggressions <cit.>. Evaluation of counterspeech generation Assessing counterspeech generation is complex and challenging due to the lack of clear evaluation criteria and robust evaluation techniques. Previous work evaluates the performance of counterspeech systems via two aspects: automatic metrics and human evaluation. Automatic metrics generally evaluate the generation quality based on criteria such as linguistic surface <cit.>, novelty <cit.>, and repetitiveness <cit.>. Despite being scalable, these metrics are hard to interpret and can only infer model performance relative to the references provided (e.g., depending heavily on exact word usage and word order), and gathering an exhaustive list of all appropriate counterspeech responses is not feasible. For this reason, such metrics cannot properly capture model performance, particularly for open-ended tasks <cit.> including counterspeech generation. As a result, human evaluation is heavily employed based on aspects such as suitability, grammatical accuracy and relevance <cit.>. Despite being trusted and high-performing, human evaluation has inherent limitations such as being costly, difficult (e.g., evaluator biases and question formatting), and time-consuming (both in terms of evaluation and moderator training), and can be inconsistent and inflict psychological harm on the moderators. The effectiveness of generated counterspeech should also be carefully investigated `in-the-wild' to understand its social media impact, reach of content, and the dynamics of hateful content and counterspeech (see Section <ref>). To our knowledge, no work has examined this line of research yet. Potentials and limits of existing generative models We believe that in some circumstances counterspeech may be a more appropriate tool than content moderation in fighting hate speech as it can depolarise discourse and show support to victims. However, automatic counterspeech generation is a relatively new research area. Recent progress in natural language processing has made large language models a popular vehicle for generating fluent counterspeech. However, counterspeech generation currently faces several challenges that may constrain the development of efficient models and hinder the deployment of hate intervention tools.
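The basic generation recipe discussed above, an autoregressive language model with stochastic decoding, can be sketched as follows. The model name, prompt format, and decoding hyperparameters are illustrative assumptions only, not the configuration of any specific reviewed system; in line with the deployment caveats below, the output is best treated as a set of candidate suggestions for a human counterspeaker to review.

```python
# Minimal sketch of counterspeech generation with nucleus (top-p) sampling.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "gpt2"  # placeholder autoregressive language model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

hate_post = "People like them don't belong in this city."
prompt = f"Hate speech: {hate_post}\nCounterspeech:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,          # stochastic decoding
    top_p=0.9,               # nucleus sampling
    temperature=0.8,
    max_new_tokens=60,
    num_return_sequences=3,  # several candidates for a human moderator to choose from
    pad_token_id=tokenizer.eos_token_id,
)

candidates = [tokenizer.decode(o, skip_special_tokens=True)[len(prompt):].strip()
              for o in outputs]
for c in candidates:
    print("-", c)
```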
Similar to the use of machine translation and email writing tools, we advocate that counterspeech generation tools should be deployed as suggestion tools to assist in hate countering activity <cit.>. ∙ Faithfulness/Factuality in generation Language models are repeatedly reported to produce plausible and convincing but not necessarily faithful/factual statements <cit.>. We refer to faithfulness as being consistent and truthful in adherence to the given source (i.e. model inputs) <cit.>. Many attempts have been made to mitigate this issue <cit.>, including correcting unfaithful data <cit.>, augmenting inputs with additional knowledge sources <cit.>, and measuring faithfulness of generated outputs <cit.>. We encourage reporting the faithfulness/factuality of models. ∙ Toxic degeneration Language models can also produce unintentionally biased and/or toxic content, regardless of whether explicit prompts are used <cit.>. In the use case of counterspeech generation, this can result in harm to victims and bystanders as well as risk provoking perpetrators into further abusive behaviour. This issue has been mitigated by two approaches: data and modelling. The data approach aims at creating proper datasets for fairness by removing undesired and biased content <cit.>. The modelling approach focuses on controllable generation techniques that, for instance, employ humans for post-editing <cit.> or apply detoxification techniques <cit.>. ∙ Generalisation vs. Specialisation With the rise of online hate, models that can generalise across domains would be helpful for producing counterspeech involving new topics and events. Generalisable methods can also reduce the time and manual effort required for collecting and annotating data. However, as discussed in Section <ref>, counterspeech is multifaceted and contextualised. There may not be a one-size-fits-all solution. For instance, abuse against women can often be expressed in more subtle forms such as microaggressions. It may, therefore, be difficult to implement an easy yet effective counterspeech strategy in one model. Moreover, model generalisability is challenging <cit.>, and can have potential limitations <cit.>. Finding the right trade-off between generalisation and specialisation is key. § FUTURE PERSPECTIVES Across the many promising abuse intervention experiments that we review, results are not always consistent, often demonstrating weak effects or limited success (applicable only to certain settings). Possible reasons include short-term experiments, small sample sizes and non-standardised experimental designs. To improve this, effective interventions should come with the characteristics of scalability, durability, reliability, and specificity. In this section, we highlight key distinctions and overlaps across areas that have and have not been explored in social sciences and computer science, discuss ethical issues related to evaluating counterspeech in real-life settings and automating the task of counterspeech generation, and identify best practices for future research. Distinctions and overlaps across areas By recognizing the commonalities and differences between social sciences and computer science, we pinpoint the unique contributions of each discipline and encourage interdisciplinary collaborations to address complex societal challenges and better understand human behaviour with the help of computational systems. ∙ Terminological clarity. Throughout the counterspeech literature, terminology is used inconsistently.
Terms such as counterspeech and counter-narratives are often used interchangeably or used to refer to similar concepts. In social science, counterspeech is used to refer to content that disagrees with abusive discourses, while counter-narratives often entail criticism of an ideology with logical reasoning. As a result, counter-narrative stimuli designed in social experiments are generally long form <cit.>. In computer science on the other hand, the distinctions between counterspeech and counter-narratives have been vague, and training data is generally short form (though this may be bound by character limits on social media platforms). For instance, short and generic responses such as `How can you say that about a faith of 1.6 billion people?' can be commonly found in counter-narrative datasets <cit.>. ∙ The focus of evaluation. Social scientists and counterspeech practitioners generally attempt to understand and assess the impact of counterspeech on reducing harms (e.g., which strategies are effective and how the public perceives counterspeech), whereas computer scientists focus more on technical exploration of automated systems and testing their performance in producing counterspeech (e.g., comparing system outputs with a pre-established ground truth or supposedly ideal output). One commonality between the social science and computer science studies is that most findings are drawn from controlled and small-scale studies. Applying interventions to real-world scenarios is a critical next step. ∙ Datasets. Dataset creation is an important component in computer science for developing machine learning models for generating counterspeech, while such contributions are less commonly considered in social sciences, which rely on experiments using hand-crafted stimuli and one-time analyses of their effectiveness. ∙ Scope of research. We observe that, while computer scientists have focused on responses to abusive language and hate speech, social science studies address a wider range of phenomena, in particular radicalisation and terrorist extremism. It can be difficult to measure the effectiveness of counterspeech in challenging these over the short term, leading to some of the differences in evaluation metrics across disciplines. ∙ Lack of standardised methodologies. A variety of methodologies have been adopted in the literature, making comparisons across studies difficult. Without standardised evaluations, it is difficult to situate the results and draw robust findings. Ethical Issues, Risks and Challenges of Conducting Counterspeech Studies Effective evaluation of counterspeech not only identifies users who may need help, but also safeguards human rights and reinforces a stronger sense of responsibility in the community. This discussion is based on the authors' opinions and does not stem directly from the review. ∙ Evaluating counterspeech in real-life settings Conducting the evaluation of counterspeech in real-world scenarios appears to provide a proactive and quick overview of its performance in hate mitigation. However, from an ethical perspective, the debate surrounding it is ongoing and reaching an agreement can be difficult. For instance, one side argues about the morality of exposing participants to harm, while another points to the importance of internet safety. Exercising counterspeech can offer mitigation of online abuse in good faith and may be exempt from liability on several legal grounds. As an example, Good Samaritan laws provide indemnity to people who assist others in danger <cit.>.
In 2017 the EU Commission released a communication on tackling illegal content online, stating that `This Communication ... aims to provide clarifications to platforms on their liability when they take proactive steps to detect, remove or disable access to illegal content (the so-called “Good Samaritan” actions)' <cit.>. Section 230(c)(2) of Title 47 of the United States Code extends this protection to the good faith removal or moderation of third-party material that services deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”, and is thus relevant to liability for moderating online hate speech. It protects online computer services from liability for moderating third-party materials that are harmful <cit.>. The aim of these safeguards is to ensure that individuals are not hesitant to help others in distress due to the fear of facing legal consequences if they unintentionally make errors in their efforts to provide support. Responsible open-source research can facilitate reproducibility and transparency of science. Recently, reproducible research has been deemed critical in both social sciences <cit.> and computer science, and low replication success has been found even when using materials provided in the original papers <cit.>. To tackle this issue, a few initiatives for transparent research have been proposed, advocating that researchers state succinctly in papers how experiments are conducted (e.g., stimuli, mechanisms for data selection) and evaluated, including A 21 Word Solution <cit.> and the Open Science Framework.[<https://osf.io/>] Furthermore, practising data sharing encourages researchers to be responsible for fair and transparent experimental designs, and to avoid subtle selection biases that might affect substantive research questions under investigation <cit.>. At the same time, when handling sensitive or personal information, data sharing should adhere to research ethics and privacy standards <cit.>. For instance, in the case of hate speech, using synthetic examples or de-identification techniques is considered a good general practice for ensuring the safety of individuals <cit.>. ∙ Automating counterspeech generation There are several ethical challenges related to automating the task of counterspeech generation. First of all, there is the danger of dual-use: the same methodology could also be used to silence other voices. Furthermore, effective and ethical counterspeech relies on the accuracy and robustness of detecting online hate speech: an innocent speaker may be publicly targeted and shamed if an utterance is falsely classified as hate speech, either directly or indirectly as in end-to-end response generation. For example, Google's Jigsaw API <cit.>, a widely used tool for detecting toxic language, makes predictions that are aligned with racist beliefs and biases; for example, it is less likely to rate anti-Black language as toxic, but more likely to mark African American English as toxic <cit.>. It is thus important to make sure that the underlying tool is not biased and is well calibrated to the likelihood that an utterance was indeed intended as hate speech. For example, the `tone' of counterspeech could be used to reflect the model's confidence. A related question is free speech: what counts as acceptable online behaviour, what sort of speech is deemed inappropriate, in which contexts, and what should be targeted by counterspeech?
A promising direction for answering this complex question is participatory design to empower the voices of those who are targeted <cit.>. In sum, there is a trade-off between risks and benefits of counterspeech generation. Following the logic of `Good Samaritan' laws, automating counterspeech provides timely help to victims in an emergency and is protected against prosecution (even if it goes wrong). Similar legislation has been adopted in other jurisdictions, including the European Union, Australia and the UK. Under this interpretation, well-intentioned counterspeech (by humans and machines) is better than doing nothing at all. Best practices We provide best practices for developing successful intervention tools. * Bear in mind practical use cases and scenarios of hate-countering tools. A single intervention strategy is unlikely to diminish online harm. To design successful counterspeech tools, it is important to consider the purposes of counter messages (e.g., support victims and debunk stereotypes), the speakers (e.g., practitioners, authorities and high-profile people), recipients (e.g., ingroup/outgroup, political background and education level), the content (e.g., strategy, style, and tones), intensity (e.g., one message per week/month), and the communication medium (e.g., videos, text, and platforms). * Look beyond automated metrics and consider deployment settings for evaluating the performance of generation systems. Generation systems are generally evaluated on test sets in a controlled environment using accuracy-based metrics (e.g., ROUGE and BLEU) that cannot address the social implications of a system. Drawing on social science studies, metrics assessing social impact (e.g., user engagement), behavioural change (e.g., measuring abuse reduction in online discourse) and attitude change (e.g., through self-description questionnaires) can be considered. A good intervention system is expected to have long-lasting effects. * Be clear about the methodology employed in experiments, open-source experimental materials (e.g., stimuli, questionnaires and codebook), and describe the desirable criteria for evaluating counterspeech interventions. As standardised procedures are not yet established for the assessment of counterspeech interventions, examining the impact of interventions becomes difficult. A meaningful description of experimental design would therefore enhance reproducible research and help capture the limitations of existing research. * Establish interdisciplinary collaboration across areas such as counter-terrorism, political science, psychology and computer science. AI researchers can help guide policymakers and practitioners to, for instance, identify long-term interventions by performing large-scale data analysis using standardized procedures on representative and longitudinal samples. With expertise in theories of human behaviour change and experimental design, social science researchers can conduct qualitative evaluations of AI intervention tools in real-life scenarios to understand their social impact. § CONCLUSION Online hate speech is a pressing global issue, prompting scientists and practitioners to examine potential solutions. Counterspeech, content that directly rebuts hateful content, is one promising avenue.
While AI researchers are already beginning to explore opportunities to automate the generation of counterspeech for the mitigation of hate at scale, research from the social sciences points to many nuances that need to be considered regarding the impact of counterspeech before this intervention is deployed. Taking an interdisciplinary approach, we have attempted to synthesize the growing body of work in the field. Through our analysis of extant work, we suggest that findings regarding the efficacy of counterspeech are highly dependent on several factors, including methodological ones such as study design and outcome measures, and features of counterspeech such as the speaker, target of hate, and strategy employed. While some work finds counterspeech to be effective in lowering further hate generation from the perpetrator and raising feelings of empowerment in bystanders and targets, others find that counterspeech can backfire and encourage more hate. To understand the advantages and disadvantages of counterspeech more deeply, we suggest that empirical research should focus on testing counterspeech interventions in real-world settings which are scalable, durable, reliable, and specific. Researchers should agree on key outcome variables of interest in order to understand the optimal social conditions for producing counterspeech at scale by automating its generation. We hope that this review helps make sense of the variety of types of counterspeech that have been studied to date and prompts future collaborations between social and computer scientists working to ameliorate the negative effects of online hate. § ACKNOWLEDGEMENTS We thank Bertie Vidgen for the valuable feedback on the initial structure of this manuscript and Hannah Rose Kirk for her help with the collection of target literature. apalike
http://arxiv.org/abs/2307.02317v1
20230705141924
The Giroux correspondence in arbitrary dimensions
[ "Joseph Breen", "Ko Honda", "Yang Huang" ]
math.SG
[ "math.SG", "57R17" ]
http://arxiv.org/abs/2307.00791v1
20230703071543
Proof of avoidability of the quantum first-order transition in transverse magnetization in quantum annealing of finite-dimensional spin glasses
[ "Mizuki Yamaguchi", "Naoto Shiraishi", "Koji Hukushima" ]
quant-ph
[ "quant-ph", "cond-mat.dis-nn", "cond-mat.stat-mech" ]
Proof of avoidability of the quantum first-order transition in transverse magnetization in quantum annealing of finite-dimensional spin glasses
Mizuki Yamaguchi (yamaguchi-q@g.ecc.u-tokyo.ac.jp), Naoto Shiraishi, and Koji Hukushima; Graduate School of Arts and Sciences, The University of Tokyo, Meguro-ku 3-8-1, Tokyo, 1530041, Japan
It is rigorously shown that an appropriate quantum annealing for any finite-dimensional spin system has no quantum first-order transition in transverse magnetization. This result can be applied to finite-dimensional spin-glass systems, where the ground state search problem is known to be hard to solve. Consequently, it is strongly suggested that the quantum first-order transition in transverse magnetization is not fatal to the difficulty of combinatorial optimization problems in quantum annealing.
§ INTRODUCTION Solving combinatorial optimization problems efficiently is a major topic in theoretical computer science. From the viewpoint of physics, this problem can be described as energy minimization with a given classical spin system. Inspired by this, the simulated annealing <cit.> and the quantum annealing (QA) <cit.> were invented as generic heuristic methods for solving these optimization problems. Given a classical Hamiltonian Ĥ_J of system size N and for the set of quenched coupling constants J, the QA solves the energy minimization problem as follows: we set the Hamiltonian of the quantum system as Ĥ(γ) = Ĥ_J + γD̂, where D̂ is noncommutative with Ĥ_J and γ is a parameter controlled in QA. This Hamiltonian D̂ is called the driver Hamiltonian and has a known ground state. Starting with the system with large γ and varying γ slowly to γ=0, one would expect the ground state of Ĥ(γ) to change from the known ground state of D̂ to the desired solution state, the ground state of Ĥ_J. The quantum adiabatic theorem <cit.> guarantees that the desired ground state is attainable by taking a sufficiently long time to change γ. A natural question raised here is whether this process is time efficient, i.e., whether it takes only polynomial time of the system size N to reach the true ground state of Ĥ_J. Roughly speaking, the time cost is proportional to the inverse square of the minimum gap between the ground energy and the first excited energy of the Hamiltonian in the annealing process <cit.>. The first proposal of QA employed the transverse magnetic term -M̂_x = -∑_i σ̂_i^x as the driver Hamiltonian D̂, and argued based on numerical experiments that the QA with this driver Hamiltonian is superior to simulated annealing <cit.>. On the other hand, this type of QA was shown to fail in the p-spin model (p-body mean-field ferromagnetic model with p>2) <cit.>, where the gap is exponentially small and the computation time is exponentially large. In the case of the p-spin model, the transverse magnetization undergoes a discontinuous jump in the annealing process, which we call the quantum first-order transition in transverse magnetization (QFOT for short) analogous to the first-order transition in thermodynamics. This is a challenging phenomenon for QA, since the sudden change in the transverse magnetization causes an exponential collapse of the gap, implying the failure of annealing. The fundamental origin of the failure of the QA for hard optimization problems is under discussion in this field.
One possible argument is that the failures are mainly due to the QFOT <cit.>, based on the observed fact that the QFOT frequently appears when the QA fails <cit.>. In contrast, another argument is that the failure of the QA has origins other than the QFOT. For example, it was reported in Ref <cit.> that QA in some models shows exponentially small gaps at points other than the point of the QFOT. However, the latter observation is based on specific models and does not provide a general argument on the relation between the QFOT and the failure of the QA. To resolve this controversy, we adopt the QA with antiferromagnetic fluctuations (QA-AFF) first proposed in Ref <cit.> and set the entire Hamiltonian as Ĥ(γ) = Ĥ_J - γM̂_x + (k/N)(M̂_x)^2, where k is a positive constant. The additional fluctuations term makes the QFOT more avoidable. Previous studies based on specific models show that in the ferromagnetic p-spin model (without quenched randomness) and the Hopfield model, the QA-AFF succeeds in avoiding the QFOT under some conditions, while the QFOT appears unavoidable under others <cit.>. In spite of these previous investigations, the potential effectiveness of the QA-AFF in general systems has not yet been uncovered. In this paper, we rigorously prove that the QA-AFF for finite-dimensional spin-glass systems avoids the QFOT. Our result applies to general finite-dimensional spin-glass systems as long as i.i.d. quenched random variables are used. Even for systems that exhibit the QFOT under the conventional QA with only the transverse magnetic field, the addition of the antiferromagnetic fluctuations term always removes this singularity and makes the transverse magnetization in this QA continuous as a function of γ. In other words, the QFOT in QA can be completely avoided by adding antiferromagnetic fluctuations. We note that the search for the ground state of three-dimensional Ising spin-glass systems is considered to be a computationally hard task since it belongs to the NP-hard class <cit.>, which is a class of the most difficult combinatorial optimization problems. It is believed that even quantum computers cannot solve NP-hard problems efficiently, which suggests that the QA-AFF in fact fails at some point. Based on our findings, we assert that the QFOT is not fatal to the difficulty of combinatorial optimization problems in QA. In our proof, self-averaging plays a pivotal role in deriving the absence of the singularity. First, we show that the ground energy is self-averaging in the conventional QA only with the transverse magnetic field. There remains the possibility that the function obtained by taking the average over the quenched randomness has a singularity (i.e., indifferentiable points). Then, it can be shown that the addition of the antiferromagnetic fluctuations term does indeed remove the singularity for any finite-dimensional system. However, for some long-range interaction systems, this claim may not hold, which is consistent with the previous studies showing that the QFOT cannot be avoided in the mean-field models. This paper is organized as follows. In Section <ref>, we explain our setup and main claim. In Section <ref>, we describe the outline of the proof, presenting several key ideas in this proof. In Section <ref>, which consists of four subsections, we prove the main theorem. We briefly review the Legendre transformation in Subsection <ref>. Subsections <ref> and <ref> are devoted to the investigation of self-averaging for a fixed parameter and uniform self-averaging for a function, respectively.
We finally introduce the antiferromagnetic fluctuations in Subsection <ref>, which completes the proof of the avoidance of the QFOT. § SETUP AND MAIN CLAIM §.§ Setup We deal with the energy minimization problem of a classical spin-1/2 system on a finite-dimensional lattice with N spins. Neighboring spins σ̂_i^z and σ̂_j^z interact with each other through the coupling constant J_ij, which is a quenched random variable. The Hamiltonian of this classical system is thus expressed as Ĥ_J = - ∑_⟨i,j⟩ J_ij σ̂_i^z σ̂_j^z, where the set of coupling constants J_ij is denoted simply by J. To solve the energy minimization problem by QA, we add a transverse magnetic field with strength γ∈ (-∞,∞), which leads to the following Hamiltonian for the quantum system: Ĥ(γ) = Ĥ_J - γM̂_x = Ĥ_J - γ∑_i σ̂_i^x. As discussed in detail in Sec. <ref>, dealing with the QA-AFF, Ĥ_J in this formula is replaced by Ĥ_J + (k/N)(M̂_x)^2. Throughout this paper, we consider models in finite dimension with Ĥ_J generated by a shift-invariant probability distribution for quenched random variables. We here clarify the meaning of finite dimension for later use. If the lattice is placed in ℝ^n space and each site interacts only with its neighbors, the notion of dimension causes no confusion. The problem may arise when distant sites also interact with forces that decay with distance. To cover these systems, we define the finite dimensionality for D≥ 2 with D being the spatial dimension as follows (in one dimension, the same argument holds but the order evaluation of the result differs): Definition (Finite dimensionality). We say that a Hamiltonian (or a system) is in a finite dimension if the following two conditions are satisfied: * For any site i, the sum of interactions with i is bounded from above as ∑_j √([J_ij^2]) ≤ c_1, where [·] is the random average with respect to the quenched randomness (see Subsection <ref> for details), and c_1 is a constant independent of the system size N. * For any size A of the system, there exists a decomposition of sites into B = N/A subsystems of size A denoted by A[1], …, A[B], such that the sum of interactions between the inside and outside of any subsystem A[k] is bounded from above as ∑_i ∈ s(A[k]) ∑_j ∉ s(A[k]) √([J_ij^2]) ≤ c_2 A^1-1/D. Here s(A[k]) is the set of the sites in A[k], and c_2 is a constant independent of the size of the system and the subsystems. We also clarify the meaning of a shift-invariant probability distribution for quenched random variables. Let P_ij(J_ij) be the probability distribution for the coupling constants J_ij. This probability distribution is shift-invariant if P_ij=P_kl holds for any i,j,k,l such that 𝐫_i-𝐫_j=𝐫_k-𝐫_l, where 𝐫_i is a D-dimensional vector representing the lattice position of site i. If the lattice consists of several sublattices, the above characterization further requires that i and k belong to the same sublattice. The system without quenched randomness is considered a special case of a shift-invariant system. §.§ Main result We shall define the quantum first-order transition in transverse magnetization (QFOT) in systems with quenched random variables. We first provide the definition of the absence of the QFOT for a system without quenched randomness, i.e., Ĥ_J is deterministically constructed depending on N. In this case, we say that the QA does not suffer the QFOT if the transverse magnetization density ⟨g(γ)|M̂_x|g(γ)⟩/N of a ground state |g(γ)⟩ of Ĥ(γ) converges uniformly to a continuous function m_*(γ) in the thermodynamic limit (N →∞): lim_N→∞ sup_γ∈[0,∞) | ⟨g(γ)|M̂_x|g(γ)⟩/N - m_*(γ) | = 0. Here, the ground state of Ĥ(γ) is implicitly assumed to be unique.
If the ground state degenerates, we require that our condition be satisfied for any state |g⟩ in the state space of minimum energy G(γ) = argmin_|ψ⟩ ⟨ψ|Ĥ(γ)|ψ⟩. Or equivalently, the above definition can be replaced by lim_N→∞ sup_γ∈[0,∞) max_|g⟩∈G(γ) | ⟨g|M̂_x|g⟩/N - m_*(γ) | = 0. In the case of spin glasses with quenched randomness, we shall define the QFOT as its typical behavior, replacing the convergence in the above definition with stochastic convergence (especially convergence in mean square). We say that a given QA shows no quantum first-order phase transition in transverse magnetization if there exists a continuous function m_*(γ) such that lim_N→∞ sup_γ∈[0,∞) [ ( ⟨g(γ)|M̂_x|g(γ)⟩/N - m_*(γ) )^2 ] = 0. Here, [·] means the average over the quenched random variables. We remark that the function m_*(γ) is independent of the quenched random variables. Thus, the above definition states that for almost all J the transverse magnetization density in the ground state converges to the same function. We prove that the QFOT in the above definition is always avoidable in finite-dimensional spin-glass systems. For any classical Hamiltonian Ĥ_J in finite dimension with quenched random variables sampled from a shift-invariant probability distribution, the QA-AFF for this Hamiltonian does not have the QFOT. This is the main result of this paper. Notice that this theorem only describes the absence of a jump in the transverse magnetization density during the QA process, and does not evaluate the size of the energy gap that determines the annealing time required for successful computation. §.§ Remark We here put two remarks related to our main result. The first remark is on the connection to the computational hardness, in particular computational complexity. It is known that the energy minimization problem in three-dimensional spin glass is an NP-hard problem. Thus, from the perspective of computational complexity, the finite-dimensional spin glass with D≥ 3 is as complex as the Sherrington-Kirkpatrick model. Therefore, our subject is indeed the efficiency of QA for hard combinatorial optimization problems. This paper rules out the possibility that the difficulty in QA for finite-dimensional spin glasses lies in the QFOT. Our result, however, does not state that the QA-AFF actually succeeds as a method for solving combinatorial optimization problems and that NP ⊆ BQP in terms of computational complexity. A more plausible scenario would be that the QA-AFF suffers from causes other than the QFOT. We will discuss this point in detail in Section <ref>. If we apply a constant time per unit of γ, then the QA-AFF takes infinite time to execute, since γ is defined on [0, ∞). So, if we take Ĥ_QA-AFF(t;T) = (t/T)(Ĥ_J + (k/N)(M̂_x)^2) - (1 - t/T)M̂_x, scheduling in t ∈ [0,T] is accomplished. Here, Ĥ_QA-AFF(t;T) is proportional to Ĥ((T-t)/t), so properties such as the existence of the QFOT are common to the two parametrizations. The second remark is on the form of Ĥ(γ) in the QA-AFF. In the QA-AFF, the Hamiltonian at γ = 0 is Ĥ(0) = Ĥ_J + (k/N)(M̂_x)^2, not the classical spin glass Hamiltonian we wish to solve. However, this discrepancy does not matter for the success or the failure of the QA for the following reason: if the coupling constants do not take continuous real numbers but discrete numbers of decimal places, by setting k smaller than the smallest unit of energy and measuring the final state with the computational basis, we can observe one of the true ground states with a finite probability. In fact, a three-dimensional spin glass with integer coupling constants is a similarly difficult sort of problem and has been proven to be NP-hard.
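As a purely illustrative complement to the statements above (and not part of the proof), the following sketch exactly diagonalizes the QA-AFF Hamiltonian Ĥ(γ) = Ĥ_J - γM̂_x + (k/N)(M̂_x)^2 for a toy random-coupling chain and tracks the transverse magnetization density and the spectral gap as γ is varied. The chain geometry, the Gaussian couplings, and the value of k are arbitrary choices, and exact diagonalization restricts the example to very small N.

```python
# Toy exact-diagonalization illustration of the QA-AFF transverse magnetization.
import numpy as np

N = 6            # number of spins (kept tiny for exact diagonalization)
k = 1.0          # assumed strength of the antiferromagnetic-fluctuations term
rng = np.random.default_rng(0)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)


def site_op(op, i):
    """Embed a single-site operator at site i into the full 2^N-dimensional space."""
    mats = [op if j == i else I2 for j in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out


SX = [site_op(sx, i) for i in range(N)]
SZ = [site_op(sz, i) for i in range(N)]

# Classical part: H_J = -sum_i J_i sz_i sz_{i+1} on an open chain with Gaussian couplings.
H_J = sum(-rng.normal() * (SZ[i] @ SZ[i + 1]) for i in range(N - 1))
M_x = sum(SX)

for gamma in np.linspace(0.0, 3.0, 13):
    H = H_J - gamma * M_x + (k / N) * (M_x @ M_x)
    vals, vecs = np.linalg.eigh(H)
    g = vecs[:, 0]                        # ground state
    mx_density = float(g @ M_x @ g) / N   # transverse magnetization density
    gap = vals[1] - vals[0]               # gap to the first excited level
    print(f"gamma={gamma:4.2f}  <M_x>/N={mx_density:+.3f}  gap={gap:.3f}")
```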
§ OUTLINE OF THE PROOF Our first simple but important step is inspired by thermodynamics, where thermodynamic functions (e.g., the Helmholtz free energy and the Gibbs free energy) are connected via the Legendre transformation with respect to an extensive variable (e.g., volume) and an intensive variable (e.g., pressure). We regard the ground state energy as a thermodynamic function with an intensive variable . Then, its inverse Legendre transformation <cit.> yields the constrained minimum energy with an extensive variable M_x. The functional forms of and depend on . However, if the probability distribution generating is shift-invariant, we can show that /N ∼ /N ∼ where ∼ stands for stochastic convergence (especially convergence in mean square), which is nothing but self-averaging in terms of physics. In the conventional QA only with a transverse magnetic field, - and are shown to be convex, while might be not strictly convex, which leads to a singularity as an indifferentiable point in at the QFOT. To remove this singularity, we add the antiferromagnetic fluctuations term to the Hamiltonian , which results in = - + /N ( )^2. In this case, the corresponding is strictly convex, since is convex and the square function m_x^2 provides a small convex curve. Consequently, it follows that the QFOT does not occur in a typical evaluation with the addition of the antiferromagnetic fluctuations term. The effort for this proof is mainly devoted to proving self-averaging. In this paper, we use the finite dimensionality for the proof, while some studies employ other methods <cit.>. To show the self-averaging of , we decompose the system into B = N/A subsystems of size A and resort a kind of the law of large numbers recalling that these subsystems are i.i.d. Subsequently, we show the uniform self-averaging (uniform convergence) of . The assertion of the main theorem <ref> is described as uniform self-averaging of the transverse magnetization. § PROOF §.§ Preparation We introduce two functions and analogous to thermodynamic functions and demonstrate that they are Legendre transformation and inverse Legendre transformation <cit.> of each other. For completeness, below we will describe several basic results of the Legendre transformation in terms of and . Readers who are familiar with these techniques can skip this subsection. The ground state energy of is defined as = min_|ψ⟩ ψ Here, |ψ⟩ runs all possible pure states. The domain of minimization in the above definition can be extended from pure states to general mixed states. The ground state energy is also the minimum expectation energy of general mixed states: = Fix ' ∈. Decomposing this state as ' = ∑_t p_t |ψ_t⟩⟨$|, we obtain = ' = ∑_t p_t ψ_t ≥ The inverse inequality is obvious. We next introduce the minimum energy conditioned by the transverse magnetization. The minimum energy of conditioned by x-magnetization at M_x= is defined as = Notably,andare connected through the Legendre transformation. is the Legendre transformation of . Combining the definitions of and , we find = min_M_x ∈ [-N,N] ( - ) = min_M_x ∈ [-N,N] ( - M_x) This means the Legendre transformation of in terms of M_x. is a concave function. The Legendre transformation provides a concave function. is a convex function. For any λ ( 0 ≤λ≤ 1), M_- , M_+ (M_- < M_+), we fix _+ ∈_ρ̂|ρ̂M̂_x^N = M_+ and _- ∈_ρ̂|ρ̂M̂_x^N = M_-, which are density matrices minimizing under the constraint that the x-magnetization is M_±, respectively. 
Then, putting M(λ):=(1-λ) M_- + λ M_+, we have (1-λ) (M_-) + λ(M_+) = ((1-λ) _- +λ_+) ≥min_| = M(λ) = ( (1-λ) M_- + λ M_+) which means the convexity of . is the inverse Legendre transformation of . It is known that if a function f is the Legendre transformation of a convex function g, then the inverse Legendre transformation of f is g <cit.>. We remark that although the domain ofM_xis a finite region[0,N]and that ofγis a semi-infinite region[0, ∞), all the aforementioned results are valid for these domains. Even if is defined on [0, ∞) and is defined on [0,N], Legendre transformation and inverse Legendre transformation hold. That is, for any ∈ [0 , ∞), we have = min_M_x ∈ [0,N] ( - M_x) and for any M_x ∈ [0 , N], we have = sup_γ∈ [0, ∞) ( + M_x) Since is a classical Hamiltonian, we can take a classical (computational basis) ground state, so (0) = min_M_x = (0) and ≥(0) - 0 ≥ Thus, in order for the relation = - M_x to hold, M_x ≥ 0 is required. §.§ Self-averaging In this subsection, we shall show self-averaging of the ground energy /N∼ with respect to quenched randomness. Namely, almost all Hamiltonians obtained by random quench have the same ground energy density. Self-averaging allows us to discuss quenched systems only by considering the averaged quantity, not each. To this end, we divide the system with HamiltonianintoBcopies of subsystems with equal sizeAasN=AB. We denote thek-th subsystem byA[k] . We define the Hamiltonian ofA[k]denoted byas a restriction ofto subsystemA[k]with removing all the bonds fromA[k]to outsideA[k]. We introduce a block-decomposed Hamiltonian on the same system, which is a product ofdenoted by:=⊗_k . The difference betweenandis the interaction terms between different subsystems. The block-decomposed Hamiltonianplays a central role in our proof. In particular, an argument similar to the law of large numbers is applicable to, which follows from the fact thatis a product of i.i.d. random Hamiltonians;. Sinceis close to, we can derive several self-averaging results in systems with. Note that this proof idea is a standard technique in the statistical mechanics of random systems (see Ref <cit.>). We first introduce symbols describing an average over quenched randomness and its fluctuation: Consider a system with quenched random variables J. Let X^ be a stochastic variable depending on the quenched random variables J. We denote by [ X^ ] the average of X^ with respect to the quenched randomness J. We also denote its root mean square √( [ (X^)^2 ] ) by X^. Note thatX^is a norm (i.e., it satisfies the triangle inequality), which is a direct consequence of Schwarz inequality. We first bound the root mean square of operator norms of the system Hamiltonianand the difference between the Hamiltonian and its block-decomposed one;- . The latter quantity can be regarded as surface energy from the viewpoint of⊗_k A[k]. Suppose that are i.i.d. random Hamiltonians of D-dimensional systems. Then, the operator norm of the bulk energy and the surface energy - are bounded respectively as ≤ N - ≤ B where and are constants independent of N, A, and B. These bounds are direct consequences of the finite dimensionality of the system. The bulk energy is bounded as ≤J_ij≤J_ij≤ N where we used the triangle inequality in the second inequality. The surface energy is bounded as - ≤∑_i ∈ s(A[k])∑_j ∉ s(A[k])J_ij ≤∑_i ∈ s(A[k])∑_j ∉ s(A[k])J_ij ≤ B where s(A[k]) is a set of sites in subsystem A[k]. Now we shall bound the fluctuation of the ground energy in terms of quenched randomness. 
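As a sanity check on the scaling used in the proposition above (the extracted text drops the constants and the block-size dependence of the surface bound; we assume the standard form in which the surface energy is O(B A^(D-1)/D)), the following short Python bond count for a block decomposition of a D = 3 open cubic lattice is a toy illustration, not part of the paper: the number of nearest-neighbour bonds cut by the decomposition stays proportional to B A^(D-1)/D, and therefore becomes a vanishing fraction of the bulk contribution, which is proportional to N = AB, as the block size A grows.

D = 3
for L_block in (2, 4, 8, 16):
    n_blocks = 4                                   # blocks per side; B = n_blocks**D
    A, B = L_block ** D, n_blocks ** D
    N = A * B
    L = L_block * n_blocks                         # linear size of the full open lattice
    cut = D * (n_blocks - 1) * L ** (D - 1)        # nearest-neighbour bonds crossing block boundaries
    print(f"A = {A:5d}  B = {B:3d}  N = {N:7d}  "
          f"cut / (B * A**((D-1)/D)) = {cut / (B * A ** ((D - 1) / D)):.3f}  "
          f"cut / N = {cut / N:.4f}")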
We first show a slightly weak inequality, and then tighten the inequality by applying the obtained inequality iteratively. The standard deviation of is evaluated as - [ ]≤ N Hgg-weak1 For any Hamiltonian , we have the following bound: + N = [] - [ ( - )] ≤ + = Here, the second line follows from an elementary inequality X̂ - Ŷ≤X̂ - Ŷ Thus we arrive at the desired inequality: - [ ] = + N -[ + N ] ≤ + N≤≤ N where the first inequality follows from an elementary fact that X-x_0 is minimized when x_0=[X], and the last inequality follows from Proposition <ref>. The above inequality can be tightened by applying the above result to the block-decomposed systemA # Biteratively. The fluctuation of /N vanishes in the thermodynamic limit. In particular, for any positive >0, we have - [] = O( N^1 - 1/ D + ) We start with - [] ≤ - [] ≤ - + - [ ] varHgg-mid1 The first inequality follows from that X-x_0 is minimized when x_0=[X], and the second inequality follows from the triangle inequality. We first evaluate the first term of the right-hand side of (<ref>) as - ≤ - = - ≤ B Here, we used (<ref>) in the first inequality, and used Proposition <ref> in the last inequality. We next bound the second term of the right-hand side of (<ref>). Since E_g^ are independent random variables, Proposition <ref> applies to each subsystem, which yields - [ ] ^2 = E_g^ - [E_g^] ^2 ≤ B ^2 A^2 varHgg-mid2 Combining these two inequalities, we obtain - [ ] ≤ B + A B^1/2 Setting A=N^a and B=N^1-a with a = D/(D+2), we obtain - [ ] ≤ ( + ) N^1-1/(D+2) = O(N^1-1/(D+2)) Hgg-weak2 We notice that the above inequality (<ref>) is stronger than (<ref>). Therefore, by replacing (<ref>) in the derivation of (<ref>) by (<ref>) (i.e., we use E_g^ - [E_g^] = O( A^1-1/(D+2)) instead of E_g^ - [E_g^] = A = O(A) in (<ref>)), we can obtain a further stronger inequality on - []. By repeating this operation[Once E_g^ - [ E_g^ ] = O( A^1-n_m) is shown, we can get - [ ] = O(A^1-1/D) + O(A^1-n_mB^1/2) = O(N^1 - 1/D+2-2Dn_m) for a = D/D+2-2Dn_m. The recurrence formula n_m+1 = 1/D+2-2Dn_m with the initial term n_0=0 has a limit lim_m→∞n_m=1/max(D,2).], we finally arrive at - [ ] = O(N^1 - 1/D + ) An important corollary of Proposition <ref> is the existence of the ground energy density in the thermodynamic limit. The averaged ground state energy density converges in the thermodynamic limit :=lim_N →∞ [ ] /N Moreover, the speed of convergence is evaluated as [ ]/N - = O(N^-1/D) Since are i.i.d. random Hamiltonians, we have [ ] = B [ E_g^ ], which implies []/N - [E_g^]/A = [ ]/N - []/N≤ This shows that a_N:= [ ] /N is a Cauchy sequence and hence converges. We finally prove the self-averaging of the ground energy. For any , the ground energy density /N converges to in mean square: /N - = O(N^ - 1/D + ) Combining Proposition <ref> with Proposition <ref>, we easily have - N) ≤ - [ ] + [ ] - N ≤ - [ ] + N^1-1/D = O(N^1-1/D+) which is equivalent to the desired result. §.§ Uniform self-averaging In this subsection, we will show uniform self-averaging (i.e., self-averaging as a function ofγ), which is a stronger condition than the self-averaging discussed in the previous subsection. The key idea for the proof of uniform self-averaging is to put many regularity checkpoints on the-axis. To demonstrate self-averaging for any, we employ self-averaging at the nearest regularity checkpoint ofγand evaluate the speed of convergence atγ. 
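The footnote above tightens the exponent by iterating the map n_{m+1} = 1/(D+2-2Dn_m) starting from n_0 = 0, with claimed limit 1/max(D,2). The following few lines of Python (added here as an illustration, not code from the paper) iterate the recurrence and confirm the limit numerically.

def iterate_exponent(D, steps=5000):
    n = 0.0
    for _ in range(steps):
        n = 1.0 / (D + 2.0 - 2.0 * D * n)          # recurrence from the footnote
    return n

for D in (1, 2, 3, 4, 5):
    # convergence is slow for D = 2, where the two fixed points 1/2 and 1/D coincide
    print(f"D = {D}:  n_inf ~ {iterate_exponent(D):.6f}   1/max(D,2) = {1.0 / max(D, 2):.6f}")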
Since it is not easy to show uniform self-averaging in the half-infinite region[0, ∞)directly, we set the domain ofγinas[0, ]with slowly diverging. We start by showing thatis Lipschitz continuous. For any _1 ,_2 and any instance, the difference between (and ) with _1 and _2 is bounded as (_1) - (_2) ≤ N _1 - _2 e_g(_1) - e_g(_2) ≤_1 - _2 The first inequality of (<ref>) follows from (<ref>) as (_1) - (_2) ≤(_1) - (_2) = N _1 - _2. The same argument holds for their mean and under the thermodynamic limit. The ground energy density /N converges uniformly on [0,] in mean square: /N - = O(N^-2/(5D)+) where we set = N^1/(5D). For convenience, we suppose that N^1/D+1/(5D) is an integer. Corresponding to integers w = 1, … , N^1/D+1/(5D), we define the regularity checkpoints and their covering intervals as _w = (w - 1/2) N^-1/D, I_w = (w-1) N^-1/D, w N^1/D With noting that - _w≤N^-1/D/2 for any γ∈ I_w, we have - N ≤ - (_w) + (_w) - N e_g(_w) + Ne_g(_w) - Ne_g() ≤ (_w) - N e_g(_w) + N^1-1/D where we used Proposition <ref> in the second inequality. Hence the maximum deviation of ground energy is bounded as max_∈ I_w - N≤ (_w) - Ne_g(_w) + N^1-1/Duniform_hgg-mid1 Using this relation, we arrive at the desired result: max_∈ [0,] - N^2 = max_w=1^N^1/D+1/(5D)max_∈ I_w - N^2 ≤∑_w=1^N^1/D+1/(5D)max_∈ I_w - N^2 ≤∑_w=1^N^1/D+1/(5D)(_w) - Ne_g(_w) + N^1-1/D^2 = O( N^2-1/D+1/(5D)+) In the first inequality we used the following simple relation for nonnegative L_w; max_w L_w^2 = max_w L_w^2 ≤ ∑_w L_w^2 = ∑_w L_w^2 in the second inequality we used (<ref>), and in the last inequality we used Proposition <ref>. We proceed to the uniform self-averaging of. To prove this, we introduce the inverse Legendre transform of, to which the ground energy density/Nconverges. The inverse Legendre transformation of is defined as = sup_∈ (-∞, ∞) ( + m_x) /N converges uniformly on 0, in mean square: - N = O(N^1-2/(5D)+) where = 1 - 2/N^1+1/(5D). Consider a pair (m_x, ) satisfying = + N m_x mx-gamma-rel Since is the Legendre transform of and its minimum is achieved at M_x=Nm_x, we have - Nm_x ≤(N) - N and thus ( 1 - m_x ) ≤(N) - /N≤2/N holds. We note that if m_x ≤, then the corresponding γ with (<ref>) satisfies ≤=N^1/5D. Hence, the domain of γ in the maximization in the Legendre transform of and can be narrowed to [0,]: = ( + N m_x) = ( + m_x) Then, the difference between energy of a single instance and its average after taking the thermodynamic limit is evaluated as - N = ( + N m_x) - N ≤ ( N + N m_x) - N + ( - N ) ≤ - N Plugging Proposition <ref> into the above inequality, we have the desired result: - N ≤ - N = O(N^1-2/(5D) + ) §.§ Antiferromagnetic fluctuations Suppose that thex-magnetizationM_xshows the first-order phase transition (i.e., discontinuous jump) at someγ. At this pointis no longer differentiable, andis convex but not strictly convex. Our idea to avoid the first-order phase transition inM_x, based on the above observation, is adding a strictly convex function to. By construction, the modifiedis strictly convex, andhas no singularity. In particular, we add quantum antiferromagnetic fluctuations term()^2to the Hamiltonian, which we denote by. Correspondingly, we denote the minimum energy conditioned byM_xby. Then, Theorem <ref> suggests /N∼ := + m_x^2 Sinceis a strictly convex function, quantum first-order phase transition inM_xdoes not occur. A nontrivial step in the aforementioned proof outline is connecting ()^2 and^2, since these two are in general not equal; ()^2 ≠^2. 
In fact, these two are inequivalent in some long-range interacting systems (e.g.,p-spin model with largep, discussed in <cit.>). On the other hand, we can prove ()^2 ≃^2in short-range interacting systems. This is the main task in this subsection. We note that our argument does not hold for< 0. For ∈ ( 0, ∞), we introduce Hamiltonians and related quantities corresponding to QA-AFF denoted by = + ( )^2 = + ( )^2 = min_| = M_x = + m_x^2 We first prove the uniform self-averaging of. The ground energy density /N converges uniformly to on [0,] in mean square: max_m_x ∈ [ 0 , ]/N - = O(N^-2/(5D)+) We first derive the following bound: - - N m_x^2≤ A U-a-eval An elementary inequality ()^2 - ()^2 ≥ 0 implies ≥ + N m_x^2 Fix ' ∈, which has x-magnetization as Nm_x and minimizes the energy . We construct a state from ' by removing all the correlation between subsystems: _k = _i ∉ s(A[k])' _⊗ = ⊗_k=1^B _k By construction, _⊗ is separable into subsystems, and _⊗ also minimizes with x-magnetization as Nm_x: _⊗∈. The AFF term in _⊗ can be directly evaluated as _⊗ ( )^2 = _⊗ = _⊗_⊗ + ( _⊗ ( )^2 - ( _⊗ ())^2 ) ≤ N^2 m_x^2 + B A^2 which implies a relation evaluating the difference between with and without the AFF term: = _⊗ = _⊗(Nm_x;) - _⊗ ( )^2 ≥ - N m_x^2 - A Using these two inequalities, we arrive at (<ref>). Now we fix A = N^a' with a' = 1/(D+1) in (<ref>). Combining (<ref>), Proposition <ref>, and Proposition <ref> with recalling N=N +α Nm_x^2, we obtain the desired result: - N ≤ - + - - α N m_x^2 + - + - N ≤ - N + A + 2 B = O( N^ 1 - 2/(5D) + ) Here, the first inequality follows from the triangle inequality. We denote by m_*(;) the unique argument that minimizes - m_x. We also define M_*^(; ) as the expectation value of in a ground state of (i.e., g, where |g⟩ is a ground state of ). For brevity, we sometimes drop the arguments and in m_*(;) and M_*^(; ), and simply express m_* and M_*^. Sinceis a strictly convex function,m_*(; )is a continuous function[In fact, m_*(; ) is Lipschitz continuous with constant 1/(2), which means that there is no quantum second-order transition in transverse magnetization either.] with respect tofor any>0. We shall show thatM_*^N:J/Nconverges to a continuous functionm_*(; ), which completes the proof of our main result. M_*^N:J/N (as a function of ) converges uniformly on [0,∞) in mean square: sup_∈ [0,∞)max_|g⟩∈ G()g/N - m_*(; ) = O(N^-1/(5D) + ) main-ineq Note that 1- = 2/N^1+1/(5D)≤2/N^1/(5D). We decompose the domain of M_x, [0,N], into two regions, I_1:=[0 , N] and I_2:=[N, N]. Promising sup∅ = 0 for convenience, we evaluate the square of the left-hand side of (<ref>) (multplied by N) as sup_∈ [0,∞)max_|g⟩∈ G()g - Nm_*(; ) ^2 ≤ M_*^ - Nm_* ^2 + M_*^ - Nm_* ^2 + M_*^ - Nm_* ^2 + M_*^ - Nm_* ^2 last-max Here, we used (<ref>). We shall evaluate these four terms. Before going to the evaluation, we introduce a useful relation that if functions P,Q,R,S,T satisfy P ≤ Q and S + T^2≤ R, then maxT^2 ≤max P-R + max Q-S is satisfied. This relation is easily confirmed as maxT^2 ≤ [ max R - S ] ≤ [ max ( P-R + Q-S )] ≤ [ max P-R ] + [ max Q-S ] ≤max P-R + max Q-S Now we evaluate the four terms in (<ref>). 
To evaluate the first term of (<ref>), we use (<ref>) with P = (M_*^ ; ) - M_*^ Q = (Nm_* ; ) - N m_* R = Nu_gM_*^/N; - M_*^ S = Nu_g(m_*; ) - N m_* T = ^1/2 M_*^ - Nm_* which reads M_*^ - Nm_* ^2 ≤ N/(M_*^; ) - Nu_gM_*^/N ; + N/(Nm_*; ) - Nu_g(m_* ; ) ≤ N/sup_, |g⟩| M_*^∈ I_1(M_*^; ) - Nu_gM_*^/N ; + N/sup_, |g⟩| Nm_* ∈ I_1(Nm_*; ) - Nu_g(m_* ; ) = O(N ^ 2 - 2/(5D) + ) In the last line, we used Proposition <ref>. To evaluate the second term of (<ref>), noting Nm_* ≤ N≤ M_*^, we use (<ref>) with P = (N ; ) - N Q = (Nm_* ; ) - N m_* R = Nu_g(; ) - N S = Nu_g(m_*; ) - N m_* T = ^1/2 M_*^ - Nm_* which reads M_*^ - Nm_* ≤ N-Nm_* + N(1-) = O(N^1-1/(5D)+) To evaluate the third term of (<ref>), noting M_*^≤ N≤ Nm_*, we use (<ref>) with P = (M_*^ ; ) - M_*^ Q = (N ; ) - N R = Nu_g(M_*^/N; ) - M_*^ S = Nu_g(; ) - N T = ^1/2 M_*^ - N which reads M_*^ - Nm_* ≤ M_*^ - N +N(1-) = O(N^1-1/(5D)+) The last term of (<ref>) is simply bounded as M_*^ - Nm_* ≤N ( 1 - ) = O(N^1-1/(5D)) Combining these four inequalities, we complete the proof of Theorem <ref>. § DISCUSSION We have proved that the quantum annealing (QA) for finite-dimensional spin-glass systems does not show the quantum first-order transition in transverse magnetization (QFOT) by adding the antiferromagnetic fluctuations (AFF) term. This result holds for any spin-glass system as long as the system is in finite dimension and its quenched randomness is sampled from a shift-invariant probability distribution. For simplicity of explanation, we assume that the interaction inis two-body and the boundary is an open boundary condition, but our result applies to more general systems. In fact, our proof relies only on Proposition <ref> (finite dimensionality) and the fact that subsystems are i.i.d. Thus, if these two conditions are satisfied, our result also applies to systems with the periodic and closed boundary conditions as well as those with local fields and short-rangep-body interactions. Key ideas in our proof We first elucidate the power of uniform self-averaging. Uniform self-averaging is an important concept for discussing the absence of phase transitions. Applying an argument analogous to Chebyshev's inequality, we show that the functionM_*^()/Nis in thesup-norm neighborhood of the functionm_*()for almost allJ. Conventional self-averaging alone, which corresponds to pointwise convergence, cannot eliminate the possibility that there is a discontinuous jump in each instance with different transition points depending on instances. On the other hand, uniform self-averaging indeed prohibits this unwanted possibility. Next, we discuss the role of the AFF term. Thanks to the description with, our approach makes the meaning of the AFF term (()^2term) much clearer than in the original paper of QA-AFF <cit.>. Namely, the AFF term strengthens the convexity inand ensures that it is strictly convex, not merely convex, and this fact shows that the transverse magnetization is continuous with respect to. Similar arguments can be seen in some papers in statistical mechanics <cit.>, where the difficulties associated with first-order phase transitions are solved by devising the shape of the ensemble. However, we should notice the discrepancy ()^2 ≠^2, which prevents a direct analogous argument. In particular, the procedure to obtain a narrowly convex functionas presented in Section <ref> does not always work well for long-range interacting systems, e.g.,p-spin model with largep<cit.>. 
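To make the mechanism described above concrete, the following function-level toy (ours, not the actual spin-glass u_g) takes a convex constrained energy density u(m_x) containing an affine piece, which is exactly the situation that produces a first-order jump in the minimizer m_*(γ) of u(m_x) - γ m_x, and shows numerically that adding the strictly convex AFF contribution α m_x^2 removes the jump.

import numpy as np

m = np.linspace(0.0, 1.0, 40001)
# convex toy u(m_x): slope m for m < 0.2, constant slope 0.2 on [0.2, 0.8], then increasing again
du = np.where(m < 0.2, m, np.where(m <= 0.8, 0.2, 0.2 + (m - 0.8)))
u = -1.0 + np.concatenate(([0.0], np.cumsum(0.5 * (du[1:] + du[:-1]) * np.diff(m))))

def m_star(u_table, gammas):
    # minimizer of u(m) - gamma * m on the grid, for each gamma
    return np.array([m[np.argmin(u_table - g * m)] for g in gammas])

gammas = np.linspace(0.0, 0.5, 2001)
alpha = 0.1
step_plain = np.max(np.abs(np.diff(m_star(u, gammas))))
step_aff = np.max(np.abs(np.diff(m_star(u + alpha * m ** 2, gammas))))
print("largest step of m_*(gamma) without AFF:", step_plain)   # ~0.6: first-order jump at gamma = 0.2
print("largest step of m_*(gamma) with AFF   :", step_aff)     # ~1e-3: continuous, grid-level steps only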
Implications for the hardness of QA It is numerically well known that the QA for hard combinatorial optimization problems fails at some point in the QA process. As explained in the Introduction, the role of the QFOT in the failure of the QA is controversial. Our result says that the QFOT in QA for finite-dimensional spin glasses can be removed by adding the AFF term. We expect that the QFOT in any extensive sum of local observables Â is avoidable by a slight modification of QA. If the observable Â does not contain the z-magnetization σ̂^z, a slight extension of our argument leads to the desired consequence by simply adding the fluctuations term Â^2/N. On the other hand, if Â contains the z-magnetization, our estimation ⟨Â^2⟩=O(1) for the optimal solution no longer holds, and some additional ideas are necessary, which is left for future research. We emphasize that our result does not claim that the QA in finite-dimensional spin glasses succeeds and that the corresponding ground-state search problem can be efficiently solved. A more plausible scenario suggested by our result is that the QA in finite-dimensional spin glasses fails for reasons different from the QFOT. One candidate is glassy bottlenecks, which are undetectable by ordinary macroscopic observables. It is shown in Ref. <cit.> that some models with a transverse magnetic field have exponentially small gaps in the glass phase rather than at the phase transition point. The rearrangement of the ground state from one glassy state to another can make the adiabatic algorithm less efficient. Our result supports this picture. However, it should be clarified that the following two statements can be reconciled: (i) the ground-state search problem for finite-dimensional spin glasses is efficiently solvable by QA-AFF, and (ii) no quantum computer can solve NP-hard problems efficiently. This apparent contradiction is resolved for the following reason: Statement (i) concerns the average-case hardness, meaning that almost all instances of spin glasses can be solved efficiently. In contrast, statement (ii) concerns the worst-case hardness, claiming that for any quantum computer, there exists at least one instance that cannot be solved efficiently. Hence, it is possible that the ground-state search problem for finite-dimensional spin glasses, which is an NP-hard problem, is typically easy and only rarely has hard instances. Several points remain open. A natural counterpart for comparison is the conclusion of “First-order transitions and the performance of quantum algorithms in random optimization problems”. An extension to physical quantities other than the transverse magnetization, with two or even three such observables, is also worth considering. We do not exclude the existence of a sequence of instances, generated with probability approaching zero, that do exhibit a quantum first-order transition. Whether the use of the AFF term improves the accuracy of the obtained solutions should be examined by numerical simulations. Note also that the ground-state energy of a finite-dimensional spin glass can in principle be approximated to arbitrary precision; we cannot exclude the scenario that the QFOT is avoidable precisely because the problem admits such arbitrary-precision approximation, which is itself somewhat interesting. Acknowledgements We thank , , , and for valuable discussions. NS is supported by JSPS KAKENHI Grants-in-Aid for Early-Career Scientists Grant Number JP19K14615. KH is supported by JST Grant Number JPMJPF2221 and JSPS KAKENHI Grant Number 23H01095.
http://arxiv.org/abs/2307.02361v2
20230705152346
Constraints on Neutrino Self-Interactions from IceCube Observation of NGC 1068
[ "Jeffrey M. Hyde" ]
hep-ph
[ "hep-ph", "astro-ph.HE" ]
jhyde1@swarthmore.edu Department of Physics & Astronomy, Swarthmore College, Swarthmore PA 19081 USA The active galaxy NGC 1068 was recently identified by the IceCube neutrino observatory as the first known steady-state, extragalactic neutrino point source, associated with about 79 events over ten years. We use the IceCube data to place limits on possible neutrino self-interactions mediated by scalar particles with mass between 1 – 10 MeV. We find that constraints on flavor-specific ν_τ self-interactions with low mediator masses are comparable to constraints derived from the diffuse high-energy neutrino flux at low energies, while constraints on flavor-universal self-interactions are less restrictive than current bounds. Constraints on Neutrino Self-Interactions from IceCube Observation of NGC 1068 Jeffrey M. Hyde August 1, 2023 ============================================================================== § INTRODUCTION Neutrinos are an exciting component of multi-messenger astrophysics, as their weak interactions with baryonic matter can lead to inside views of distant and extreme phenomena that would otherwise be obscured from direct observation. The study of astrophysical neutrinos can also lead to insights about neutrinos themselves. In particular, neutrinos from distant sources pass through a background of Big Bang relic neutrinos and dark matter on their way to Earth. An appreciable cross section for neutrino self-interactions (νSI) or neutrino-dark matter scattering could alter the spectrum along the way, so the NGC 1068 detection allows for a test of such beyond-Standard Model interactions. This possibility was first examined with neutrinos from Supernova 1987a <cit.>; since then, some work has considered the ability of current and planned experiments to examine neutrino self-interactions <cit.>, motivated for example by cosmological implications <cit.>. Such interactions are constrained by Big Bang Nucleosynthesis (BBN), with laboratory experiments <cit.> extending some sensitivity to higher mediator masses. IceCube results are of special interest because neutrino energies above ∼ TeV allow mediator masses ≳ MeV to be probed with potentially greater sensitivity. Until recently, though, SN1987a remained the only identified point source of neutrinos. In 2017, IceCube identified a high-energy neutrino coincident with the direction and timing of the flaring blazar TXS 0506+056 <cit.>, and follow-up work found some evidence for neutrino emission from this source prior to the high-energy alert <cit.>. The blazar flare event has been used to place limits on neutrino self-interactions <cit.>, and the diffuse high-energy neutrino flux has also been used for this purpose <cit.>. Recently, IceCube reported a significant detection of about 79 excess neutrinos from the active galaxy NGC 1068 among data taken from 2011 and 2020, with a best-fit power law spectral index 3.2 <cit.>. For purposes of examining neutrino self-interactions, NGC 1068 has the advantage of a relatively large number of signal events and a spectrum that is assumed to be time-independent, in contrast with the flaring blazar. In contrast with SN1987a this signal is at higher energy (≳ 10^2 GeV vs. ∼1 – 10 MeV) allowing us to probe more massive mediators, it is more distant (at 14.4 Mpc versus 51.4 kpc), and it has a greater number of signal events (78 versus 22), possibly allowing more sensitivity to the coupling strength. 
However, the muon track events examined in point source searches such as this have a weaker connection to the original neutrino energy. In this work, we use the data released in connection with the IceCube observation of NGC 1068 <cit.> to derive constraints on neutrino self-interactions. (We note that the TXS 0506+056 and NGC 1068 results have also been used to examine the possibility of neutrino-dark matter scattering <cit.>.) We find that a diagonal, flavor-universal self-interaction is not constrained beyond existing bounds, while constraints on ν_τ-ν_τ self-interactions are comparable to other bounds based on the diffuse neutrino flux at IceCube. sec:methods describes our methods for modeling neutrino self-interactions and evaluating statistics, sec:results describes our results, and in sec:conclusions we summarize, compare with existing bounds from other sources, and describe future opportunities. sec:energy-pdf describes in more depth how we modeled the energy pdf used in our likelihood analysis. § METHODS §.§ Modeling Neutrino Self-Interactions In our analysis we assume that the source produces a decreasing power-law neutrino flux spectrum Φ∝ E_ν^-γ over the range of observed energies, approximately 10^2 to 10^6 GeV, consistent with the assumption of the IceCube analysis. Neutrino self-interactions modify such a spectrum: in general, when a signal neutrino scatters, the outgoing neutrino will have diminished energy. For s-channel scattering, the cross section is largest near a resonant energy which depends on the mediator mass, and as a result a signal's power-law spectrum will have a “dip” near the resonant energy and an associated “pile-up” at lower energies. In practice, the pile-up at lower energies would be more difficult to observe, as the higher-energy part of the spectrum where these neutrinos originate has far fewer neutrinos than the lower-energy part of the spectrum where they end up. Therefore, in this work we focus on constraining dips in the spectrum from resonant s-channel scattering. More quantitatively, we consider an interaction of the form ℒ_int = g ννϕ, where ϕ is a real scalar with mass m_ϕ. We examine two scenarios: flavor-diagonal coupling g, ℒ_int = g ∑_i ν_iν_iϕ, and the less constrained scenario of self-interaction only among tau neutrinos, ℒ_int = g ν_τν_τϕ. Big Bang Nucleosynthesis bounds rule out mediators with masses ≲ 0.1 to 1 MeV when g ≳ 10^-6 <cit.>. Other astrophysical and laboratory constraints are summarized in Ref. <cit.>, and the most relevant are also plotted along with our result in fig:constraints. First, we gain some intuition for the result by considering resonant scattering of one neutrino flavor with no oscillations, with a cross section of the Breit-Wigner form σ = (g^4/4π) s/((s-m_ϕ^2)^2 + m_ϕ^2Γ^2), where s = 2Em_ν and the decay width is Γ = g^2 m_ϕ / (4π). Because NGC 1068 is located 14.4 Mpc from Earth (at redshift z = 0.003), the effect of cosmological expansion on neutrino energies and flux is negligible. The resonant energy is E_R = m_ϕ^2 / (2m_ν), and neutrino masses are bounded to have at least one state with mass in the range 0.05 - 0.1 eV. For this estimate we take m_ν≈ 0.1 eV or 10^-10 GeV. A neutrino energy range of 10^2 to 10^5 GeV then would correspond to mediator masses in the range of about 0.1 to 4.5 MeV. However, the muon energies obtained by IceCube are of somewhat lower energy than the incident neutrino (see sec:stat-analysis and sec:energy-pdf), and therefore somewhat higher mediator masses can be probed as well.
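As a quick numerical check of this estimate (ours, in natural units), inverting E_R = m_ϕ^2/(2 m_ν) at the two ends of the quoted energy range reproduces the stated mediator-mass window:

m_nu = 1e-10                          # GeV, i.e. 0.1 eV as assumed above
for E_R in (1e2, 1e5):                # GeV
    m_phi = (2.0 * m_nu * E_R) ** 0.5
    print(f"E_R = {E_R:.0e} GeV  ->  m_phi = {m_phi * 1e3:.2f} MeV")
# prints roughly 0.14 MeV and 4.47 MeV, i.e. the 0.1 - 4.5 MeV range quoted above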
In any case, this estimate shows that the NGC 1068 signal should be sensitive to mediator masses above those ruled out by the BBN bound. Whether the signal is also sensitive to couplings below existing bounds is a question to be resolved by the analysis described in sec:stat-analysis. We now generalize to the physical case of three flavors with mixing. The IceCube result <cit.> only infers intensity at Earth of the muon-neutrino flux originating from the source; in contrast we must assume an initial flavor composition due to flavor oscillations and scattering during propagation. AGNs are expected to produce neutrinos through photomeson decay dominated by pions <cit.>. The decays π^+ →μ^+ + ν_μ and μ^+ → e^+ + ν_e + ν_μ lead to an initial flavor ratio of ν_e : ν_μ : ν_τ = 1:2:0, which we use as the initial relative composition of neutrinos plus antineutrinos. We note that the most common production scenarios all lead to a roughly 1:1:1 flavor ratio after oscillations, so unless there is a scenario which modifies this our results are weakly dependent on this assumption. The typical distances between source, intermediate scattering (if any), and detection at Earth are far greater than the neutrino wave packets' decoherence length, so the treatment of flavor oscillations is simplified. In terms of the initial muon neutrino/antineutrino flux Φ_ν_μ + ν_μ≡Φ_μ, the fluxes of mass eigenstates i = 1,2,3 are Φ_i = (0.5|U_ei|^2 + |U_μ i|^2)Φ_μ, where U_α i are elements of the neutrino mixing matrix; we use the 2022 global best-fit parameters from NuFit <cit.>. We typically specify the neutrino self-coupling matrix g_αβ in the flavor basis, but it will be computationally convenient to consider scattering in the mass basis: g_ij = ∑_α, β U_α i U_β j g_αβ. For our first scenario, g_αβ = g diag(1,1,1) = g 1_3, the mass-basis form g_ij is also diagonal since the 3× 3 identity 1_3 commutes with U. For our second scenario, g_αβ = g diag(0,0,1), there will be off-diagonal contributions that couple different mass-basis states. The generalization of the single-flavor Breit-Wigner cross section eq:breit-wigner for signal neutrino of mass state i scattering from background neutrino of mass state j is <cit.> σ(ν_i ν_j →νν) ≡σ_ij = ∑_k,l (1/S_kl) |g_ij|^2 |g_kl|^2/(4π) s_j/((s_j-m_ϕ^2)^2 + m_ϕ^2Γ^2), where s_j = 2 E_ν_i m_j and the decay width has been modified to Γ = ∑_i,j |g_ij|^2 m_ϕ / (4π). In eq:breit-wigner-3flavor we have summed over the outgoing flavors k and l, and the symmetry factor S_kl = 1 + δ_kl accounts for cases with identical particles in the final state. The number density of cosmic background neutrinos and antineutrinos is n = 112 cm^-3 per flavor, or 56 cm^-3 per flavor for neutrinos or antineutrinos. Based on the Planck 2018 upper bound on the sum of neutrino masses, ∑ m_ν < 0.12 eV <cit.>, we take the heaviest neutrino masses consistent with this and oscillation-based mass-squared differences: m_1 = 0.02 eV, m_2 = 0.0286 eV, m_3 = 0.0701 eV. The neutrino optical depth for mass state i traveling distance D from source to detector is τ_i = D/λ_i(E) = D ∑_j σ_ij(E) n_j, and the associated mass-basis neutrino flux is attenuated from its original value Φ^0_i(E) to become Φ_i(E) = exp(-τ_i(E)) Φ^0_i(E). Finally, the transformation Φ_μ(E) = ∑_i |U_μ i|^2 Φ_i(E) gives the flux spectrum of muon neutrinos incident at Earth.
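The following Python sketch strings these ingredients together for one example point (m_ϕ = 3 MeV, g = 0.1, ν_τ-only coupling); it is a schematic illustration written for this text, not the analysis code of the paper. The mixing angles are representative values with the CP phase set to zero rather than the NuFit best fit used in the paper, and the flux normalization is arbitrary since only the relative attenuation matters here.

import numpy as np

# representative mixing angles (degrees), CP phase set to zero; illustrative only
th12, th23, th13 = np.radians([33.4, 49.0, 8.6])
s12, s23, s13 = np.sin([th12, th23, th13])
c12, c23, c13 = np.cos([th12, th23, th13])
U = np.array([[ c12 * c13,                   s12 * c13,                  s13      ],
              [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13, s23 * c13],
              [ s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13]])

m_j = np.array([0.020, 0.0286, 0.0701]) * 1e-9      # mass-state masses in GeV (values used above)
n_bg = 56.0                                         # cm^-3, relic density per mass state (nu or nubar)
GEV2_TO_CM2 = 0.389e-27                             # (hbar c)^2 in cm^2 GeV^2
D_cm = 14.4 * 3.086e24                              # 14.4 Mpc in cm

m_phi, g = 3.0e-3, 0.1                              # GeV; example mediator mass and coupling
g_mass = U.T @ (g * np.diag([0.0, 0.0, 1.0])) @ U   # g_ij for the nu_tau - nu_tau scenario (real U)
Gamma = np.sum(g_mass ** 2) * m_phi / (4.0 * np.pi)
C_out = sum(g_mass[k, l] ** 2 / (1.0 + (k == l)) for k in range(3) for l in range(3))

def sigma_ij(E, i, j):                              # cm^2, resonant s-channel cross section
    s = 2.0 * E * m_j[j]
    return g_mass[i, j] ** 2 * C_out / (4.0 * np.pi) \
           * s / ((s - m_phi ** 2) ** 2 + m_phi ** 2 * Gamma ** 2) * GEV2_TO_CM2

E = np.logspace(2, 6, 400)                          # GeV
phi_mu_src = E ** (-3.2)                            # power-law source spectrum, arbitrary normalization
phi_i0 = [(0.5 * U[0, i] ** 2 + U[1, i] ** 2) * phi_mu_src for i in range(3)]   # 1:2:0 at the source

phi_mu = sum(U[1, i] ** 2 * np.exp(-D_cm * sum(sigma_ij(E, i, j) * n_bg for j in range(3))) * phi_i0[i]
             for i in range(3))
phi_mu_no_si = sum(U[1, i] ** 2 * phi_i0[i] for i in range(3))

ratio = phi_mu / phi_mu_no_si
k = int(np.argmin(ratio))
print(f"strongest suppression: flux x {ratio[k]:.3f} at E = {E[k]:.2e} GeV")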
§.§ Statistical Analysis The IceCube observation of NGC 1068 reported in <cit.> is obtained from a dataset of 19,452 neutrino events within 15 degrees in right ascension (RA) and declination (DEC) of NGC 1068 (RA = 40.667^∘, DEC = -0.0067^∘), representing 3186 days of livetime from 2011 to 2020. These muon track events have good angular resolution and larger statistics, but at the cost of less knowledge of initial neutrino energy; overall, the dataset consists of atmospheric neutrino background plus possible signal. Assuming a power law signal Φ = Φ_0 (E/E_0)^-γ, a likelihood ratio test led to the quoted result of Φ_0 = (5.0 ± 1.5) × 10^-11 TeV^-1 cm^-2 s^-1 at E_0 = 1 TeV, γ = 3.2 ± 0.2, or 79^+22_-20 signal events. To examine the possibility of neutrino self-interactions, we consider the data to be comprised of atmospheric neutrinos along with a power-law spectrum from NGC 1068 which is modified as described in sec:flux-modeling. For data x_i (where x_i = { Energy, Declination, Ang. Unc.}≡{ E_i, δ_i, σ_i}) and model parameters θ_i (where θ_i = {γ, n_s, g, m_ϕ}) we take the likelihood function to be <cit.> ℒ( {x_i } | {θ_i} ) = ∏_i=1^N'[ n_s/N f_ signal(x_i | θ_i ) + ( 1 - n_s/N) f_ background(x_i) ], where the atmospheric neutrino spectrum is described by f_ background(E) and is identical to that used in <cit.>. The number N = 665,293 represents the number of events in the all-sky sample, and N' = 19,452 is the number in the sample within 15 degrees of NGC 1068. In principle the product is over all N events, but we follow the IceCube analysis in assuming there are no signal events from NGC 1068 at greater than 15 degrees. The signal probability density function (pdf) f_ signal is the product f_ signal(x_i) ≈1/2πsin(ψ̂) f_ energy(E | sinδ, γ, μ_ ns, m_ϕ, g) f_ spatial(ψ_i | E, σ, sinδ, γ), where ψ is the angular separation between a given neutrino event and the source coordinates. Here we carry over the approximation made in the IceCube analysis that an angular error component may be dropped from the signal and background pdfs due to weak dependence on γ. Furthermore, since we only consider NGC 1068, we drop explicit reference to declination in each function. When m_ϕ and g vanish, f_ energy reduces to the function used in the IceCube analysis. Our modeling of the modified energy pdf in the presence of νSI is discussed in detail in sec:energy-pdf. Neutrino self-interactions can lead to “echo” signals appearing to come from a displaced origin, but the highly forward scattering of neutrinos at these energies means that such echoes would occur at negligible angles from the original source, with typical scattering angles ≲ 10^-7 for 100 TeV neutrinos <cit.>. Therefore, we take the spatial pdf to be independent of neutrino self-interactions. Because the atmospheric neutrinos that make up the background are not propagating an appreciable distance (or optical depth) between production and detection, we also take the background pdf to be independent of neutrino self-interactions. Therefore, in both cases we adopt the spatial pdf for the signal and background pdf from IceCube. We take as test hypothesis (hereafter labeled H_1) the presence of neutrino self-interactions with g≠ 0, and as null hypothesis (H_0) the case g = 0 and combination of background and signal with parameters given by the best fit IceCube result, namely γ^∗ = 3.26, n_s^∗ = 79. As test statistic we define the log-likelihood ratio λ ≡ 2log( ℒ_H1/ℒ_H0). 
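Schematically, the test statistic can be evaluated as below; this is a structural sketch added for illustration, with one-dimensional placeholder pdfs and fake data standing in for the actual (E, δ, σ) event information and for the signal pdfs with and without νSI. In the real analysis the H_1 likelihood is additionally maximized over (γ, n_s, g, m_ϕ).

import numpy as np

def log_likelihood(x, n_s, f_sig, f_bg, N_total=665293):
    w = n_s / N_total                               # signal fraction relative to the all-sky sample
    return np.sum(np.log(w * f_sig(x) + (1.0 - w) * f_bg(x)))

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 6.0, 19452)                    # placeholder per-event observable, e.g. log10(E)

f_bg = lambda x: np.full_like(x, 0.25)              # flat background pdf on [2, 6]
f_sig_H0 = lambda x: np.where(x <= 4.0, 0.5, 0.0)   # placeholder signal pdf, no self-interactions
f_sig_H1 = lambda x: np.where((x >= 3.0) & (x <= 3.3), 0.2,          # same pdf with a resonant "dip",
                              np.where(x <= 4.0, 0.553, 0.0))        # approximately renormalized on [2, 4]

lam = 2.0 * (log_likelihood(x, 79, f_sig_H1, f_bg) - log_likelihood(x, 79, f_sig_H0, f_bg))
print("lambda =", round(lam, 2))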
Large values of λ favor the test hypothesis H_1, and in the large-sample limit this test statistic should follow a chi-square distribution <cit.>, which we will use in sec:results to set confidence regions. § RESULTS We evaluated the test statistic λ defined in eq:llh-def on a grid of 19,500 points in parameter space, comprised of N_g = 10 values of log_10(g) (from -3 to -0.15), N_m = 10 values of log_10(m_ϕ/ MeV) (from -0.2 to +1.3), N_γ = 13 values of γ (from 2.8 to 3.6), and N_n = 15 values of n_s (from 35 to 125). The values of m and g were chosen to focus on the parameter space not covered by lab and BBN bounds, and to include the point g = 0.10, m_ϕ = 14 MeV, where the likelihood was maximized (though with low significance) in <cit.>. We then use a spline interpolation of the test statistic, with results shown in fig:m-g-scan for both scenarios. In fig:m-g-scan we plot the magnitude of the difference |Δλ| between the test statistic at a given point and the overall test-statistic maximum, whose value is listed. The significance of the test result, and confidence regions, are evaluated using a 2-parameter chi-square distribution, representing the difference in dimensionality between the parameter space of null and test hypotheses. We treat the spectral index, γ, and the number of signal events, n_s, as nuisance parameters. For a given pair g, m_ϕ we take the values of γ and n_s which together maximize the test statistic. For the flavor-universal case, the maximum value of λ is 6.17 (p-value 0.0457), occurring at g = 0.341, m_ϕ = 9.26 MeV, and indicated by a “+” in fig:m-g-scan-1. For the ν_τ-ν_τ case, the maximum value of λ is 6.15 (p-value 0.0462), occurring at g = 0.165, m_ϕ = 4.30 MeV. In each case, there is insufficient evidence to reject the null hypothesis. We can see from the plot that λ is essentially constant for a large region but decreases significantly as g increases and m_ϕ decreases; we use the 99% confidence region to set bounds in sec:conclusions. The similarity between the results for each case reflects energy smearing and flavor mixing built into the energy pdf. We also test the effect of varying the neutrino mass within the bounds discussed in sec:flux-modeling. The above results take the heaviest allowed mass scale, with m_1's range limited to [0, 0.02] eV in the normal ordering. Here we subtract 0.01 eV from all m_i, half of the available range in m_i's given the bounds on m_1, and repeat the above analysis, with results plotted in fig:m-g-scan-lowermass. For Case 1, λ_max = 6.20 at m_ϕ = 4.30 MeV, g = 0.16. For Case 2, λ_max = 6.34 at m_ϕ = 2.01 MeV, g = 0.08. While the significance of some regions of parameter space has changed moderately, the boundary of the 99% confidence region is nearly unchanged. Qualitatively, we attribute this to the energy smearing and flavor mixing mentioned above, which render the details of these resonance structures indistinguishable in the energy pdf. This change in the neutrino mass would also affect the resonant energies, but evidently not enough to significantly affect the outcome. § DISCUSSION AND CONCLUSIONS In this paper we have used the data release accompanying the recent IceCube detection of neutrinos from NGC 1068 <cit.> to place limits on neutrino self-interactions. The results presented in fig:m-g-scan are similar for the flavor-diagonal versus ν_τ-specific cases, with flavor-diagonal providing a slightly stronger bound.
While different self-interaction matrices g_αβ lead to different resonance structure in the incident flux, the significant amount of energy smearing in the muon track energy pdf renders this information inaccessible, so the pdfs – and therefore results of the statistical analysis – are very similar. The similar flavor-universal and ν_τ-specific results have different interpretations in relation to existing bounds. Flavor-universal self-interactions are both more accessible experimentally and more often examined in relation to results from IceCube and other experiments. For example, lab bounds on flavor-universal self-interactions include kaon and pion decay constraints on electron-flavor self-interactions, while bounds on the ν_τ-only case from Higgs and Z decays are less restrictive <cit.>. As a result, while our flavor-universal results are not as strong as existing bounds, our ν_τ-specific results are much stronger than lab-based, and comparable (especially at lower energies) to those derived from the IceCube diffuse flux, as seen in fig:constraints. The constraints plotted in fig:constraints all have differences in assumptions and methodology, and a joint analysis of the IceCube data could be valuable. In sec:intro, we noted that the high-energy starting event (HESE) sample has fewer events and more uncertainty in neutrino direction, but less uncertainty in the incident neutrino energy. On balance, the comparison fig:constraint-case1 demonstrates that the present point-source data from NGC 1068 does not outweigh the sensitivity of the HESE data. However, the analysis leading to the discovery of NGC 1068 as a source of high-energy neutrinos also tested a number of other active galaxies and blazars; while none yet reach the same discovery threshold, a few had suggestive excesses of ∼ 3.5 σ. With more years of data taking and greater sensitivity of future upgrades, it is possible that a larger point source dataset could lead to significantly tighter constraints. As described in sec:energy-pdf, modeling of IceCube's energy pdf is necessary for this analysis, but challenging in general. We are fortunate here that BBN constraints generally restrict our new physics effects to the easier-to-model high energy end of the spectrum, but future analyses for which new physics could affect the low-energy end of the spectrum would have to improve upon this. Furthermore, the assumption shared by our analysis and IceCube of a strict power-law spectrum for neutrinos from AGN emission is unlikely to be a precise representation of the physics across 4 decades of energy. In fact, for larger mediator masses the energy pdfs in sec:energy-pdf look qualitatively similar to what one might expect if the true spectrum is a broken power law, and the moderate preference for νSI along a diagonal curve in the m_ϕ-g plane (fig:m-g-scan-2) could be the result of such a degeneracy. With more data from neutrino and other observatories, future improvements to modeling of AGN neutrino spectra <cit.> could in turn lead to different constraints on neutrino self-interactions. While this work was in preparation, another paper appeared using different methodology to address the question of NGC 1068 bounds on neutrino self-interactions <cit.>. In contrast with our unbinned likelihood analysis of the data, their work compares calculations of binned event rates in the presence or absence νSI (not restricted to resonant s-channel interactions), based on IceCube's effective area but not the NGC 1068 data. 
Within the parameter space region where our analyses overlap, Ref. <cit.> finds a constraint curve for flavor-universal self-interactions that is qualitatively similar in appearance to fig:m-g-scan-1 but with a weaker bound. I am very grateful to William Luszczak, Peter Denton and Tristan Smith for useful conversations. This work used data and pdfs, as well as code for interfacing with these, from the IceCube NGC 1068 data release <cit.>. § ENERGY PDF Here we provide further details regarding our modeling of the energy probability density function (pdf) – f_ energy referenced in eq:signal-pdf – used in our statistical analysis. For convenience, we work in energy parameters ϵ≡log_10(E/ GeV), where the incident neutrino has ϵ_ν = log(E_ν) and the secondary muon has true energy ϵ_μ = log(E_μ) and reconstructed (observed) energy ϵ̂_μ = log(Ê_μ). IceCube provides effective areas as well as the overall energy pdf in the absence of neutrino self-interactions, f(ϵ̂_μ|γ), which they have obtained via Monte Carlo simulation. These simulations include many effects which are impractical to include in a phenomenological analysis such as ours, for instance, detailed modeling of photodetector response and the propagation of Cherenkov light in Antarctic ice. However, by incorporating the dominant physical processes of muon energy loss and energy reconstruction uncertainty, the energy pdf can be recreated sufficiently well, as will be shown below. Formally, the energy pdf can be expressed as (see e.g. <cit.>) f(ϵ̂_μ|γ,g,m_ϕ) = N^-1∫ dϵ_ν P_ prop.(ϵ̂_μ | ϵ_ν) P_ int.(ϵ_ν|γ,g,m_ϕ), where P_ int.(ϵ_ν | γ, g, m_ϕ) is the relative probability of a neutrino having energy ϵ_ν (among those which interact and also produce muons that reach the detector), P_ prop.(ϵ̂_μ | ϵ_ν) is the probability of obtaining reconstructed muon energy ϵ̂_μ given neutrino of energy ϵ_ν, and N is a normalization factor. For this analysis we will normalize the final pdf on the interval ϵ̂_μ∈ [2,6]; therefore in the following discussion we drop overall multiplicative constants that would ultimately be absorbed into the definition of N. We also note that in principle eq:epdf-formal should include directional dependence; since we are only considering one source candidate we suppress this notation for ease of reading. We can write P_ int. as P_ int.(ϵ_ν|γ,g,m_ϕ) ∝ 10^ϵ_νΦ(ϵ_ν | γ, g, m_ϕ) A_ eff.(ϵ_ν), where A_ eff. is the detector effective area for muon neutrino charged-current interactions along the incident direction. We include the Jacobian factor |dE_ν/dϵ_ν| = ln(10) 10^ϵ_ν→ 10^ϵ_ν to ensure that we obtain a pdf in ϵ_ν rather than E_ν. We now turn to P_ prop.(ϵ̂_μ | ϵ_ν), the probability that a neutrino of energy ϵ_ν (among those which do interact and produce a muon that passes through the instrumented volume) leads to a muon whose energy is reconstructed to be ϵ̂_μ. We use tabulated IceCube values for this function (see Fig. 4 of <cit.> and tabulated data in the associated data release); these represent the result of extensive Monte Carlo simulation and are provided in half-decade energy bins. We use a spline interpolation, taking the given values at the bin center, to evaluate P_ prop. in eq:epdf-formal. fig:no-nusi-epdfs shows the results of our modeling, in comparison to the IceCube pdfs, for two choices of spectral index (γ = 2.5 and 3.5). We also show an example of the effect of νSI, for the choice g = 0.1, m_ϕ = 3 MeV. 
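The discretized version of this construction is a short convolution; the sketch below (ours) uses a placeholder effective area and a placeholder Gaussian smearing kernel purely to show the structure, whereas the analysis itself uses IceCube's tabulated A_eff and P_prop.

import numpy as np

eps_nu = np.linspace(2.0, 7.0, 501)          # log10(E_nu / GeV)
eps_mu = np.linspace(2.0, 6.0, 401)          # log10(reconstructed muon energy / GeV)
d_nu, d_mu = eps_nu[1] - eps_nu[0], eps_mu[1] - eps_mu[0]

gamma = 3.2
flux = (10.0 ** eps_nu) ** (-gamma)          # power-law flux (normalization irrelevant here)
A_eff = (10.0 ** eps_nu) ** 0.6              # placeholder effective area; the real one is tabulated
P_int = 10.0 ** eps_nu * flux * A_eff        # relative probability of an interacting nu at eps_nu

def P_prop(eps_mu_hat, eps_nu):              # placeholder kernel standing in for the tabulated one:
    mu, s = eps_nu - 0.5, 0.6                # muon biased low by half a decade, smeared by 0.6 decades
    return np.exp(-0.5 * ((eps_mu_hat - mu) / s) ** 2)

f = np.array([np.sum(P_prop(e, eps_nu) * P_int) * d_nu for e in eps_mu])
f /= np.sum(f) * d_mu                        # normalize on the [2, 6] analysis range, as in the text
print("normalization check:", round(float(np.sum(f) * d_mu), 6))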
Note that the distinct resonant dips in the flux due to different neutrino mass states are smoothed together by the uncertainty in incident neutrino energy. While the modeling is less precise at low energies, BBN bounds on mediator masses restrict effects of νSI to higher energies, where the slope of the pdf matches very well. We note that the normalization factor is influenced by the modeling of low-energy physics, but does not affect the final version of the pdf that we use, as discussed below. Finally, we use these modeled results to obtain the energy pdf used in our analysis in the following way. For a given choice of parameters g, m_ϕ, at each energy we find the ratio R(ϵ̂_μ | γ, g, m_ϕ) ≡ P(ϵ̂_μ | γ, g, m_ϕ) / P(ϵ̂_μ | γ, 0, 0), then multiply this ratio by the original IceCube pdf: f_ energy(ϵ̂_μ | γ, g, m_ϕ) = R(ϵ̂_μ | γ, g, m_ϕ) f_ I.C.(γ). Examples of the resulting energy pdfs are shown in fig:full-epdf-examples. While our modeling ensures that we correctly account for the effect of detector physics on the resonant dips in the spectrum, this ratio ensures that in the limit g→ 0 we regain the exact IceCube pdfs and therefore the same best-fit parameters in the absence of νSI. This ratio method also ensures that the normalization factor for the modeled pdf, which is influenced by the less-precise low-energy modeling, does not ultimately affect the outcome of the analysis. Overall, the loss of energy information due to propagation and reconstruction affects the strength of conclusions one can draw about self-interactions. Our modeling of the energy pdf therefore builds into the analysis an accounting of this effect. Furthermore, since not all events contributed equally to the signal in IceCube's original result it follows that the sensitivity to neutrino self-interactions is uneven across the 10^2 to 10^5.2 GeV energy range encompassing the observed muon energies, another effect that our methodology accounts for. 99 Kolb:1987qy E. W. Kolb and M. S. Turner, “Supernova SN 1987a and the Secret Interactions of Neutrinos,” Phys. Rev. D 36, 2895 (1987) doi:10.1103/PhysRevD.36.2895 Shoemaker:2015qul I. M. Shoemaker and K. Murase, “Probing BSM Neutrino Physics with Flavor and Spectral Distortions: Prospects for Future High-Energy Neutrino Telescopes,” Phys. Rev. D 93, no.8, 085004 (2016) doi:10.1103/PhysRevD.93.085004 [arXiv:1512.07228 [astro-ph.HE]]. Creque-Sarbinowski:2020qhz C. Creque-Sarbinowski, J. Hyde and M. Kamionkowski, “Resonant neutrino self-interactions,” Phys. Rev. D 103, no.2, 023527 (2021) doi:10.1103/PhysRevD.103.023527 [arXiv:2005.05332 [hep-ph]]. Esteban:2021tub I. Esteban, S. Pandey, V. Brdar and J. F. Beacom, “Probing Secret Interactions of Astrophysical Neutrinos in the High-Statistics Era,” [arXiv:2107.13568 [hep-ph]]. Ng:2014pca K. C. Y. Ng and J. F. Beacom, “Cosmic neutrino cascades from secret neutrino interactions,” Phys. Rev. D 90, no.6, 065035 (2014) [erratum: Phys. Rev. D 90, no.8, 089904 (2014)] doi:10.1103/PhysRevD.90.065035 [arXiv:1404.2288 [astro-ph.HE]]. Barenboim:2019tux G. Barenboim, P. B. Denton and I. M. Oldengott, “Constraints on inflation with an extended neutrino sector,” Phys. Rev. D 99, no.8, 083515 (2019) doi:10.1103/PhysRevD.99.083515 [arXiv:1903.02036 [astro-ph.CO]]. Blinov:2019gcj N. Blinov, K. J. Kelly, G. Z. Krnjaic and S. D. McDermott, “Constraining the Self-Interacting Neutrino Interpretation of the Hubble Tension,” Phys. Rev. Lett. 123, no.19, 191102 (2019) doi:10.1103/PhysRevLett.123.191102 [arXiv:1905.02727 [astro-ph.CO]]. 
Mazumdar:2020ibx A. Mazumdar, S. Mohanty and P. Parashari, “Flavour specific neutrino self-interaction: H _0 tension and IceCube,” JCAP 10, 011 (2022) doi:10.1088/1475-7516/2022/10/011 [arXiv:2011.13685 [hep-ph]]. Huang:2017egl G. y. Huang, T. Ohlsson and S. Zhou, “Observational Constraints on Secret Neutrino Interactions from Big Bang Nucleosynthesis,” Phys. Rev. D 97, no.7, 075009 (2018) doi:10.1103/PhysRevD.97.075009 [arXiv:1712.04792 [hep-ph]]. Berryman:2022hds J. M. Berryman, N. Blinov, V. Brdar, T. Brinckmann, M. Bustamante, F. Y. Cyr-Racine, A. Das, A. de Gouvêa, P. B. Denton and P. S. B. Dev, et al. “Neutrino Self-Interactions: A White Paper,” [arXiv:2203.01955 [hep-ph]]. Berryman:2018ogk J. M. Berryman, A. De Gouvêa, K. J. Kelly and Y. Zhang, “Lepton-Number-Charged Scalars and Neutrino Beamstrahlung,” Phys. Rev. D 97, no.7, 075030 (2018) doi:10.1103/PhysRevD.97.075030 [arXiv:1802.00009 [hep-ph]]. IceCube:2018dnn M. G. Aartsen et al. [IceCube, Fermi-LAT, MAGIC, AGILE, ASAS-SN, HAWC, H.E.S.S., INTEGRAL, Kanata, Kiso, Kapteyn, Liverpool Telescope, Subaru, Swift NuSTAR, VERITAS and VLA/17B-403], “Multimessenger observations of a flaring blazar coincident with high-energy neutrino IceCube-170922A,” Science 361, no.6398, eaat1378 (2018) doi:10.1126/science.aat1378 [arXiv:1807.08816 [astro-ph.HE]]. IceCube:2018cha M. G. Aartsen et al. [IceCube], “Neutrino emission from the direction of the blazar TXS 0506+056 prior to the IceCube-170922A alert,” Science 361, no.6398, 147-151 (2018) doi:10.1126/science.aat2890 [arXiv:1807.08794 [astro-ph.HE]]. Kelly:2018tyg K. J. Kelly and P. A. N. Machado, “Multimessenger Astronomy and New Neutrino Physics,” JCAP 10, 048 (2018) doi:10.1088/1475-7516/2018/10/048 [arXiv:1808.02889 [hep-ph]]. Ioka:2014kca K. Ioka and K. Murase, “IceCube PeV–EeV neutrinos and secret interactions of neutrinos,” PTEP 2014, no.6, 061E01 (2014) doi:10.1093/ptep/ptu090 [arXiv:1404.2279 [astro-ph.HE]]. Bustamante:2020mep M. Bustamante, C. Rosenstrøm, S. Shalgar and I. Tamborra, “Bounds on secret neutrino interactions from high-energy astrophysical neutrinos,” Phys. Rev. D 101, no.12, 123024 (2020) doi:10.1103/PhysRevD.101.123024 [arXiv:2001.04994 [astro-ph.HE]]. IceCube:2022der R. Abbasi et al. [IceCube], “Evidence for neutrino emission from the nearby active galaxy NGC 1068,” Science 378, no.6619, 538-543 (2022) doi:10.1126/science.abg3395 [arXiv:2211.09972 [astro-ph.HE]]. IceCube:ngc1068-data IceCube Collaboration, “Evidence for neutrino emission from the nearby active galaxy NGC 1068. Dataset.” (2022) doi:10.1126/10.21234/03fq-rh11 Cline:2022qld J. M. Cline, S. Gao, F. Guo, Z. Lin, S. Liu, M. Puel, P. Todd and T. Xiao, “Blazar Constraints on Neutrino-Dark Matter Scattering,” Phys. Rev. Lett. 130, no.9, 091402 (2023) doi:10.1103/PhysRevLett.130.091402 [arXiv:2209.02713 [hep-ph]]. Ferrer:2022kei F. Ferrer, G. Herrera and A. Ibarra, “New constraints on the dark matter-neutrino and dark matter-photon scattering cross sections from TXS 0506+056,” JCAP 05, 057 (2023) doi:10.1088/1475-7516/2023/05/057 [arXiv:2209.06339 [hep-ph]]. Cline:2023tkp J. M. Cline and M. Puel, “NGC 1068 constraints on neutrino-dark matter scattering,” [arXiv:2301.08756 [hep-ph]]. Stecker:1991vm F. W. Stecker, C. Done, M. H. Salamon and P. Sommers, “High-energy neutrinos from active galactic nuclei,” Phys. Rev. Lett. 66, 2697-2700 (1991) [erratum: Phys. Rev. Lett. 69, 2738 (1992)] doi:10.1103/PhysRevLett.66.2697 Esteban:2020cvm I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. 
Zhou, “The fate of hints: updated global analysis of three-flavor neutrino oscillations,” JHEP 09, 178 (2020) doi:10.1007/JHEP09(2020)178 [arXiv:2007.14792 [hep-ph]]. NuFIT 5.2 (2022), www.nu-fit.org. Planck:2018vyg N. Aghanim et al. [Planck], “Planck 2018 results. VI. Cosmological parameters,” Astron. Astrophys. 641, A6 (2020) [erratum: Astron. Astrophys. 652, C4 (2021)] doi:10.1051/0004-6361/201833910 [arXiv:1807.06209 [astro-ph.CO]]. Braun:2008bg J. Braun, J. Dumm, F. De Palma, C. Finley, A. Karle and T. Montaruli, “Methods for point source analysis in high energy neutrino telescopes,” Astropart. Phys. 29, 299-305 (2008) doi:10.1016/j.astropartphys.2008.02.007 [arXiv:0801.1604 [astro-ph]]. Murase:2019xqi K. Murase and I. M. Shoemaker, “Neutrino Echoes from Multimessenger Transient Sources,” Phys. Rev. Lett. 123, no.24, 241102 (2019) doi:10.1103/PhysRevLett.123.241102 [arXiv:1903.08607 [hep-ph]]. Cowan:1998ji G. Cowan, “Statistical data analysis,” Oxford University Press (1998) Murase:2022feu K. Murase and F. W. Stecker, “High-Energy Neutrinos from Active Galactic Nuclei,” [arXiv:2202.03381 [astro-ph.HE]]. Creque-Sarbinowski:2021nil C. Creque-Sarbinowski, M. Kamionkowski and B. Zhou, “Seeking neutrino emission from AGN through temporal and spatial cross-correlation,” Phys. Rev. D 105, no.12, 123035 (2022) doi:10.1103/PhysRevD.105.123035 [arXiv:2111.08012 [astro-ph.HE]]. Doring:2023vmk C. Döring and S. Vogl, “Astrophysical neutrino point sources as a probe of new physics,” [arXiv:2304.08533 [hep-ph]]. Braun:thesis J. Braun, “A Maximum-Likelihood Search for Neutrino Point Sources with the AMANDA-II Detector,” PhD Thesis (2009) IceCube:2021xar R. Abbasi et al. [IceCube], “IceCube Data for Neutrino Point-Source Searches Years 2008-2018,” doi:10.21234/CPKQ-K003 [arXiv:2101.09836 [astro-ph.HE]].
http://arxiv.org/abs/2307.02538v1
20230705180002
Searches for dark matter decay with ultra-high-energy neutrinos endure backgrounds
[ "Damiano F. G. Fiorillo", "Victor Valera", "Mauricio Bustamante", "Walter Winter" ]
astro-ph.HE
[ "astro-ph.HE", "hep-ph" ]
damiano.fiorillo@nbi.ku.dk Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark vvalera@nbi.ku.dk Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark mbustamante@nbi.ku.dk Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark walter.winter@desy.de Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany Next-generation ultra-high-energy (UHE) neutrino telescopes, presently under planning, will have the potential to probe the decay of heavy dark matter (DM) into UHE neutrinos, with energies in excess of 10^7 GeV. Yet, this potential may be deteriorated by the presence of an unknown background of UHE neutrinos, cosmogenic or from astrophysical sources, not of DM origin and seemingly large enough to obscure the DM signature. We show that leveraging the angular and energy distributions of detected events safeguards future searches for DM decay against such backgrounds. We focus on the radio-detection of UHE neutrinos in the planned IceCube-Gen2 neutrino telescope, which we model in state-of-the-art detail. We report promising prospects for the discovery potential of DM decay into UHE neutrinos, the measurement of DM mass and lifetime, and limits on the DM lifetime, despite the presence of a large background, without prior knowledge of its size and shape. Searches for dark matter decay with ultra-high-energy neutrinos endure backgrounds Walter Winter 0000-0001-7062-0289 August 1, 2023 ================================================================================== § INTRODUCTION About 85% of the matter in the Universe is dark, not interacting electromagnetically nor strongly. Evidence for dark matter (DM) comes from velocity dispersion and rotation curves in galaxies and galaxy clusters <cit.>, gravitational lensing measurements in galaxy clusters and collisions of galaxy clusters <cit.>, the cosmic microwave background (CMB) anisotropy, and the large-scale structure of the Universe <cit.>. Because this evidence relates to the gravitational effect that DM has on the visible Universe, it provides little guidance to understand other possible interactions between DM and Standard-Model particles that could reveal its nature. The absence of such guidance has spurred a broad program to understand the nature of DM, from theory and experiment. From theory, diverse candidates have been proposed as DM constituents; see, ,  <cit.> for a historical review. These include novel particles, such as weakly interacting massive particles <cit.>, axions <cit.>, Majorons <cit.>, and sterile neutrinos <cit.>, and non-particle candidates, such as primordial black holes <cit.>. From experiment, searches for these candidates follow four complementary strategies: collider searches, which attempt to produce DM in high-energy particle collisions; direct DM searches, which look for Galactic DM scattering on dense detector targets; astrophysical searches, which look for the impact that DM would have on cosmic particles; and indirect DM searches, which look for products of DM self-annihilation or decay. We focus on indirect searches for the decay of heavy DM particles, with masses in excess of 10 PeV, into neutrinos. Our choice is motivated by upcoming experimental capabilities (more on this later) that, for the first time, could allow us to probe DM decay using ultra-high-energy (UHE) neutrinos, with energies in excess of 10 PeV. 
(We do not consider DM self-annihilation because, for heavy DM, its cross section is strongly constrained by unitarity bounds <cit.>.) Already in the last decade, the breadth of DM indirect searches widened after the discovery by the IceCube neutrino telescope of high-energy neutrinos of cosmic origin, with energies between 10 TeV and 10 PeV <cit.>. In fact, initially the discovery led to speculation that heavy DM decaying to neutrinos could explain their flux in the 10–100 TeV range <cit.>. Nowadays, more conventional astrophysical explanations are favored <cit.>, but IceCube observations still set competitive bounds on the DM lifetime and self-annihilation cross section for DM masses below 10 PeV <cit.>. These bounds are complementary to the ones obtained from gamma-ray observations in a similar mass range; see, e.g., <cit.>. In the next decade, a host of new neutrino telescopes, presently in different stages of planning, design, and prototyping, will target the long-sought discovery of UHE neutrinos, between 100 PeV and 10 EeV, that were first predicted in 1969 <cit.>. They include AugerPrime <cit.>, BEACON <cit.>, EUSO-SPB2 <cit.>, GCOS <cit.>, GRAND <cit.>, POEMMA <cit.>, PUEO <cit.>, RNO-G <cit.>, TAROGE <cit.>, and the radio array of IceCube-Gen2, the envisioned upgrade of IceCube <cit.>. UHE neutrinos will bring new insight into astrophysics <cit.> and fundamental physics <cit.>. In particular, they will allow us to test the decay of heavier DM particles, with masses from 10 PeV to 100 EeV; see, e.g., <cit.>. However, the capacity of UHE neutrino telescopes to probe DM decay critically depends on an unknown quantity: the diffuse flux of UHE neutrinos that do not originate from DM decay, and that acts as a background to DM searches. If this background is large, it could obscure more subtle signatures of neutrinos from DM decay. These background neutrinos—hereafter dubbed "non-DM neutrinos"—are expected from the interaction of ultra-high-energy cosmic rays (UHECRs) with ambient matter or radiation inside the extragalactic astrophysical sources where they are accelerated—i.e., astrophysical neutrinos—or with cosmic photon backgrounds during their propagation to Earth—i.e., cosmogenic neutrinos <cit.>. We expand on them later (Sec. <ref>). The situation is worsened by the large variety, in size and shape, of current theoretical predictions of UHE astrophysical and cosmogenic neutrino fluxes; see, e.g., Fig. 2 in <cit.>. Without a firm estimate of the non-DM UHE neutrino background, it would seem that the mere discovery of UHE neutrinos may be insufficient to establish whether they originate from DM decay or not. Were the possibility of a background of non-DM UHE neutrinos ignored, the evidence for or against DM decay could be interpreted erroneously. We show that these difficulties can be overcome by leveraging known differences between the distributions in energy and arrival directions of UHE neutrinos from DM decay and non-DM UHE neutrinos. Regarding energy, neutrinos from DM decay are produced predominantly at an energy of E_ν∼ m_DM/2, where m_DM is the mass of the DM particle, whereas the spectrum of non-DM neutrinos is expected to be relatively extended in energy. Regarding direction, the flux of neutrinos from DM decay should peak towards the Galactic Center (GC), where DM is concentrated, whereas the diffuse flux of non-DM neutrinos is expected to be isotropic.
The above features are essential and generic to UHE neutrinos of DM and non-DM origin alike, and are broadly present in models of their fluxes. By relying on them, our methods apply broadly, regardless of the specific nature of the DM particle, of the relative size of the fluxes of neutrinos from DM decay and of non-DM neutrinos, and of the specific shape of the energy spectrum of non-DM neutrinos. Our strategy is similar to studies of the decay of TeV–PeV DM that use IceCube data (see, ,  <cit.>), but has one important advantage. In the TeV–PeV range, it is possible that non-DM astrophysical processes produce an excess of neutrinos towards the GC that could obfuscate a signal of neutrinos from DM decay <cit.>. In contrast, in the UHE range, we expect no astrophysical process to produce neutrinos towards the GC, making the search for DM decay cleaner. We gear our forecasts to the radio-detection of UHE neutrinos in IceCube-Gen2, since it is among the largest upcoming neutrino telescopes under consideration and is presently in an advanced stage of planning. We model neutrino detection via the same state-of-the-art simulations used in  <cit.>, which account for UHE neutrino propagation inside Earth, detector geometry, energy- and direction-dependent detector response, and energy and angular detector resolution. Figure <ref> summarizes our main findings; we defer details to later. They are two-fold: on the discovery of DM decay and on lower limits on the DM lifetime. For the first time, we report robust discovery prospects for UHE neutrinos from DM decay, , the values of DM mass and lifetime that would allow us not only to detect UHE neutrinos, but also to claim that they originate from DM decay, at least partially. In summary, the presence of about 30 neutrinos from DM decay in a 10-year event sample, would allow us to claim their DM origin. Separately, we find that while the background of non-DM UHE neutrinos weakens the lower limits on DM lifetime, an energy and angular analysis mitigates this weakening, keeping the limits competitive with present-day ones, at worst. The overarching message of our results is that, despite our ignorance of the background of astrophysical and cosmogenic UHE neutrinos, the discovery of UHE neutrinos will constitute a sensitive probe of heavy DM decay. We present our results and methods to inform future forecasts and searches. This paper is structured as follows. In Sec. <ref> we discuss the main features of DM and non-DM neutrino production, and highlight the models that we choose as benchmark for this work. In Sec. <ref> we describe how we compute UHE neutrino-induced event rates at IceCube-Gen2. In Sec. <ref> we obtain the prospects of IceCube-Gen2 for the discovery of DM neutrinos. In Sec. <ref> we forecast bounds on the DM lifetime if no evidence for DM decay is found. In Sec. <ref>, we conclude. § FLUXES OF UHE NEUTRINOS The diffuse flux of UHE astrophysical and cosmogenic neutrinos, itself a target of discovery <cit.>, could be a background to searches for the diffuse flux of UHE neutrinos from DM decay. Fortunately, these fluxes differ in their distributions in energy and direction. In energy, the non-DM background neutrino flux is spread out, while that of neutrinos from DM decay is more concentrated. In direction, the non-DM background neutrino flux is isotropic, while that of neutrinos from DM decay peaks towards the Galactic Center. We review these features below; later (Secs. <ref> and <ref>), we use them to distinguish between the fluxes. 
§.§ UHE astrophysical and cosmogenic neutrinos UHE neutrinos are expected from the interaction of UHECR protons, with energies E_p ≳ 100 PeV, with ambient matter <cit.> or radiation <cit.>, inside the extragalactic astrophysical sources where they are accelerated—, astrophysical neutrinos—or with cosmic photon backgrounds during their propagation in extragalactic space—, cosmogenic neutrinos. These interactions produce high-energy pions, and other intermediate particles, that promptly decay into high-energy neutrinos via π^- →μ^- + ν̅_μ, followed by μ^- → e^- + ν̅_e + ν_μ, and their charge-conjugated processes, where each neutrino has energy E_ν≃ E_p/20. We focus on the diffuse UHE neutrino flux, , the sum of the UHE neutrino emission—astrophysical or cosmogenic—from all sources, across all redshifts. The cosmogenic neutrino flux is isotropic, since extragalactic magnetic fields scramble the trajectories of neutrino-producing UHECRs; see sky_map_flux. The angular distribution of the astrophysical neutrino flux reflects that of the neutrino sources in the sky. Because UHE neutrino sources are in all likelihood extragalactic, we assume that they are isotropically distributed, and so the diffuse neutrino flux from them is isotropic, too. The discovery of the diffuse flux of astrophysical and cosmogenic neutrinos is one of the main goals of the next generation of neutrino telescopes <cit.>. (The associated discovery of point sources of UHE neutrinos is explored in  <cit.>.) Yet, in our work, they represent a background to the discovery of neutrinos from DM decay. Cosmogenic neutrinos were first proposed in the late 1960s <cit.>, as a natural consequence of the interaction of UHECRs on the CMB <cit.>. They constitute a nearly guaranteed contribution in the UHE neutrino range, since their production only relies on the existence of UHECRs and of the CMB (and also of the extragalactic background light). The flux of cosmogenic neutrinos depends on the properties of UHECRs—their spectrum, maximum energies, and mass composition—and of their sources—their distribution in redshift. Because these properties are known uncertainly <cit.>, the flux predictions vary widely, in size and shape; see, ,  <cit.>. Astrophysical UHE neutrinos are produced inside astrophysical sources. In this case, the target photons need not be the CMB, but low-energy photons present in the environments in which UHECRs are injected. Flux predictions are made more complex because they depend also on the physical conditions inside the sources, including the shape of the photon spectra, the matter density, and the geometry of the neutrino production region. Numerous models have been proposed for various candidate source classes, including active galactic nuclei (AGN) <cit.>, gamma-ray bursts (GRBs) <cit.>, newborn pulsars <cit.>, and tidal disruption events (TDEs) <cit.>. In some models, the diffuse astrophysical neutrino flux can be comparable or larger than the cosmogenic neutrino flux; ,  <cit.>. Thus, there is a large number of competing theoretical predictions of the cosmogenic and astrophysical UHE neutrino flux; see Fig. 2 in  <cit.> and Fig. 6 in  <cit.> for an overview. The range of predicted UHE neutrino fluxes spans several orders of magnitude. The highest flux predictions <cit.> would yield about 30 events per year in the radio array of IceCube-Gen2, making them easily discoverable; the lowest <cit.>, less than one event in 10 years, making them undiscoverable (see Fig. 1 and Table I in  <cit.> for details). 
Most flux predictions share some common features; , they can be roughly described as a power-law flux—from neutrino production via proton-matter interactions—a bump-like flux—from neutrino production via proton-photon interactions—or a combination of both. The resemblance between different flux predictions is largely superficial, since they differ in a number of important assumptions, , the identity of the neutrino sources, the physical conditions in the region of neutrino production, the neutrino production mechanism, and the UHECR observations on which the neutrino predictions are based. Regardless, in our forecasts below, we pivot on these superficial similarities and choose a benchmark background flux of UHE neutrinos that is representative of the range of theoretical predictions. Figure <ref> shows the two illustrative flux predictions that we select as benchmark “non-DM” UHE neutrino background for our analysis. They represent a large background and an intermediate one; later, we complement them with a null-background scenario. We base both on the cosmogenic neutrino flux predicted by Bergman & van Vliet <cit.> by fitting the simulated UHECR energy spectrum and mass composition at Earth to recent data from the Telescope Array (TA) <cit.>. (This is flux model 4 in  <cit.>.) Because TA data favors a light UHECR mass composition and high maximum rigidity, the resulting cosmogenic neutrino flux is large: diffuse_fluxes shows that it saturates the present-day upper limits from IceCube <cit.> and the Pierre Auger Observatory <cit.>. Large non-DM background This is the full cosmogenic neutrino flux predicted by Bergman & van Vliet <cit.>, which yields about 33 events per year in the radio array of IceCube-Gen2; see Figs. 3 and 4 and Table I in  <cit.>, and diff_event_rate below. Because this is as large a flux of UHE neutrinos as is allowed by present-day upper limits (diffuse_fluxes), it is about the largest background of non-DM UHE neutrinos that we could face in a search for DM decay. Intermediate non-DM background This is the Bergman & van Vliet cosmogenic flux scaled down to 10% of its size, which yields about 3 events per year in the radio array of IceCube-Gen2. Null background The ideal scenario for the discovery of UHE neutrinos from DM is the absence of non-DM UHE neutrinos. This has been the scenario adopted in previous forecasts of DM decay into UHE neutrinos <cit.>. We maintain it here as a baseline against which we compare our forecasts including a non-DM background. Figure <ref> shows the all-flavor background flux, but when computing event rates (Sec. <ref>) we sum the individual contributions of the fluxes of ν_e, ν̅_e, ν_μ, ν̅_μ, ν_τ, and ν̅_τ for this model, as shown in Fig. 6 in  <cit.>. For the purpose of discovering neutrinos from DM decay, what matters is not whether the non-DM UHE neutrino background is cosmogenic or astrophysical, but rather that its angular and energy distributions are different from those of the flux of neutrinos from DM decay (Sec. <ref>). We point out these differences explicitly in Sec. <ref>. Admittedly, in choosing a benchmark non-DM UHE neutrino background, we make a specific choice of the shape of its energy spectrum. This choice is necessary to be able to generate simulated samples of detected events (Sec. <ref>). However, when analyzing these samples (Secs. <ref> and <ref>), we do not assume knowledge of the size or shape of the non-DM background, but instead let them vary, just as an analysis of real detected data would. 
§.§ UHE neutrinos from dark matter decay The decay of a heavy DM particle, χ, with mass m_ DM≳ 10^7 GeV, into Standard Model particles, leads to the production of high-energy neutrinos. The yield of neutrinos depends on the channels by which the DM particle decays. If the DM particle decays primarily into neutrinos, , χ→ν̅ν, the resulting neutrino flux has a primary contribution that is monoenergetic at E_ν = m_DM/2. We neglect the spread of the energy spectrum due to the thermal velocity of DM, since it is small <cit.>. A secondary contribution comes from electroweak corrections, generated from the emission, by the decay products, of off-shell W and Z bosons that promptly decay into neutrinos; this contribution is present even if DM does not primarily decay to neutrinos. In our analysis, we consider the neutrino flux made up of both primary and secondary contributions. The electroweak corrections unavoidably give rise to gamma rays, electrons, positrons, protons, and anti-protons that are also amenable to indirect detection. Upper limits on their flux indirectly constrain the associated neutrino flux. Notably, for most decay channels (, for hadronic decay channels such as χ→bb), the present-day upper limits on the gamma-ray flux are so strong that the projected limits on the associated UHE neutrino flux are comparable or weaker; see, ,  <cit.> (see also  <cit.> for a comparison of the bounds from gamma-ray and cosmic-ray measurements). However, for leptonic (, χ→ττ) and neutrinophilic (, χ→νν) decay channels, in the DM mass range m_ DM = 10^7–10^10 GeV, the bounds from UHE neutrinos may be comparable or stronger than the present-day bounds from gamma rays. For this reason, we focus exclusively on the neutrinophilic decay channel, χ→νν. We assume equal branching ratios for the decay into each of the three flavors, χ→ν_eν_e, χ→ν_μν_μ, and χ→ν_τν_τ. We treat the flux of ν_e, ν̅_e, ν_μ, ν̅_μ, ν_τ, and ν̅_τ separately; later, we propagate each through the Earth (Sec. <ref>) and compute its contribution to the detected event rate (Sec. <ref>). We compute the neutrino spectra numerically, using the public code  <cit.>, which evolves the particle showers initiated by DM decay, including in detail electroweak corrections, and yields the final-state products of shower evolution. The neutrino spectra from DM decay are notoriously hard to compute precisely, since the emission of soft collinear W^± bosons leads to logarithmically enhanced terms ∝log^2(m_DM/m_W), where m_W is the mass of the W boson, that need to be resummed <cit.>; accounts for this. Thus, from we obtain dN_ν_α/dE_ν and dN_ν̅_α/dE_ν (α = e,μ,τ), the number of ν_α and ν̅_α emitted in a single DM decay per unit energy. These spectra include also the primary monoenergetic contributions at E_ν = m_DM/2 . The diffuse flux of neutrinos that reach the Earth is due to DM decays that occur inside the Galaxy (Gal) and in extragalactic space (EG), , dΦ_ν_α/dE_ν dΩ_ν = dΦ^Gal_ν_α/dE_ν dΩ_ν + dΦ^EG_ν_α/dE_ν dΩ_ν , where Ω_ν is the solid angle. To compute the Galactic contribution, we integrate the neutrino spectrum from a single DM decay over the spatial distribution of DM in the Milky Way. This makes the Galactic neutrino flux anisotropic, since it traces the density of Galactic DM. To compute the extragalactic contribution, we integrate the neutrino spectrum from a single DM decay over the cosmological distribution of DM. This makes the extragalactic neutrino flux isotropic. 
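To make the structure of this two-component calculation concrete before giving the explicit expressions, the short Python sketch below assembles the flux numerically: a line-of-sight integral over the Galactic DM density for the anisotropic Galactic piece, and a redshift integral for the isotropic extragalactic piece. It uses the NFW parametrization and cosmological parameters quoted below; the per-decay spectrum dN_dE_toy is only an illustrative stand-in for the tabulated electroweak-shower output (it omits the monochromatic line at E_ν = m_DM/2), and the chosen DM mass and lifetime are arbitrary benchmark values, not results of this work.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative benchmark values (assumptions of this sketch only)
M_DM, TAU_DM = 1.0e8, 1.0e29              # DM mass [GeV] and lifetime [s]
RHO_0, R_C, R_SUN = 0.33, 20.0, 8.5       # NFW normalization [GeV/cm^3], scale radius [kpc],
                                          # solar Galactocentric distance [kpc] (values quoted below)
KPC_TO_CM = 3.086e21
OMEGA_DM, RHO_CRIT = 0.265, 4.79e-6       # DM fraction, critical density [GeV/cm^3]
H0 = 1.08e-28 * 0.674                     # Hubble constant [1/cm]
OMEGA_L, OMEGA_M = 0.685, 0.315

def dN_dE_toy(E):
    """Toy per-decay spectrum dN/dE [1/GeV]; a stand-in for the tabulated
    electroweak-shower output, without the monochromatic line at m_DM/2."""
    x = 2.0 * E / M_DM
    return np.where((x > 0.0) & (x < 1.0), (2.0 / M_DM) * x**-0.3, 0.0)

def rho_nfw(r_kpc):
    """NFW density [GeV/cm^3] at Galactocentric radius r [kpc]."""
    x = r_kpc / R_C
    return RHO_0 / (x * (1.0 + x) ** 2)

def flux_galactic(E, b, l):
    """Anisotropic Galactic contribution dPhi/(dE dOmega) [1/(GeV cm^2 s sr)]."""
    def integrand(s):  # s in kpc, along the line of sight
        r = np.sqrt(s**2 + R_SUN**2 - 2.0 * s * R_SUN * np.cos(b) * np.cos(l))
        return rho_nfw(r)
    column, _ = quad(integrand, 0.0, 300.0)           # [GeV/cm^3 * kpc]
    return float(dN_dE_toy(E)) * column * KPC_TO_CM / (4.0 * np.pi * TAU_DM * M_DM)

def flux_extragalactic(E):
    """Isotropic extragalactic contribution dPhi/(dE dOmega) [1/(GeV cm^2 s sr)]."""
    def integrand(z):
        H = H0 * np.sqrt(OMEGA_L + OMEGA_M * (1.0 + z) ** 3)
        return float(dN_dE_toy(E * (1.0 + z))) / H    # spectrum evaluated at E(1+z)
    integral, _ = quad(integrand, 0.0, 10.0)
    return OMEGA_DM * RHO_CRIT * integral / (4.0 * np.pi * TAU_DM * M_DM)

E_nu = 1.0e7                                          # GeV
print(flux_galactic(E_nu, b=0.0, l=0.1))              # close to the GC
print(flux_galactic(E_nu, b=0.0, l=np.pi))            # anti-GC direction
print(flux_extragalactic(E_nu))
```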
The Galactic contribution of ν_α is dΦ^Gal_ν_α/dE_ν dΩ_ν = dN_ν_α/dE_ν∫_0^∞ρ_DM(s,b,l)/4πτ_DM m_DM ds , where s is the distance measured from the Earth, b and l are the Galactic latitude and longitude that parametrize the neutrino incoming direction, τ_DM is the DM lifetime, and ρ_DM is the density profile of DM in the Galaxy. Figure <ref> shows competing models of the Galactic DM density profile. The "cuspy" Navarro-Frenk-White (NFW) <cit.> profile—obtained from a numerical fit to N-body simulations of structure formation—and Einasto <cit.> profile—originally proposed to describe stellar systems and later extended to fit the DM halo—peak towards the GC. The "puffy" Burkert profile <cit.>—obtained by a fit to the DM distribution in dwarf galaxies—instead plateaus to a core towards the GC. We pick the NFW and Burkert profiles as representative of the two extremes of the "cusp vs. core" uncertainty in Galactic DM profiles. To produce our main results, in Figs. <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>, we adopt the NFW profile; in Figs. <ref>, <ref>, and <ref>, we contrast them against results obtained assuming the Burkert profile. For the NFW profile, we use <cit.> ρ_DM^NFW(s,b,l) = ρ_0/[(r(s,b,l)/r_c)(1+r(s,b,l)/r_c)^2] , where ρ_0 = 0.33 GeV cm^-3, r_c = 20 kpc, and r(s,b,l)=√(s^2+R_s^2-2 s R_s cos b cos l) is the Galactocentric radius, with R_s = 8.5 kpc the Galactocentric distance of the Sun. For the Burkert profile, we use ρ^Burkert_DM(s,b,l) = ρ_s/[(1+r(s,b,l)/r_s)(1+r(s,b,l)^2/r_s^2)] , with ρ_s=0.712 GeV cm^-3 and r_s=12.67 kpc <cit.>. The extragalactic contribution of ν_α is dΦ^EG_ν_α/dE_ν dΩ_ν = Ω_DMρ_c/4πτ_DMm_DM∫_0^∞ (dz/H(z)) dN_ν_α/dE_ν|_E_ν(1+z) , where z is the redshift, ρ_c = 4.79× 10^-6 GeV cm^-3 is the critical density of the Universe, Ω_DM=0.265 is the fraction of the energy density of the Universe in the form of DM, H(z) = H_0√(Ω_Λ+Ω_m (1+z)^3) is the Hubble parameter, H_0 = 1.08× 10^-28 h cm^-1 is the Hubble constant, with h=0.674, Ω_Λ = 0.685 is the vacuum energy density, and Ω_m = 0.315 is the matter energy density. The right-hand side of nu_flux_eg is evaluated at the energy E_ν (1+z) to compensate for the cosmological expansion. Figure <ref> shows the resulting diffuse energy spectrum of UHE neutrinos from DM decay, integrated over all sky directions, for two benchmark values of the DM mass, and separated into the Galactic and extragalactic components only for illustration. The main features of the energy spectrum are a spike of neutrinos close to the energy E_ν = m_DM/2, and a power-law tail at lower energies, from electroweak corrections and, in the case of the extragalactic component, from redshifting. The dominant component of the flux is the Galactic one, due to the nearby DM overdensity in the GC. However, at energies close to the spike, a pile-up of neutrinos from the direct decay χ→νν, redshifted to lower energies, causes the extragalactic contribution to dominate instead in a narrow energy range. Because we assume DM decay into neutrinos of all flavors with equal branching ratios (see above), the all-flavor flux in diffuse_fluxes is split evenly among the three flavors and among neutrinos and anti-neutrinos.

§.§ Non-DM neutrinos vs. neutrinos from DM decay

The essential differences between the background flux of UHE non-DM neutrinos and the flux of UHE neutrinos from DM decay are in their energy spectrum and in their angular distribution in the sky.
Energy spectrum Figure <ref> shows that the energy spectrum of our benchmark non-DM background flux—which is typical of many flux predictions—is more spread out around its maximum compared to the energy spectrum of neutrinos from DM decay, which peaks sharply at E_ν = m_DM/2. Yet, because of the spread of the latter towards lower energies due to redshifting (compounded, later, by the limited energy resolution of the detector), the differences in spectral shape are not as marked. While the lack of a bump-like feature in the observed spectrum of UHE neutrinos would disfavor DM decay, its observation could be attributed either to DM decay or to a non-DM background flux. Accordingly, the energy spectrum is not the driving factor to discover DM decay (Sec. <ref>), but supplements angular information to set constraints (Sec. <ref>). Angular distribution Figure <ref> shows the angular distribution of the total diffuse flux of UHE neutrinos, integrated over energy. This reveals the critical difference between non-DM and DM neutrinos: the flux of non-DM neutrinos is isotropic, while the flux of DM neutrinos peaks towards the GC, where, under the NFW profile, it is about a factor of 20 larger than in the rest of the sky. This contrast is the main driving factor to discover DM decay and place constraints on it. For the puffier Burkert profile, the contrast is milder, which weakens both prospects, as we show later. In either case, because the GC is in the Southern Hemisphere, neutrinos from DM decay coming from this direction are not attenuated by their passage through Earth before reaching IceCube-Gen2 (Sec. <ref>), making it particularly sensitive to this signal. Below (Sec. <ref>), we show that the above differences between the fluxes are mirrored, albeit imperfectly, by corresponding differences in the energy and angular distributions of detected events.

§ DETECTION OF UHE NEUTRINOS

To make realistic forecasts of probes of DM decay into UHE neutrinos, we compute in detail their propagation inside the Earth and their radio-detection in our detector of choice, the radio array of IceCube-Gen2.

§.§ Neutrino propagation inside the Earth

Upon reaching Earth, UHE neutrinos propagate from its surface, through its interior, to the detector, IceCube-Gen2, situated at the South Pole. While propagating, neutrinos interact with matter underground. Because the neutrino-nucleon cross section, σ_ν N, grows with neutrino energy (at ultra-high energies, roughly as σ_ν N∝ E_ν^0.363 <cit.>), these interactions appreciably attenuate the flux of neutrinos that reaches the detector. Roughly, neutrino interactions attenuate the flux via an exponential dampening factor e^-n σ_ν N L, where n is the number density of target nucleons and L is the distance traveled underground. Thus, the attenuation grows with neutrino energy and with distance traveled. At ultra-high energies, neutrinos interact with nucleons predominantly via deep inelastic scattering (DIS) <cit.>. In it, the incoming neutrino scatters off a parton—a quark or a gluon—of a nucleon at rest, N—a proton or a neutron (N = p, n). The interaction is neutral-current (NC) if mediated by a Z boson, i.e., ν_α + N →ν_α + X (α = e, μ, τ), where X represents final-state hadrons, or charged-current (CC) if mediated by a W boson, i.e., ν_α + N →α + X. The NC neutrino-nucleon cross section, σ_ν_α N^NC, is about 1/3 of the CC cross section, σ_ν_α N^CC. At these energies, the cross sections on protons and on neutrons are very similar, and the cross sections for different neutrino flavors are nearly equal.
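To gauge the size of this attenuation, the following minimal sketch evaluates the rough suppression factor e^-n σ_ν N L along a straight chord through a constant-density Earth, as a function of the zenith angle at the detector. The cross-section normalization, mean density, and detector depth used here are illustrative assumptions only; the actual calculation, described next, propagates the neutrinos through the layered density profile of the Earth with a dedicated code.

```python
import numpy as np

# Rough attenuation exp(-n sigma L) for a single-shell Earth of constant density.
# All numbers here are illustrative assumptions, not the values used in the analysis.
N_AVOGADRO = 6.022e23       # nucleons per gram (A ~ 1)
R_EARTH = 6371.0e5          # Earth radius [cm]
RHO_MEAN = 5.5              # mean density [g/cm^3], constant-density stand-in
DEPTH = 1.0e5               # detector depth below the surface [cm], illustrative

def sigma_nu_N(E_nu):
    """Toy neutrino-nucleon cross section [cm^2], scaling as E^0.363;
    the normalization at 10^9 GeV is an assumed round number."""
    return 1.0e-32 * (E_nu / 1.0e9) ** 0.363

def path_length(cos_theta_z):
    """Chord length [cm] from the Earth's surface to a detector at depth DEPTH,
    for a neutrino arriving with zenith angle theta_z (theta_z < 90 deg: downgoing)."""
    r_det = R_EARTH - DEPTH
    c = cos_theta_z
    return -r_det * c + np.sqrt(R_EARTH**2 - r_det**2 * (1.0 - c**2))

def attenuation(E_nu, cos_theta_z):
    """Approximate flux suppression exp(-n sigma L)."""
    n_nucleons = RHO_MEAN * N_AVOGADRO            # nucleons per cm^3
    return np.exp(-n_nucleons * sigma_nu_N(E_nu) * path_length(cos_theta_z))

for cz in (0.5, 0.0, -0.2, -1.0):                 # downgoing, horizontal, upgoing
    print(f"cos(theta_z) = {cz:+.1f}  ->  attenuation at 10^9 GeV = {attenuation(1e9, cz):.1e}")
```

Even this crude estimate reproduces the qualitative behavior described above: downgoing and horizontal trajectories are only mildly suppressed, while upgoing trajectories that cross a large fraction of the Earth are essentially fully absorbed at these energies.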
When propagating neutrinos inside the Earth, and also when computing detected event rates (Sec. <ref>), we treat separately the NC and CC interactions of neutrinos of different flavor, each with its own cross section. In a DIS interaction, the final-state hadrons receive a fraction y—the inelasticity—of the energy of the interacting neutrino. The final-state leptons receive the remaining fraction, (1-y). The inelasticity follows a probability distribution given by the differential cross sections, either dσ_ν_α N^ NC/dy or dσ_ν_α N^ CC/dy. The distributions peak at y = 0, but they are broad, and depend on the neutrino energy; see, , Fig. 4 in  <cit.>. Thus, as neutrinos propagate inside the Earth, NC interactions shift the neutrino flux to lower energies, while CC interactions deplete the flux. (For ν_τ, the consecutive CC neutrino interactions and decays of the ensuing tauons—known as “ν_τ regeneration”—appreciably counteract the flux dampening; see, , Fig. 8 in  <cit.>.) At ultra-high energies, the flux of upgoing neutrinos, with θ_z > 90^∘, where θ_z is the zenith angle measured from the South Pole, is nearly fully attenuated by the time it reaches the detector. On the contrary, the flux of downgoing (θ_z < 90^∘) and horizontal (θ_z ≈ 90^∘) neutrinos is attenuated appreciably, but is not completely depleted. For illustration, see, , Fig. A2 in  <cit.>,  <cit.>, and Figs. 10 & 11 in  <cit.>. This makes the detection of UHE neutrinos more likely from these directions, provided there is sufficient detector response, which is the case for our modeling of IceCube-Gen2; we elaborate on this in Secs. <ref> and <ref>. We compute the propagation of UHE neutrinos inside the Earth as in  <cit.>, using the sophisticated propagation code NuPropEarth <cit.>. It uses the recent BGR18 neutrino-nucleon DIS cross sections <cit.>, the same ones that we use in Sec. <ref> to compute the rate of detected events. NuPropEarth also accounts for ν_τ regeneration, for energy losses of intermediate leptons during propagation, and for subleading neutrino interactions that, taken together, increase the flux attenuation by approximately an extra 10%. For the density profile of matter inside the Earth, we adopt the Preliminary Reference Earth Model <cit.>, with an added layer of surface ice 3 km thick to represent Antarctica, and account also for the radial change in the chemical composition of underground matter <cit.>. Finally, we model the volume of the neutrino detector—the radio array of IceCube-Gen2—as a cylinder of radius 12.6 km and height 1.50 km, buried vertically 100 m underground at the South Pole; see Fig. 7 in  <cit.>. In summary, given a flux of neutrinos at the surface of the Earth, from DM decay or from the non-DM background neutrino flux, we propagate it across many different directions to the detector. We propagate separately the fluxes of ν_e, ν̅_e, ν_μ, ν̅_μ, ν_τ, and ν̅_τ;. Below, we use their fluxes at the detector, Φ_ν_α^ det and Φ_ν̅_α^ det, to compute neutrino-induced event rates. §.§ UHE neutrino radio-detection at IceCube-Gen2 Reference <cit.> first proposed using the radio emission from UHE particles as a means to detect them. Upon reaching the detector volume, an UHE neutrino may scatter off a nucleon in ice and produce a shower of high-energy particles. As the shower travels, it accumulates an excess of electrons in its front that, after reaching shower maximum, is emitted as an impulsive coherent radio pulse, known as Askaryan radiation <cit.>. For details, see  <cit.>. 
Because radio travels in ice subject only to mild attenuation, it may be detected using a sparse underground array of radio antennas, which makes it feasible to instrument a large volume that makes up for the potentially tiny fluxes of incoming UHE neutrinos. This is the strategy adopted by the planned radio array of IceCube-Gen2 <cit.>. Of the proposed UHE neutrino telescopes <cit.>, the radio array of IceCube-Gen2 is among the largest and in an advanced stage of planning <cit.>. Thus, we gear our forecasts to it. However, our methods can be readily adapted to other upcoming UHE neutrino telescopes <cit.>. To compute realistic projected event rates at the radio array of IceCube-Gen2, we follow the detailed procedure introduced in  <cit.>, which uses an estimated detector response based on state-of-the-art simulations. This has been used already to forecast the measurement of the UHE neutrino-nucleon cross section <cit.>, the discovery of UHE neutrino point sources <cit.>, and the discovery of the diffuse flux of UHE neutrinos <cit.>. Below, we only sketch the procedure and introduce necessary modifications to it; we defer to  <cit.> for details. Upon reaching the detector volume, after propagating through the Earth (Sec. <ref>), an UHE ν_α of energy E_ν interacts with a nucleon at rest, N, typically via DIS (see above). The ensuing particle shower has an energy E_ sh, a fraction of the parent neutrino energy. For showers initiated by the NC DIS of ν_α or ν̅_α of any flavor, only the final-state hadrons radiate <cit.>, so E_ sh = y E_ν. For showers initiated by the CC DIS of a ν_e or ν̅_e, both the final-state electron and hadrons radiate, so E_ sh = E_ν. For showers initiated by the CC DIS of ν_μ, ν̅_μ, ν_τ, or ν̅_τ, only the final-state hadrons radiate, so E_ sh = y E_ν. As during propagation, at detection the value of the inelasticity follows dσ_ν_α^ NC/dy and dσ_ν_α^ CC/dy, for which we adopt the BGR18 <cit.> calculation; see Fig. 4 in  <cit.>. The detector response is represented by its effective volume, which we treat separately for NC and CC showers, V_ eff, ν_α^ NC and V_ eff, ν_α^ CC. The effective volume depends on the shower energy and on the direction of the incoming neutrino. It is generated by simulating the interaction of neutrinos in the detector volume, followed by the generation of Askaryan radiation, its propagation in ice, including changes in the index of refraction of ice with depth, and its detection in the two types of radio antennas envisioned in the array. For the simulations we use NuRadioReco <cit.> and NuRadioMC <cit.>, the same tools used by the IceCube-Gen2 Collaboration. We adopt the same array design consisting of a combination of shallow and deep radio stations as in  <cit.>. The effective volume is least sensitive around 10^7 GeV, grows with shower energy, and is relatively less sensitive for downgoing neutrinos (cosθ_z ≈ 1); see Fig. 13 in  <cit.>. (Unlike common practice, the detector volume does not contain the effect of the attenuation of the neutrino flux underground. This is contained separately, in Φ_ν_α^ det.) The differential event rate is obtained by convolving the neutrino flux that reaches the detector (Sec. <ref>), Φ^ det_ν_α, the effective volume, and the neutrino-nucleon cross section. For ν_α, after an exposure time T, this is d^2N_ν_α/dE_ sh dcosθ_z dϕ = T n_t ∫_0^1 dy ( . E_ν_α^NC(E_ sh, y)/E_ sh V_ eff, ν_α^ NC(E_ sh, cosθ_z) dσ_ν_α w^ NC(E_ν, y)/dyΦ^ det_ν_α(E_ν,cosθ_z,ϕ) |_E_ν = E_ν_α^ NC(E_ sh, y). . 
+  NC→ CC) , where dσ_ν_α w^ NC/dy is the cross section for interaction with water, made up of 10 protons and 8 neutrons, and n_t is the number density of water molecules in ice. The event rate due to ν̅_α is the same as spectrum_true, but changing Φ^ det_ν_α→Φ^ det_ν̅_α, dσ_ν_α^ NC/dy → dσ_ν̅_α^ NC/dy, and dσ_ν_α^ NC/dy → dσ_ν̅_α^ CC/dy. At these energies the cross sections for ν_α and ν̅_α are nearly indistinguishable; see  <cit.> and Fig. 3 in  <cit.>. Equation (<ref>) generalizes the original procedure in  <cit.> by allowing the flux and the event rate to vary not only with zenith angle, θ_z, but also with azimuth, ϕ. This allows our analysis to be sensitive to an excess of UHE neutrinos from the decay of DM towards the GC. As in  <cit.>, we smear the event rate using the detector energy and angular resolution, and use for our forecasts the event rate in terms of the reconstructed shower energy, E_ sh^ rec, and reconstructed direction, Ω^ rec, , d^2 N_ν_α/dE^rec_shdΩ^rec = ∫ dE_ sh∫ dΩd^2 N_ν_α(E_ sh, θ_z, ϕ)/dE_shdΩ ×  R_E_sh(E^rec_sh,E_sh) R_Ω(𝐧^rec,𝐧) , where dΩ = sinθ_z dθ_z dϕ and dΩ^ rec = sinθ_z^ rec dθ_z^ rec dϕ^ rec are the real and reconstructed differential solid angles, and 𝐧 and 𝐧^rec are the real and reconstructed shower directions. We model the energy resolution via a Gaussian function in ϵ≡log_10(E_ sh^ rec/E_ sh), , R_E_sh(E^rec_sh,E_sh) = √(2/π)exp[ -(E^rec_sh-E_sh)^2/2σ_E_sh^2]/σ_E_sh[1+Erf(E^rec_sh/√(2)σ_E_sh)] , where σ_E_ sh = 10^σ_ϵ E_ sh. As baseline, we fix σ_ϵ = 0.1, based on simulations performed for UHE neutrino radio-detection at the RNO-G neutrino telescope <cit.>, which we take as representative of IceCube-Gen2, too. We model the angular resolution via a Gaussian function of the angle between true and reconstructed direction, , R_Ω(𝐧^rec,𝐧) = σ^2/2π(1-e^-2/σ^2)exp( 𝐧·𝐧_rec/σ^2) , with a common width of σ_θ_z = σ_ϕ = ≡σ_Ω in zenith and azimuth. As baseline, we fix σ_Ω = 3^∘, similar to what  <cit.> adopted for the zenith-angle resolution. References <cit.> explored the effect of varying the energy and angular resolution on the event rate. To produce our forecasts, we use the all-flavor event rate of ν_α and ν̅_α, , d^2 N_ν/dE_ sh^ rec dΩ^ rec = ∑_α=e,μ,τ( d^2 N_ν_α/dE_ sh^ rec dΩ^ rec + d^2 N_ν̅_α/dE_ sh^ rec dΩ^ rec) . Conservatively, we do not assume that radio-detection at IceCube-Gen2 will be able to distinguish between events initiated by different flavors; however, there is promising ongoing work in this direction <cit.>. §.§ Expected event rates Using the methods above, we compute event rates for the flux of UHE neutrinos from DM decay and for the non-DM background flux of UHE neutrinos. For the former, the event rate depends on the DM mass and lifetime. For the latter, it depends on our choice of background flux. As illustration, below we show event rates for the fluxes in diffuse_fluxes; later, when producing results, we compute event rates for many more cases. Figure <ref> shows the all-sky differential event rate in reconstructed shower energy. The event energy spectra reflect the features of the underlying neutrino energy spectra in diffuse_fluxes, though smoothed out by the detector energy resolution, which complicates distinguishing between them in our forecasts later (see also Sec. <ref>). 
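The effect of this smoothing can be illustrated with a short sketch that applies a simplified log-normal energy smearing, with the baseline width σ_ε = 0.1 quoted above, to a toy bump-like spectrum. The smearing function, grid, and toy spectrum below are placeholders that stand in for the full detector response; the angular response would be applied analogously over the sphere.

```python
import numpy as np

SIGMA_EPS = 0.1        # baseline width in epsilon = log10(E_rec / E_true), as quoted above

def R_log_energy(E_rec, E_true):
    """Simplified energy response: Gaussian in log10(E_rec/E_true) with width SIGMA_EPS,
    normalized per unit E_rec. A stand-in for the full response used in the analysis."""
    eps = np.log10(E_rec / E_true)
    gauss = np.exp(-0.5 * (eps / SIGMA_EPS) ** 2) / (SIGMA_EPS * np.sqrt(2.0 * np.pi))
    return gauss / (E_rec * np.log(10.0))          # Jacobian d(eps)/d(E_rec)

# Toy true spectrum: a narrow bump centred at m_DM / 2 (illustrative placeholder).
E_grid = np.logspace(7, 10, 400)                   # GeV
m_dm = 1.0e9
dN_dE_true = np.exp(-0.5 * ((np.log10(E_grid) - np.log10(m_dm / 2.0)) / 0.05) ** 2) / E_grid

# Smeared spectrum: dN/dE_rec = int dE_true R(E_rec, E_true) dN/dE_true.
dN_dE_rec = np.array([np.trapz(R_log_energy(E_rec, E_grid) * dN_dE_true, E_grid)
                      for E_rec in E_grid])

print("true peak of E*dN/dE at    ", E_grid[np.argmax(E_grid * dN_dE_true)], "GeV")
print("smeared peak of E*dN/dE at ", E_grid[np.argmax(E_grid * dN_dE_rec)], "GeV")
```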
Further, while the neutrino energy spectrum from DM decay with m_DM = 10^8 GeV and the spectrum of the large non-DM benchmark flux are comparable in diffuse_fluxes, in diff_event_rate the event rate for the former is appreciably smaller than that for the latter. This is because the effective volume falls at low energies, where the spectrum from DM decay peaks; see Sec. <ref>. Figure <ref> shows sky maps of the angular distribution of the energy-integrated event rate, for neutrinos from DM decay and for the background neutrinos. The angular distribution of events is anisotropic, even when it is due to an isotropic neutrino flux, like the background flux. The radio array of IceCube-Gen2 is mostly sensitive to zenith angles between 45^∘ and 90^∘. At larger zenith angles, Earth attenuation strongly reduces the chances of neutrino detection, whereas at smaller zenith angles the effective volume is smaller. For this reason, most of the events come from declinations between -45^∘ and 0^∘. (The two bright zenith bands in the skymaps, easily visible for neutrinos from DM decay, are due to features in the response of the two types of antennas that the radio array is made of; see Fig. 12 in  <cit.>.) The sky maps in sky_map_evrate illustrate the combined effect of the three sources of angular dependence in our calculation: from the neutrino flux itself, from the propagation of neutrinos through the Earth, and from the detector effective volume. The latter two, together with the angular resolution of the detector, smooth out any natural anisotropy in the neutrino flux. Nevertheless, sky_map_evrate shows that the excess of neutrinos from DM decay towards the GC survives into the angular distribution of events, though it is more spread out. The excess is more concentrated for the NFW profile than for the Burkert profile, reflecting their fluxes from sky_map_evrate. § DISCOVERY PROSPECTS FOR DARK MATTER DECAY The decay of heavy DM into UHE neutrinos may be discovered even in the presence of sizable non-DM neutrino backgrounds, by using the angular distribution of detected events, in 10 years of exposure of the radio array of IceCube-Gen2 (discovery_prospects). However, a puffy Galactic DM density profile may weaken the discovery prospects (discovery_prospects_nfw_vs_burkert). Upon discovery, the DM mass and lifetime, and the flux of neutrinos from its decay, may be accurately and precisely measured by using also the energy distribution of events (Figs. <ref> and <ref>). §.§ Overview The distinct angular distribution of UHE neutrinos from DM decay—peaked towards the GC—provides a smoking-gun signature of their origin when compared to the isotropic flux of astrophysical and cosmogenic neutrinos. Yet, so far, the usefulness of this difference has gone underused or ignored in forecasts of searches for DM decay in UHE neutrino telescopes; see, ,  <cit.>. In contrast, our methods embrace it. Unlike previous forecasts, we use this angular difference to not only claim the discovery of UHE neutrinos with a possible origin in DM decay, but to assert their DM origin in the presence of a non-DM neutrino background, , to firmly discover UHE neutrinos from DM decay. However, a sensible DM discovery claim requires a sufficiently large excess towards the GC. Added to that, if there is a large isotropic background of non-DM neutrinos, it could wash out the excess of neutrinos from DM decay, weakening the discovery claim. In our forecasts below, we quantify the above statements in two ways. First (Sec. 
<ref>), we find the values of the mass and lifetime of DM needed to discover UHE neutrinos from its decay, in the presence of a non-DM neutrino background. For this, we use only the angular distribution of detected events in the radio array of IceCube-Gen2. Second (Sec. <ref>), in the event of discovery, we illustrate the accuracy with which the DM mass and lifetime could be measured. For this, we use the joint angular and energy distribution of events. §.§ Discovery prospects We forecast the regions of DM mass and lifetime where UHE neutrinos from DM decay could be discovered. Figure <ref> (also discovery_prospects_3sigma) shows our results. §.§.§ Statistical methods We produce discovery forecasts by analyzing projected samples of detected events. To be conservative, we use only their angular distribution, summed over all energies, since it is in it that the critical difference between the flux of neutrinos from DM decay and the non-DM neutrino background manifests (Sec. <ref>). Later (Sec. <ref>), we derive upper limits using also their energy distribution. We build our forecasts using the maximum likelihood technique and report mean discovery prospects based on Asimov data samples. Each event sample is the sum of events due to neutrinos from DM decay, dependent on the DM mass, m_DM, and lifetime, τ_DM, and events from the background of non-DM neutrinos, rescaled by a flux normalization, 𝒩_Φ, , dN_ν(ϑ)/dΩ^rec = dN^DM_ν(m_DM, τ_DM)/dΩ^rec + 𝒩_ΦdN^bg_ν/dΩ^rec , where ϑ≡{m_DM, τ_DM, 𝒩_Φ}. For the non-DM background, we show forecasts obtained under the three benchmark scenarios presented in Sec. <ref>: a large flux set to the cosmogenic flux by Bergman & van Vliet <cit.>, a medium flux that is 10% of that, and a null background; see diffuse_fluxes. The large and medium benchmark backgrounds have the same angular distribution of events (see sky_map_evrate and also Fig. 4 in  <cit.>); they only differ in the total number of events (diff_event_rate). Based on a projected event sample, we compare two hypotheses: the DM hypothesis, where the angular distribution of events is best explained by the presence of a DM decay component on top of a background non-DM component vs. the null hypothesis, where it is best explained by the presence of only the background non-DM component. In both cases, the background flux normalization, 𝒩_Φ, is left free to vary to best fit the data. Hence, when analyzing a simulated event sample, we obtain results that do not require prior knowledge of the true background flux. We do this separately for each of the three above choices of the simulated background flux, and for a wide range of true values of the DM and lifetime. First, for a particular choice of the true DM mass and lifetime, m_ DM and τ_ DM, and using the true value of the flux normalization, 𝒩_Φ = 1, we compute the projected observed sample of N_ evts events, each with reconstructed direction Ω_i^ rec sampled from the distribution dN_ν(m_ DM, τ_ DM, 𝒩_Φ = 1) / dΩ^ rec. Later, we use a test statistic that is averaged over all possible random realizations of the number of events and of the distribution of reconstructed directions of the events. Then, based on this observed sample, we evaluate an unbinned likelihood function at different test values of the model parameters, ϑ^'≡ (m_ DM^', τ_ DM^', 𝒩_Φ^'), , ℒ (ϑ^'; {Ω_i^ rec}) = e^-N_ν(ϑ^')∏_i=1^N_ evts. dN_ν(ϑ^')/dΩ^rec|_Ω^rec_i , where N_ν(ϑ^') ≡∫ (dN_ν(ϑ^') / dΩ^rec) dΩ^rec is the all-sky event rate. 
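For concreteness, the following self-contained toy evaluates this kind of extended unbinned likelihood on a pseudo-sample and builds the likelihood-ratio comparison between the background-only and background-plus-DM hypotheses. The angular probability densities, expected event numbers, and pseudo-data below are placeholders chosen only to mimic an isotropic background plus a GC-peaked DM component; the actual analysis uses the full direction-dependent event rates computed above, also lets m_DM' and τ_DM' float, and averages the test statistic over realizations as described next.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(1)

# --- toy ingredients (placeholders for the detailed event rates) -------------
N_BG_EXP = 300.0            # assumed expected background events
N_DM_EXP = 30.0             # assumed expected DM events for the tested (m_DM, tau_DM)

def pdf_bg(cos_psi):
    """Isotropic background: flat in cos(psi), psi = angle to the Galactic Center."""
    return np.full_like(cos_psi, 0.5)

def pdf_dm(cos_psi, kappa=3.0):
    """Toy DM angular pdf peaked towards the GC (stand-in for the smeared NFW sky map)."""
    return kappa * np.exp(kappa * cos_psi) / (2.0 * np.sinh(kappa))

def neg_lnL(params, cos_psi_events):
    """Extended unbinned -lnL; params = (DM normalization, background normalization)."""
    n_dm, n_bg = params
    if n_dm < 0.0 or n_bg < 0.0:
        return np.inf
    rate = n_dm * pdf_dm(cos_psi_events) + n_bg * pdf_bg(cos_psi_events)
    return (n_dm + n_bg) - np.sum(np.log(rate))

# One pseudo-experiment: isotropic background plus a crude GC excess.
n_obs = rng.poisson(N_BG_EXP + N_DM_EXP)
is_dm = rng.random(n_obs) < N_DM_EXP / (N_BG_EXP + N_DM_EXP)
cos_psi = np.where(is_dm,
                   1.0 - rng.exponential(1.0 / 3.0, n_obs).clip(max=2.0),
                   rng.uniform(-1.0, 1.0, n_obs))

# Null hypothesis: background only, with its normalization free.
null = minimize_scalar(lambda nb: neg_lnL((0.0, nb), cos_psi),
                       bounds=(1.0, 3.0 * n_obs), method="bounded")
# DM hypothesis: DM plus background, both normalizations free.
full = minimize(neg_lnL, x0=(10.0, float(n_obs)), args=(cos_psi,), method="Nelder-Mead")

Lambda = 2.0 * (null.fun - full.fun)
print(f"test statistic Lambda = {Lambda:.1f}  (values above ~6 would support a 2sigma DM claim)")
```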
In the comparison, we let the test values of m_ DM^', τ_ DM^', and 𝒩_Φ^' float as free parameters, as they would in a test based on real experimental data. Under the null hypothesis, where there is no DM decay contribution because DM is stable (, τ_ DM→∞), the likelihood reduces to ℒ_ bg(𝒩_Φ^'; {Ω_i^ rec}) ≡lim_τ_ DM^'→∞ℒ(ϑ^'; {Ω_i^ rec}) , where the right-hand side no longer depends on the test values m_ DM^' and τ_ DM^'. For a specific choice of the true values of m_ DM and τ_ DM, the test statistic depends on the angular distribution of the associated random observed event sample. To account for the possible different realizations of the observed sample, we average the logarithm of the likelihood functions, Eqs. (<ref>) and (<ref>), over all possible realizations. The probability to observe a total number of N_ evts, over the full sky, 𝒫(N_ evts| N_ν), is given by a Poisson distribution with a mean value equal to the mean all-sky event rate, N_ν≡ N_ν(m_ DM, τ_ DM, 𝒩_Φ = 1). The probability to sample an event with reconstructed direction Ω_i^ rec from this distribution is 𝒫(Ω_i^ rec) ≡ (1/N_ν) (dN_ν/dΩ^ rec)|_Ω_i^ rec. Thus, the likelihood function, likelihood_dm, averaged over all possible realizations of the event sample, is ⟨lnℒ (ϑ^') ⟩_m_ DM, τ_ DM = ∑_N_ evts=0^∞𝒫[ N_ evts| N_ν(m_ DM, τ_ DM, 𝒩_Φ=1) ] ×∫ dΩ_1^ rec⋯∫ dΩ_N_ evts^ rec∏_i=1^N_ evts𝒫(Ω_i^ rec| m_ DM, τ_ DM, 𝒩_Φ=1) lnℒ[ ϑ^', {Ω_j^ rec}_j=1^N_ evts] , and, similarly, the average log-likelihood under the null hypothesis, in which only background non-DM neutrinos are present, likelihood_bg, is ⟨lnℒ_ bg(𝒩_Φ^') ⟩_m_ DM, τ_ DM≡lim_τ_ DM^'→∞⟨lnℒ(ϑ^') ⟩_m_ DM, τ_ DM , We average the log-likelihood, rather than the likelihood, to prevent the averaging procedure from prescribing exceedingly large averaging weights to random realizations that have associated large likelihood values. This corresponds to obtaining the results for an Asimov data sample <cit.>, in which the observed distribution of events exactly coincide with the expected one. To compare the two hypotheses, we use as a test statistic the average log-likelihood ratio, , ⟨Λ(m_ DM, τ_ DM) ⟩ = min_𝒩_Φ^'[ -2 ⟨lnℒ_bg(𝒩_Φ^') ⟩_ m_ DM, τ_ DM] - min_ϑ^'[ -2 ⟨lnℒ(ϑ^') ⟩_ m_ DM, τ_ DM] . According to Wilks' theorem <cit.>, in the asymptotic limit of a large data sample, this quantity follows a χ^2 distribution with two degrees of freedom, corresponding to the difference between the dimensions of the parameter spaces of the two competing hypotheses. We adopt it in our forecasts since, for 10 and 20 years of detector exposure, and for the neutrino fluxes that we adopt, they are based on a large number of events. Hence, below, when ⟨Λ⟩ > 6, we claim discovery of DM neutrinos at the 2σ confidence level (C.L.); when ⟨Λ⟩ > 11.5, we claim it at 3σ C.L. §.§.§ Results Figure <ref> shows the regions of DM mass and lifetime, obtained with the above methods, where DM can be discovered at ≥ 2σ. We show results for our three benchmark choices of non-DM UHE neutrino background flux (Sec. <ref>)—null, medium, and large. The results in discovery_prospects convey three key messages. First, while a large part of the parameter space in discovery_prospects is already disfavored by present-day neutrino and gamma-ray searches, upcoming UHE neutrino telescopes will extend the search to longer DM lifetimes, roughly above 10^29 s, that are unreachable with present-day experiments. This aspect of our results agrees with previous works; see, ,  <cit.>). 
Yet, for the first time, we fortify the claim by showing that it holds even in the presence of a medium-sized isotropic background flux of non-DM UHE neutrinos. Second, discovery_prospects shows the significant difference between detecting events, which a priori may or may not be due to DM decay, and claiming that those events are produced by DM decay. Earlier forecasts of UHE neutrino detection from DM decay <cit.> had only investigated the maximum lifetime needed to detect neutrinos from DM decay, without identifying their origin. We show a version of these forecasts in discovery_prospects, generated by demanding that DM decay yields at least one event over the full sky, background-free, with a probability larger than 95% (or 99.7% in discovery_prospects_3sigma). As expected, this weaker criterion leads to overly long lifetimes being discoverable: compared to our results using the angular distribution of events, lifetimes longer by at least one order of magnitude could be detected without actually leading to a DM discovery. Thus, hereafter the main observations and conclusions of our work are based exclusively on forecasts made using the angular—and energy (in Secs. <ref> and <ref>)—distribution of events. Third, discovery_prospects shows that, while the discovery prospects are best when background-free, as expected, the presence of a medium-size background only degrades the reach of discoverable DM lifetimes by a factor of about 2. In other words, UHE neutrinos retain the potential to reveal DM decay even in the presence of a sizable isotropic background of non-DM origin. This is true also for discovery at 3σ; see discovery_prospects_3sigma. Figure <ref> shows that, however, our prospects for discovery of DM decay are contingent on the Galactic DM density profile being cuspy, , markedly pronounced towards the GC. Swapping the cuspy NFW profile for the puffy Burkert profile reduces the region amenable for discovery by about one order of magnitude, pushing it into the region of DM mass and lifetime that is already disfavored, and rendering discovery all but unfeasible. A subtle point is that the present-day disfavored region shown in discovery_prospects_nfw_vs_burkert was computed, in  <cit.>, assuming the NFW profile. Assuming the Burkert instead would push down the disfavored region, leaving slightly more room for discovery under the Burkert profile; we do not attempt this recalculation here. Yet, the bottom line holds: for realistic choices of the detector angular resolution, like the σ_Ω = 3^∘ that we adopt, the discovery of DM decay into UHE neutrinos will be likely only if the Galactic DM density profile is cuspy. §.§ Measuring dark matter mass and lifetime In the event of the discovery of DM decay into UHE neutrinos, we forecast how well the DM mass and lifetime could be inferred. Figure <ref> shows our results. §.§.§ Statistical methods Above (Sec. <ref>), we showed that to claim the discovery of DM decay into UHE neutrinos it was enough to use the angular distribution of events. In the event of discovery, inferring the DM lifetime also relies mainly on the angular distribution events; concretely, as before, on its excess towards the GC. Because the DM lifetime determines the normalization of the neutrino flux from DM decay, its value can be inferred from the magnitude of the excess. However, inferring the DM mass requires the energy distribution of events, too. 
Using it allows us to infer the DM mass by looking for the distinct bump-like feature imprinted by DM decay on the neutrino spectrum, which peaks at E_ν = m_ DM/2; see Figs. <ref> and <ref>. Thus, below, we extend the statistical methods from Sec. <ref> to include also the energy of the detected events. First, we generate the true event sample, , the one that we assume will be detected. We use a procedure similar to the one we used to compute discovery prospects (Sec. <ref>), but extended to included also the energy distribution of events. To illustrate our method, we choose m_ DM=3.5× 10^9 GeV and τ_ DM=1.19× 10^29 s as true values; these are representative of the discoverable region under our benchmark medium non-DM in discovery_prospects. We use for the non-DM background UHE neutrino flux the same three benchmarks (Sec. <ref>) that we used earlier to make discovery forecasts—null, medium, and large. In analogy to generation_event_rate, the true distribution is dN_ν^ true(m_ DM, τ_ DM)/dE_ sh^ rec dΩ^rec = dN^DM_ν(m_DM, τ_DM)/dE_ sh^ rec dΩ^rec + dN^bg_ν/dE_ sh^ rec dΩ^rec . From it, we randomly sample N_ evts events, each with reconstructed energy E_ sh, i^ rec and direction Ω_i^ rec. Later, for each choice of m_ DM and τ_ DM, we average our test statistic over all possible random realizations of the event samples. Then we compare the true event sample to test event samples, generated for many different test values of DM mass and lifetime, in order to find which ones fit best. To produce test event samples, we generalize what we did to compute discovery prospects and adopt a generic model of the non-DM background neutrino energy spectrum. We parametrize it as a piecewise (pw) spectrum ∝ E_ν^-2, with three independent normalization constants, 𝒩_Φ, 1, 𝒩_Φ, 2, 𝒩_Φ, 3, in three decades of neutrino energy, from 10^7 GeV to 10^10 GeV. For ν_α, this is Φ_ν_α^ bg-pw(E_ν) = f_α, ⊕/2 E_ν^2×{[ 𝒩_Φ, 1, 10^7 ≤ E_ν/ GeV < 10^8; 𝒩_Φ, 2, 10^8 ≤ E_ν/ GeV < 10^9; 𝒩_Φ, 3, 10^9 ≤ E_ν/ GeV≤ 10^10; ]. , where, for the flavor composition at Earth, we adopt the one from the canonical expectation of neutrino production via the pion decay chain (Sec. <ref>), computed using recent best-fit values of the neutrino mixing parameters <cit.>, f_e,⊕ = 0.298, f_μ,⊕ = 0.359, and f_τ,⊕ = 0.342, from  <cit.>, , close to flavor equipartition (see also, ,  <cit.>). Since we make forecasts for 10–20 years, when the values of the mixing parameters will likely be known precisely <cit.>, we neglect the small uncertainties in these predictions; see Eqs. (7)–(9) in  <cit.> . The fluxes of ν_α and ν̅_α are identical; this is ensured by the factor of 2 in the denominator of flux_bg_pw. We adopt this background flux model to analyze projected event samples using a phenomenological prescription of the non-DM neutrino background that is as agnostic as possible regarding the shape of its energy spectrum and its origin. Our strategy resembles one that would be used by future analyses based on real experimental observations. Indeed, similar flux models are used by the IceCube Collaboration to analyze present-day data <cit.>. Using the piecewise background flux, we compute the associated differential event spectrum, dN_ν^ bg-pw/dE_ sh^ recdΩ^ rec, using the methods from Sec. <ref>. Thus, the total differential test event spectrum is dN_ν^ test(ϑ^')/dE^rec_shdΩ^rec = dN^DM_ν(m_DM^',τ_DM^')/dE^rec_shdΩ^rec + dN_ν^bg-pw(𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^')/dE^rec_sh dΩ^rec , where now ϑ^'≡ (m_ DM^', τ_ DM^', 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^'). 
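The bookkeeping behind the piecewise background flux just defined is simple, and the following sketch shows one possible implementation: the three decade-wise normalizations are free parameters, and the flavor split uses the values quoted above. The units of the normalizations are left unspecified here, since in the fit they are treated purely as free parameters.

```python
import numpy as np

# Flavor fractions at Earth from pion-decay production (values quoted in the text).
F_ALPHA = {"e": 0.298, "mu": 0.359, "tau": 0.342}

def phi_bg_pw(E_nu, alpha, norms):
    """Piecewise E^-2 background flux for nu_alpha (equal to the nubar_alpha flux).

    E_nu  : neutrino energy in GeV (scalar or array), valid in 1e7--1e10 GeV
    alpha : 'e', 'mu', or 'tau'
    norms : (N1, N2, N3), free normalizations in the three energy decades
    """
    E_nu = np.asarray(E_nu, dtype=float)
    decade = np.clip(np.floor(np.log10(E_nu)) - 7, 0, 2).astype(int)   # 0, 1, or 2
    N = np.asarray(norms)[decade]
    return F_ALPHA[alpha] / 2.0 * N / E_nu**2

# Example: evaluate the nu_mu flux for equal normalizations in all three decades.
E = np.logspace(7, 10, 7)
print(phi_bg_pw(E, "mu", norms=(1.0, 1.0, 1.0)))
```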
In analogy to likelihood_dm, we evaluate an unbinned likelihood function at different test values of the model parameters, ϑ^', , ℒ_ mea(ϑ^', { E_ sh, i^ rec, Ω_i^ rec}) = e^-N_ν^ test(ϑ^') ×∏_i=1^N_evts. dN_ν^ test(ϑ^')/dE^rec_shdΩ^rec|_E_ sh, i^ rec, Ω_i^ rec , where N_ν^ test is the all-sky number of events with energies E_ sh^ rec≥ 10^7 GeV. As before (Sec. <ref>), for a specific choice of the true values of m_DM and τ_DM and of the non-DM background neutrino flux, we average the logarithm of the above likelihood over all possibly random realizations of the observed event sample. In analogy to likelihood_dm_avg, this yields ⟨lnℒ_ mea(ϑ^') ⟩_m_ DM, τ_ DM. Finally, to infer the values of the DM mass and lifetime, we profile the likelihood over the test parameters and, in analogy to test_statistic, define the test statistic ⟨Λ_ mea(m_DM,τ_DM)⟩ = min_𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^'[ -2 ⟨lnℒ_ mea ( m_ DM^' = m_ DM, τ_ DM^' = τ_ DM, 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^' ) ⟩_m_ DM, τ_ DM] -  min_ϑ^'[ -2 ⟨lnℒ_ mea (ϑ^') ⟩_m_ DM, τ_ DM] . As in Sec. <ref>, based on Wilks' theorem, this test statistic follows a χ^2 distribution with two degrees of freedom. Below, we use it to infer allowed regions of m_DM and τ_DM at difference confidence levels. §.§.§ Results Figure <ref> illustrates that, in the event of discovery of DM decay into UHE neutrinos, IceCube-Gen2 may infer the values of the DM mass and lifetime responsible for the discovered signal. In reconstructed, we show results obtained using our illustrative choice for the true DM and mass lifetime (see above). To illustrate the influence of a non-DM isotropic background flux of UHE neutrinos, we compare results obtained assuming no background vs. assuming our medium benchmark background (Sec. <ref>). Our results confirm our expectation (Sec. <ref>) that using the energy distribution of events grants us sensitivity to the DM parameters. For the choice of DM mass and lifetime in reconstructed, and similar ones, their values can be inferred with an accuracy of a factor of 2–3. In the presence of the medium background, the accuracy degrades only slightly compared to the null-background case. A larger background degrades the accuracy further, but does not preclude measurement; it may, however, reduce the precision with which the DM mass is inferred (see below). While the accuracy on the DM mass is only weakly degraded by the presence of a background, the precision on it suffers more appreciably. In the absence of a background, the best-fit value of the DM mass matches its true value. In the presence of a medium-size background, its best-fit value is offset from the real one. This stems from our choice of the three-piece background flux, flux_bg_pw, to analyze projected event samples. On the one hand, this flux prescription frees us from having to rely on specific theoretical predictions of the UHE astrophysical or cosmogenic neutrino flux. On the other hand, because it is rather coarse—with decade-wide flux normalization constants—it fails to reproduce closely the shape of the background neutrino energy spectrum, diffuse_fluxes. Even so, the mismatch between the best-fit and true values is small; they are consistent within 1σ. Future analyses could mitigate this loss of precision by adopting a more finely binned version of the piecewise background flux. Thus, the DM mass and lifetime can be accurately inferred, even in the presence of a non-DM isotropic background flux of UHE neutrinos, by analyzing jointly the angular and energy distribution of events. 
In doing so, there is essentially no degeneracy between the flux of UHE neutrinos from DM decay and the unknown background flux of UHE astrophysical and cosmogenic neutrinos, since in our procedure the former is determined almost exclusively by neutrinos from the GC, while the latter is determined by neutrinos from every direction. Figure <ref> shows the corresponding allowed regions of the sky-averaged diffuse flux of UHE neutrinos from DM. The approximate factor-of-2 uncertainty on the DM lifetime translates into an uncertainty of similar size on the flux normalization. The mismatch between the best-fit and true vales of the DM mass translates into a mismatch between the low-energy tails of secondary neutrinos from electroweak corrections in their corresponding fluxes. Figure <ref> shows also that our piecewise background flux model, flux_bg_pw, is able to match the true background flux reasonably well, within the limitations of its coarse shape, except in the lowest energy bin, where the match is poor because due to the background flux dipping well below the sensitivity of the radio array of IceCube-Gen2, thus leading to low event rates. There, results could be improved by a combined analyses of TeV–PeV and UHE neutrinos detected, respectively, by the optical and radio arrays of IceCube-Gen2 <cit.>. § PROJECTED BOUNDS ON DARK MATTER DECAY The lifetime of heavy DM that decays into UHE neutrinos may be bound even in the presence of sizable non-DM neutrino backgrounds, by using the joint angular and energy distribution of detected events, in 10 years of exposure of the radio array of IceCube-Gen2 (bounds_medium_bg). Even when using the largest possible allowed background, the bounds on the DM lifetime remain competitive or better than present-day bounds (bounds). §.§ Overview Absent evidence for UHE neutrinos from DM decay, it may still be possible to place competitive bounds on the DM lifetime, even in the presence of a sizable non-DM isotropic UHE neutrino background flux. Like for the discovery of DM decay (Sec. <ref>), below we gear our results for radio-detection at IceCube-Gen2, but our methods and, broadly stated, our conclusions are applicable to next-generation UHE neutrino telescopes in general. References <cit.> reported projected bounds for DM decay into UHE neutrinos, including via their radio-detection at IceCube-Gen2. We improve on those in two ways. First, we use a significantly more detailed calculation of event rates in IceCube-Gen2, based on state-of-the-art simulations of neutrino propagation, interaction, and radio-detection (Sec. <ref>). Like for the discovery of DM decay, this is key to generating reliable angular and energy event distributions, which our analysis uses to discriminate against the background. Second, unlike previous works, we forecast bounds in the presence of a sizable non-DM isotropic UHE neutrino background flux. Reference <cit.> did consider the presence of a potential background, but discriminated against it simply by counting only neutrinos with energy smaller than m_DM / 2. In contrast, we use a full-fledged angular and energy analysis to produce our bounds. §.§ Statistical methods Unlike our earlier analyses to discover DM decay and infer the DM mass and lifetime (Sec. <ref>), to place bounds on DM decay we assume that the true, observed event distributions are due solely to the non-DM isotropic background flux of UHE neutrinos, and we contrast test event distributions expected from DM decay against it. 
For the non-DM background flux, we use our two medium and large benchmark fluxes (Sec. <ref>), based off of the cosmogenic neutrino flux by Bergman & van Vliet <cit.>. Our main conclusions hold for other choices of background, and we point out below what features of our results are due to our specific benchmark choices. Further, unlike when computing discovery prospects (Sec. <ref>), when setting bounds below we use not only the angular distribution of events, but also their energy distribution. We demand that the energy distribution of events lacks features that are characteristic of the energy spectrum of neutrinos from DM decay, , a clustering of events with similar energies that reflects an underlying bump-like shape of the spectrum (Sec. <ref>). Admittedly, this is a broad, conservative criterion: it discriminates against the flux of neutrinos from DM decay, but also against any astrophysical or cosmogenic flux that has a bump-like feature in its spectrum, of which there are many proposals, including our benchmark background fluxes; see, , Fig. 2 in  <cit.>. For a given choice of the non-DM neutrino background, we compute the true, observed event distribution, dN_ν^ true/dE^rec_shdΩ^rec = dN_ν^ bg/dE^rec_shdΩ^rec , using the methods from Sec. <ref>. From it, we sample random realizations of the observed event sample, consisting of a random number N_ evts of events, each with reconstructed energy and direction, E_ sh, i^ rec and Ω_i^ rec. Then, for a choice of the DM mass m_DM and lifetime τ_DM, we compare the true event rate vs. the test event rate expected from DM decay, dN_ν^ test(ϑ^') / dE^rec_sh dΩ^rec, given by test_event_rate_measure evaluated at test parameters ϑ^'≡ (m_ DM^' = m_ DM, τ_ DM^' = τ_ DM, 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^'). This test event rate is computed using the same piecewise background UHE neutrino spectrum, flux_bg_pw, that we used to infer the DM mass and lifetime (Sec. <ref>). Like when computing discovery forecasts, we compare the true (, background-only) and test (, background plus DM) hypotheses via an unbinned likelihood, given by likelihood_mea, , ℒ_ full(ϑ^', { E_ sh, i^ rec, Ω_i^ rec}) = e^-N_ν^ test(ϑ^') ×∏_i=1^N_evts. dN_ν^ test(ϑ^')/dE^rec_shdΩ^rec|_E_ sh, i^ rec, Ω_i^ rec , which relies on the full available information on the events, , their joint angular and energy distribution. In addition, to highlight the role of the angular and energy information in placing bounds, we compute separately analyses that use limited information. An analysis that relies only on angular information uses a likelihood where the event rate is integrated across all reconstructed energies from 10^7 GeV to 10^10 GeV, , ℒ_ ang(ϑ^', {Ω_i^ rec}) = e^-N_ν^ test(ϑ^') ×∏_i=1^N_evts. ( ∫ dE_ sh^ recdN_ν^ test(ϑ^')/dE^rec_shdΩ^rec) |_Ω_i^ rec . An analysis that relies only on energy information uses a likelihood where the event rate is all-sky, , ℒ_ en(ϑ^', { E_ sh, i^ rec}) = e^-N_ν^ test(ϑ^') ×∏_i=1^N_evts. ( ∫ dΩ^ recdN_ν^ test(ϑ^')/dE^rec_shdΩ^rec) |_E_ sh, i^ rec . And, finally, an analysis that relies only on the all-sky number of events of all energies uses ℒ_ count(ϑ^') = e^-N_ν^ test(ϑ^')[ N_ν^ test(ϑ^') ]^N_ evts . Like for the DM discovery prospects before, we compute projected bounds on the DM mass and lifetime using an Asimov event sample. 
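The four likelihood variants above differ only in how much of the event information they retain. The sketch below illustrates this with a gridded version of the test event rate; treating the rate on a reconstructed-energy and direction grid, and passing events as grid indices, are implementation conveniences assumed here, not part of the analysis description.

import numpy as np

def log_likelihoods(rate, dE, dOmega, ev_iE, ev_iOmega):
    # rate[i, j]: test event rate dN/dE_sh^rec dOmega^rec on an energy x direction grid.
    # dE, dOmega: bin widths; ev_iE, ev_iOmega: grid indices of the observed events.
    n_test = np.sum(rate * dE[:, None] * dOmega[None, :])    # all-sky, all-energy expectation
    rate_ang = np.sum(rate * dE[:, None], axis=0)            # energy-integrated (angular-only)
    rate_en = np.sum(rate * dOmega[None, :], axis=1)         # sky-integrated (energy-only)
    n_evts = len(ev_iE)
    lnL_full = -n_test + np.sum(np.log(rate[ev_iE, ev_iOmega]))
    lnL_ang = -n_test + np.sum(np.log(rate_ang[ev_iOmega]))
    lnL_en = -n_test + np.sum(np.log(rate_en[ev_iE]))
    lnL_count = -n_test + n_evts * np.log(n_test)            # counting-only, as in the text
    return lnL_full, lnL_ang, lnL_en, lnL_count

The same pattern applies to the averaged quantities used next, with the log-likelihoods additionally averaged over random realizations of the event sample.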
In analogy to likelihood_dm_avg, we average the above likelihood functions over all possible random realizations of the observed events, sampled from the underlying event distribution due to the non-DM background, dN_ν^ true / dE^rec_sh dΩ^rec. This yields the average functions ⟨lnℒ_ full⟩, ⟨lnℒ_ ang⟩, ⟨lnℒ_ en⟩, and ⟨lnℒ_ count⟩. In analogy to likelihood_avg_bg, we define likelihood functions computed under the background-only hypothesis, , ⟨lnℒ_ full, bg( 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^') ⟩ = lim_τ_ DM^'→∞⟨lnℒ_ full( m_ DM^', τ_ DM^', 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^') ⟩ , where the right-hand side no longer depends on the DM mass and lifetime. Similar expressions apply for the other likelihood functions, , ⟨ℒ_ ang, bg⟩, ⟨ℒ_ en, bg⟩, ⟨ℒ_ count, bg⟩. To place bounds, we follow  <cit.> and define a test statistic that compares the true hypothesis—that there is no DM neutrino flux—and test hypothesis—that there is a DM neutrino flux with parameters m_DM and τ_DM. , for the full analysis, ⟨Λ_ full (m_ DM, τ_ DM) ⟩ = -2  min_𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^'[ ⟨lnℒ_ full( m_ DM, τ_ DM, 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^') ⟩ - ⟨lnℒ_ full,bg( 𝒩_Φ, 1^', 𝒩_Φ, 2^', 𝒩_Φ, 3^') ⟩] × Θ[τ̂_DM-τ_DM] , where Θ is the Heaviside function and τ̂_DM(m_DM) is the value of the DM lifetime that, for a fixed value of the DM mass, m_DM, maximizes the likelihood function ⟨lnℒ_ full⟩. Similar expressions apply for the other analyses, , ⟨Λ_ ang⟩, ⟨Λ_ en⟩, ⟨Λ_ count⟩. With this definition, under the null hypothesis where neutrinos from DM decay exist, the test statistic should be distributed according to a half-χ^2 distribution with one degree of freedom. Hence, below, we place limits on the DM lifetime at the 2σ C.L. when ⟨Λ_ full⟩ > 2.7, and similarly for the other analyses. §.§ Results Figure <ref> shows the resulting projected bounds on the DM lifetime, obtained by adopting our benchmark medium non-DM UHE neutrino background. We extract two main observations from it. First, bounds_medium_bg shows that the existence of a sizable non-DM neutrino background appreciably weakens the bounds, compared to those obtained by plainly demanding that no UHE neutrino is detected, which are representative of most previous analyses in the literature. (In bounds_medium_bg, the null-detection curve also corresponds to a value of the test statistic of 2.7, which implies a mean number of detected events of 1.35.) Blatantly, when using only a counting analysis, the bounds that we obtain are up to 40 times weaker than bounds obtained from demanding that no neutrino is detected. In reality, how much the bounds are weakened will depend on the actual size and shape of the non-DM background. Still, our results serve as a reminder that projected bounds on the DM lifetime reported in the literature may be optimistic. Second, bounds_medium_bg shows that using the angular and energy event distributions mitigates how much the bounds are weakened. Depending on the DM mass, they improve the bounds compared to the counting analysis by a factor of 2–10. This holds even when adopting our benchmark large background instead; see bounds_large_bg. Overall, using the angular and energy distributions allows projected bounds to remain competitive with present-day ones. The angular and energy information complement each other. Using the energy distribution strongly improves the bounds at low and high DM masses. 
In the intermediate region, between 10^9 GeV and 10^10 GeV, bounds that use energy information weaken because this is where our benchmark background neutrino spectrum peaks (diffuse_fluxes) and where it may be misconstrued as being due to DM decay; see the discussion in Sec. <ref>. This is counteracted by using angular information: in the 10^9–10^10 GeV range, where the detector response is largest (see Fig. 13 in  <cit.>), the isotropic neutrino background induces a number of events large enough for the analysis to reject an excess towards the GC from DM decay. Figure <ref> shows the result of this interplay for our particular choice of non-DM background; in reality, the specifics will depend on the actual size and shape of the background. Figure <ref> shows that using our large benchmark non-DM UHE neutrino background instead—ten times larger than the medium one—weakens the bounds by only a factor of roughly 2, except in the range 10^9–10^10 GeV, where the bounds weaken by a factor of up to 6, but even so remain roughly competitive with present-day ones. This represents promising prospects: the background flux we use here <cit.> is as large as allowed by the present-day IceCube <cit.> and Auger <cit.> upper limits. Yet, even with this aggressive choice of background, our projected bounds in bounds remain comparable or better than present-day ones. Figure <ref> shows that, naturally, the bounds degrade when using the Burkert Galactic DM profile instead of the NFW profile. Because the Burkert profile is puffier, the bounds derived from the analysis of the angular distribution of events are weakened, so most of the limit-setting power comes from the energy distribution of events instead; see also bounds_burkert. Still, the energy analysis sets bounds using the Burkert profile that are only a factor-of-2 worse than using the NFW profile. Thus, given our extant imperfect knowledge of the Galactic DM profile—in particular, given the possibility of a puffy DM profile—leveraging the interplay between energy and angular information is key. § SUMMARY AND OUTLOOK In the next decade, ultra-high-energy (UHE) neutrino telescopes, presently in planning, will deliver a new way to look for heavy dark matter (DM), with masses in excess of 10^7 GeV, via its decay into UHE neutrinos, with energies in excess of 10^7 GeV. To properly harness this potential, it is critical to disentangle the signatures of UHE neutrinos of DM decay origin from the signatures of UHE neutrinos of astrophysical and cosmogenic origin—long-sought but still undiscovered—that act as a background to DM searches. Failure to do so may incur in steep misrepresentation when claiming discovery of DM decay, inferring the DM mass and lifetime in the event of discovery, or setting bounds on the DM mass and lifetime otherwise. The task is complicated by the fact that the size and shape of the non-DM neutrino background is unknown, that the number of detected events may be small, and that the direction- and energy-measurement capabilities of the detectors are limited. Even so, we have shown, by means of detailed forecasts, that these obstacles are surmountable. Key to that is to examine the energy and angular distributions of the detected UHE neutrinos. 
They grant us access to the essential differences between the diffuse neutrino fluxes from DM decay and from the non-DM background: in energy, the former is concentrated around the DM mass, while the latter is more spread out (diffuse_fluxes), and, in direction, the former is concentrated around the Galactic Center (GC)—where DM is abundant—while the latter is isotropic (sky_map_flux). In our forecasts, we look for these differences in projected observations. We have geared our forecasts to the radio-detection of UHE neutrinos in the envisioned IceCube-Gen2 neutrino telescope, which we simulate using state-of-the-art methods, including experimental nuance that dull the above differences between the fluxes. Our findings are promising: these differences survive an analysis under realistic experimental conditions (Figs. <ref>, <ref>). Therefore, while the existence of a non-DM UHE neutrino background, even the largest presently allowed, weakens claims of discovery of DM decay or bounds on it, it does not necessarily preclude them. Still, the limit-setting potential and, particularly, the discovery potential, are contingent to the Galactic DM profile peaking markedly towards the GC (Figs. <ref> and <ref>). Regarding the discovery of DM decay, we have shown that DM with mass between 10^8 GeV and 10^10 GeV and lifetime of roughly 10^29 s should be discoverable after 10 years of operation of the radio array of IceCube-Gen2, even in the presence of a medium-sized non-DM neutrino background that yields about 3 events per year (discovery_prospects). This is conservatively achieved using only the angular distribution of detected events. Under a larger background, of about 33 events per year, discovery becomes unfeasible in the face of existing bounds on the DM lifetime. Our discovery forecasts depend only mildly on the shape of the energy spectrum of the non-DM neutrino background—whose size we let float in our analyses—and depend mainly on the total number of events. In the event of discovery, the DM mass and lifetime could be measured with reasonable accuracy and precision (reconstructed), depending on their true values, and the flux of UHE neutrinos from DM decay could be similarly inferred (flux_reconstructed). Importantly, this result is robust: when inferring the values of the DM parameters—and also when setting bounds on them (see below)—we analyze the simulated event samples without assuming knowledge of the shape and size of the energy spectrum of the non-DM neutrino background. If discovery is not possible, we will be able to place lower limits on the DM lifetime, which we forecast. Using the joint angular and energy distribution of events allows us to constrain the presence of bump-like energy spectra and excesses towards the Galactic Center even under challenging, but plausible scenarios where the energy spectrum of the non-DM neutrino background is medium-sized and whose bump-like energy spectrum inconveniently resembles that of neutrinos from DM decay (bounds_medium_bg). Even in the presence of a large bump-like non-DM background, using the energy distribution of events safeguards the limits on the DM lifetime for DM masses sufficiently far from the peak of the non-DM flux (bounds). Overall, we forecast lower limits on the DM lifetime that are comparable to, or better than, existing limits from gamma rays and TeV–PeV neutrinos. 
While our forecasts are geared to the detection of UHE neutrinos in the radio array of IceCube-Gen2, our conclusions apply generally to planned neutrino telescopes of comparable size, radio-based and otherwise, and our methods can be readily adapted to them. In particular, detectors with an envisioned high angular resolution, like GRAND, could be better at discovering or discriminating against an excess of events towards the GC <cit.>. It seems that, fortunately, the potential of next-generation UHE neutrino telescopes to probe heavy DM decay may be safeguarded against sizable unknown neutrino backgrounds. § ACKNOWLEDGEMENTS We thank Marco Chianese and Christian Glaser for useful discussions and Chengchao Yuan for comments on the manuscript. DF, MB, and VBV are supported by the Villum Fonden under project no. 29388. This work used resources provided by the High Performance Computing Center at the University of Copenhagen. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 847523 ‘INTERACTIONS’. § ADDITIONAL FIGURES Without further ado, we include additional figures to complement those in the main text: Figure <ref> Discovery prospects at 3σ, for the NFW Galactic DM profile. Figure <ref> Lower limits on DM lifetime under a large background of non-DM UHE neutrinos, for the NFW profile. Figure <ref> Discovery prospects at ≥ 2σ and ≥ 3σ, for the Burkert Galactic DM profile. Figure <ref> Lower limits on DM lifetime, for the Burkert Galactic DM profile.
http://arxiv.org/abs/2307.00212v1
20230701033055
Internal-External Boundary Attention Fusion for Glass Surface Segmentation
[ "Dongshen Han", "Seungkyu Lee" ]
cs.CV
[ "cs.CV" ]
Internal-External Boundary Attention Fusion for Glass Surface Segmentation Dongshen Han^1 Seungkyu Lee^1 ^1Kyunghee University {han-0129, seungkyu}@khu.ac.kr Received October 2007. Revised February 2008. Accepted March 2008. ======================================================================================================== Glass surfaces of transparent objects and mirrors are not able to be uniquely and explicitly characterized by their visual appearances because they contain the visual appearance of other reflected or transmitted surfaces as well. Detecting glass regions from a single-color image is a challenging task. Recent deep-learning approaches have paid attention to the description of glass surface boundary where the transition of visual appearances between glass and non-glass surfaces are observed. In this work, we analytically investigate how glass surface boundary helps to characterize glass objects. Inspired by prior semantic segmentation approaches with challenging image types such as X-ray or CT scans, we propose separated internal-external boundary attention modules that individually learn and selectively integrate visual characteristics of the inside and outside region of glass surface from a single color image. Our proposed method is evaluated on six public benchmarks comparing with state-of-the-art methods showing promising results. § INTRODUCTION Transparent objects and mirror surfaces are everywhere around such as windows, bottles, eye glasses and omnipresent mirrors. They are expected to be detected, localized and reconstructed from color images in computer vision tasks as other opaque objects are done. However, glass surface is not able to be uniquely characterized by their visual appearances. Most transparent surface region show visual appearance of both transmitted background scene and reflected objects. Visual appearance observed from mirror surface is entirely from reflected objects. Therefore, visual characteristics of glass object are not able to be uniquely defined and observed. This makes performance limitations of computer vision methods in practical many applications. For example, autonomous mobile robot may collide with transparent front door or mirror wall. Robot arm struggles to grip a transparent bottle. Semantic segmentation methods using convolutional neural networks <cit.> have tried to identify visual texture and corresponding surface material type of objects <cit.> <cit.> <cit.> <cit.> <cit.>. Long  <cit.> transpose feature extraction layers to a segmentation task and update parameters by fine tuning. At the same time, it designs a novel structure that combines low-level and high-level semantics, which is extensively utilized by following semantic segmentation networks such as <cit.> . Lin  <cit.> integrate the features of encoder and decoder phases and utilizes chained residual pooling in the module to capture contextual information over a wide range of contexts. Non-local operators <cit.> based on a self-attention mechanism <cit.> are applied in semantic segmentation <cit.><cit.><cit.>. Song  <cit.> propose fully attentional blocks to encode spatial and channel attentions in similarity graphs with high computational efficiency. Most of these methods, however, fail to characterize common visual appearance of glass surface within the boundary of transparent or mirror object region. 
Recently, glass surface object segmentation methods are proposed including transparent objects <cit.> <cit.> <cit.> <cit.> and strongly reflective objects such as mirror <cit.> <cit.> <cit.> <cit.>. Mei  <cit.> propose to use different dilation convolutions and thus use different sensory fields to capture contrast features and discontinuities in semantic information. However, overlapped non-glass objects lead to semantic discontinuities. Xie  <cit.> propose a new benchmark for glass object segmentation and considers the characteristics of transparent object boundary features. EBLNet <cit.> extracts boundary features employing a non-local approach. It enhances whole boundary features of glass surface and consider the relationship between the inside and the boundary of glass object region. By assigning additional attention on the boundary of glass object, they alleviate the problem of the deficiency of common visual appearance inside the boundary of transparent or mirror objects. Guan  <cit.> propose a model to exploit the semantic associations between the mirror and its surrounding objects for a reliable mirror localization. However, the method tends to be biased to the surrounding objects appeared in the training dataset that are not core features of glass objects. Xie  <cit.> employ attentional mechanisms to explore relationships among glass objects of different categories. Reflections and ghosts are employed as clues for the identification of glass surface. RCAM<cit.> guides the network to identify reflections based on the feature information obtained from the reflection removal network<cit.>. On the other hand, external boundaries have played a crucial role in salient object segmentation<cit.><cit.><cit.>, especially in medical imaging, such as CT image segmentation<cit.> <cit.> <cit.>. In CT scan or X-ray images, visual textures of human organ are not clearly separable to each other. Instead, thick skeleton or outer boundary features have been utilized for the segmentation of organ regions in CT images<cit.>. In CT scans, boundary of organ region is ambiguous containing lots of noises. In X-ray images, bones and organs are overlapped showing confused boundary connections. In such cases, organ region segmentation focuses on the boundary and texture features of external boundary, which is the order of human observation<cit.>. Our careful observations on the prior methods and diverse transparent and mirror objects bring the conclusion that neither reflections or ghosts inside glass region nor surroundings of glass object can reliably describe glass surface. As the clues (either at inside or outside of glass region) are located farther from the boundary of a glass object, observed features are less relevant to the characteristics of glass surface. Instead, we have to focus more on the inner and outer vicinity of the boundary. Outer vicinity of the boundary contains coherent appearance that decide the physical frame of a glass object. Inner vicinity strongly reveals the characteristics of glass surface based on the light refraction or distortion, while is less affected by the appearance of transmitted background objects than the middle area of glass surface. These two aspects compositely describe the existence of glass surface. Actually, this scheme is how human perceive glass surface under challenging environment. 
According to this insight, we define the external boundary (exboundary) and internal boundary (inboundary) as the regions in the outer and inner vicinity, respectively, of the boundary of a glass object. Our aim is to address the glass surface object segmentation problem. We design deep neural networks based on the following four observations: (1) Humans recognize an object from its salient features first. This is also obvious with glass objects. Humans tend to identify the potential area of a glass surface object from its easily distinguishable internal and external boundary regions, rather than from the entire glass surface region. (2) Glass surface objects show diverse internal and external boundary appearances with varying relative importance. A window frame is a strong clue observed in the external boundary region of a transparent surface. A glass bottle does not have an external boundary but has strong internal boundary characteristics from light refraction. (3) Internal and external boundaries play different roles in the recognition of a glass surface region. A window frame observed at the external boundary of a window decides the potential glass surface region. Further investigation of the internal boundary, searching for reflections or ghosts, determines the presence of window glass. Ignoring such semantic differences between internal and external boundary regions may result in incorrect segmentation of window frames without window glass or of opened doors. (4) Regions closer to the boundary show stronger characteristics of the glass surface. Figure <ref> shows two challenging glass objects. The glass cup shows strong internal boundary characteristics but a weak external boundary. On the other hand, the window has a strong external boundary clue (the window frame). Based on our study and observations, we claim that glass object segmentation networks have to collect external and internal boundary features individually, and then selectively integrate them for potential glass surface region segmentation and for determining the presence of a glass surface, respectively. To this end, we construct an Internal-External Boundary Attention Module (IEBAM) with step-by-step supervision that improves the feature fusion of residual networks <cit.>. IEBAM utilizes an attention mechanism to implement the human scheme of glass surface detection. It extracts salient external and internal boundary features of a glass surface object, such as the glass frame and light distortion, respectively. After that, it obtains more detailed internal boundary features for glass detection. In order to handle glass surface objects with a weak external boundary, such as glass cups, where the external boundary fails to provide significant features, we propose a Fused Boundary Attention Module (FBAM). FBAM combines internal and external boundaries and weighs internal boundary features more heavily by learning the semantic clues of transparent objects. The glass surface region is determined by finding the relationship between the reinforced boundary and glass features such as reflections occurring inside the transparent object. § METHODS Our glass surface segmentation network (Figure <ref>) is based on two newly proposed modules: the Internal-External Boundary Attention Module (IEBAM) and the Fused Boundary Attention Module (FBAM). IEBAM separately learns internal and external boundary features, and exploits the internal boundary features to obtain the potential locations of glass surface object regions.
FBAM learns glass surface object features from the previous network and fuses internal and external boundary features in proportion obtaining reinforced boundary features. By learning the relationship between features such as reinforced boundaries and reflections from glass regions, the confidence of glass regions are improved and non-glass regions are suppressed. Finally, inspired by contour loss <cit.>, we incorporate internal and external losses of spatial weight map assignment decomposition in cross-entropy loss. Our framework is inspired by EBLNet<cit.> as shown in Figure <ref>. Backbone is DeeplabV3+<cit.> and outputs multi-level features (layer 1, layer 2, layer 3, layer 4, and ASPP<cit.>). Multi-level features are fed to Internal External Boundary Attention Module (IEBAM). IEBAM outputs internal, external boundaries and body features of glass surface. Fused Boundary Attention Module (FBAM) utilizes the output of IEBAM to refine the final transparent and glass region features. Entire network calculates losses and performs training in five different aspects: boundary-wise, external boundary-wise, internal boundary-wise, body-wise (non-boundary region), and segmentation-wise. Note that our proposed method shows significant advances over EBLNet in the following points. First, EBLNet collects features from entire boundary region all together. In our method, diverse inside and outside characteristics of transparent surface are separately identified and optimally utilized resulting in cleaner segmentation results. In PGM of EBLNet, using whole boundary areas containing non-glass region misleads glass detection. §.§ Internal External Boundary Attention Module IEBAM structure is shown in Figure <ref>(a). Input of an IEBAM module is the outputs F_input, F_high and F_low generated by backbone network of each layer, thus obtaining feature maps of different receptive field of glass surface: (1) boundary feature maps, F'_b, (2) internal boundary feature maps, F'_in, (3) external boundary prediction, F'_ex, (4) ‘body’ feature maps, F'_body (segmentation features except the boundary region), (5) ‘complete’segmentation feature maps, F'_m (that characterizes both body region and internal boundary region). Among them, F'_ex, F'_in and F'_m are fed to the subsequent FBAM module. F'_ex, F'_in, F'_b and F'_m employ convolution operation to obtain F_ex, F_in, F_b and F_m, and perform loss calculations. IEBAM is designed according to the observation scheme of human with glass surface object. Most easily identifiable internal and external boundary features are extracted first, and these are used as the basis for obtaining internal and final glass surface features. Because lower layers of backbone networks contain cleaner boundary information, we concatenate F_input and the refine features F_low and obtain rough boundary features F_b by convolution operation. In order to obtain the internal boundary features F'_in accurately, we concatenate whole boundary region features and F'_low. Fully attention network focuses on the correlation between internal boundary and F_low to obtain internal boundary attention. Finally, internal boundary features are refined based on the attention with convolution operations, which is same for external boundary defined as follows. F'_in=g_3×3(f(g_3×3([F'_b;F_low)]);g_3×3([F'_b;F_low])), where [; ] and g_3×3 denote concatenation and convolution. f is fully attention network. F_input contains glass region features computed by the previous layer of the network. 
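As a rough illustration of the internal-boundary branch defined by the equation above, the following PyTorch-style sketch concatenates F'_b with F_low, applies a simplified full (non-local) attention over spatial positions in place of f, and refines the attended features with 3×3 convolutions. Channel sizes, the exact form of the attention, and the two-argument signature of f are simplified assumptions, so this should be read as a sketch of the operation rather than the module itself.

import torch
import torch.nn as nn

class InternalBoundaryBranch(nn.Module):
    # Sketch of F'_in = g3x3( f(g3x3([F'_b; F_low]), g3x3([F'_b; F_low])) ).
    # f_b and f_low are assumed to share the same spatial resolution.
    def __init__(self, c_b, c_low, c_out):
        super().__init__()
        self.fuse = nn.Conv2d(c_b + c_low, c_out, 3, padding=1)   # g3x3 on the concatenation
        self.refine = nn.Conv2d(2 * c_out, c_out, 3, padding=1)   # final g3x3

    def forward(self, f_b, f_low):
        x = self.fuse(torch.cat([f_b, f_low], dim=1))             # (B, C, H, W)
        b, c, h, w = x.shape
        v = x.flatten(2)                                          # (B, C, H*W)
        # Full spatial self-attention; note the HW x HW map is memory heavy.
        attn = torch.softmax(v.transpose(1, 2) @ v / c ** 0.5, dim=-1)
        f_attn = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return self.refine(torch.cat([f_attn, x], dim=1))         # refined internal-boundary feature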
Different from EBLNet, where the whole-boundary features of the glass surface object are subtracted from F_input, we subtract F'_in from F_input to reduce the disturbance from the external boundary region. F'_high contains a large amount of high-receptive-field information, so the glass body feature is concatenated with it and the final glass surface object body feature F'_body is obtained by a convolution operation. This procedure is formulated as follows. F'_body=g_3×3([F'_high;(F_input-F'_in)]), Different from EBLNet, which directly fuses the whole boundary features, we perform F'_m=F'_body+F'_in, so that external boundary region features, which do not belong to the glass region, are excluded from the final merged feature F'_m. Finally, F'_in, F'_ex, F'_body, and F'_m are passed through the convolution operation and the loss calculation separately. We compare the residual feature map generated in the process of obtaining F'_body in IEBAM with that of EBLNet, which are formulated as follows. F'_residual=F_input-F'_in (ours), F'_residual=F_input-F'_b (EBLNet), where F'_residual is the residual feature map generated in the process of obtaining F'_body; F'_residual is visualized by dimensionality reduction using principal component analysis (PCA) in Figure <ref>(b). Employing F'_in reduces noise compared to employing F'_b. §.§ Fused Boundary Attention Module The FBAM structure is shown in Figure <ref>. Although internal and external boundaries of the glass surface containing different features are obtained, boundary features vary over different glass surface types. Windows and mirrors usually contain strong external boundaries, while glass cups and bottles have very weak external boundaries. Therefore, we fuse internal and external boundary features based on the semantic information of glass surface objects extracted from the previous module. We are inspired by the fusion of depth image features and RGB image features <cit.>. We use a sigmoid to assign weights to internal and external boundaries. The ground truths of our internal and external boundary regions share a one-pixel-thick real boundary, as shown in Figure <ref>(b). We therefore discard ReLU and other nonlinear functions whose outputs can be equal to zero. In particular, to fuse internal and external boundary features proportionally, we perform global pooling and convolution operations on the glass segmentation features obtained from the previous network, and obtain the boundary attention after employing a sigmoid function. Internal and external boundary features are fused proportionally by the boundary attention values, and the boundary-enhancing feature map F'_en is obtained by a convolution operation, which can be formulated as follows. μ(F)=g_1×1(R(N(g_1×1(G_3×3(F_input))))), G_3×3(F_input)=G(g_3×3(g_3×3(F_input))), α=1/(1+exp(-μ(F'_in))), β=1-α, F_en=g_3×3(α·F'_in+β·F'_ex), where G is the global average pooling operation, and R, N are the ReLU activation and Batch Normalization (BN) operations. Both the enhanced boundary features F'_en and the unique glass surface object features (ghosts, reflections) in F'_m belong to the same glass object semantics while revealing different characteristics. We use two convolution operations to transfer the enhanced boundary features to the feature space of the glass surface object body. Then, we use F'_en as a query and utilize the fully attentional method to improve the score of glass surface object merged regions.
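A minimal sketch of the proportional fusion described by the equations above is given below. The prose applies the pooling to the glass-segmentation features of the previous network while the formula writes μ(F'_in); the sketch follows the prose and takes a generic gating feature as input. Channel sizes and the exact convolution stack are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryFusion(nn.Module):
    # Sketch of alpha = sigmoid(mu(.)), beta = 1 - alpha,
    # F_en = g3x3(alpha * F'_in + beta * F'_ex).
    def __init__(self, c):
        super().__init__()
        self.pre = nn.Sequential(nn.Conv2d(c, c, 3, padding=1),
                                 nn.Conv2d(c, c, 3, padding=1))      # G3x3 before pooling
        # mu: g1x1 -> BN -> ReLU -> g1x1; BN over the pooled 1x1 map assumes batch size > 1.
        self.mu = nn.Sequential(nn.Conv2d(c, c, 1), nn.BatchNorm2d(c),
                                nn.ReLU(inplace=True), nn.Conv2d(c, c, 1))
        self.out = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, f_in, f_ex, f_gate):
        pooled = F.adaptive_avg_pool2d(self.pre(f_gate), 1)   # global average pooling G
        alpha = torch.sigmoid(self.mu(pooled))                # per-channel weight in (0, 1)
        return self.out(alpha * f_in + (1.0 - alpha) * f_ex)  # enhanced boundary feature F_en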
Specifically, we utilize F'_en as a fully-attentional query feature, so that the relationship between F'_en and features specific to the glass region in F'_m, such as reflections and ghosts, is detected and F'_m can be refined as formulated below. F'_m(i)=γ∑_j=1^c [exp( Q_en(i)·K_m(j)) /∑_k=1^c exp( Q_en(i)·K_m(k))] ·V_m(j) +F'_m(i), §.§ Decomposed Contour Loss The locations of internal and external boundaries are ambiguous, and the thickness of useful boundary regions such as glass frames is not constant. Inspired by the contour loss <cit.>, we apply spatial weight maps in the cross-entropy loss, which assign relatively high values to emphasize pixels near the external and internal glass boundaries. The external boundary spatial weight map M^C_ex is formulated as follows. M^C_ex=g^th_ex·Gauss(g^th_b)+1, where g^th_b, g^th_ex are the ground truths of the boundary and the external boundary region. In our experiments, the thickness of g^th_b is set to 9, and those of g^th_in, g^th_ex are set to 5. The internal and external boundaries include the real boundary of thickness 1, as shown in Figure <ref>(b). A Gaussian smoothing filter is then applied to obtain the spatial weight map of the whole boundary. The contour loss L_in is implemented by the following formula. L_in=-∑_x,y M^C_in(x,y)·(g^th_in(x,y)·log Y_in(x,y) +(1-g^th_in(x,y))·log(1-Y_in(x,y))), where Y_in(x,y) indicates the predicted saliency map; the loss of the external boundary is calculated in the same way. §.§ Network Architecture and Loss Function The proposed modules are trained in an end-to-end way for glass object bodies and glass surface boundaries, respectively. IEBAM outputs F_in, F_ex, F_b, F_body, which are the predictions of the internal boundary, external boundary, boundary, and glass object body, respectively. FBAM outputs F_m, which is the prediction of the final integrated glass surface object region. The joint loss function is as follows. L_joint=L_b(F_b,G_b)+L_in(F_in,G_in)+L_ex(F_ex,G_ex) +L_body(F_body,G_body)+L_m(F_m,G_m), where G_m, G_b, G_in, G_ex and G_body indicate the ground truths of the merged glass region, original boundary, internal boundary, external boundary, and body, as shown in Figure <ref>(b). L_b is a normal Dice loss <cit.>, while L_in, L_ex adopt the decomposed contour loss. L_m and L_body are standard cross-entropy losses. § EXPERIMENTAL EVALUATIONS Datasets: We conduct experiments on four glass segmentation and two mirror segmentation datasets (Trans10k <cit.>, GDD<cit.>, GSD<cit.>, GSD-S<cit.> and MSD<cit.>, PMD<cit.>). GDD contains 2980 training and 936 test images. Trans10K is the largest glass object segmentation dataset with 5000 training images, 1000 validation images and 4428 test images. It has two categories: stuff and things. GSD and GSD-S are the latest transparent object datasets collected and labeled through networking and photography. GSD consists of 4102 annotated glass images, and contains close-up, medium, and long shots from diverse scenes. The GSD-S dataset includes semantic information about other objects in the images; however, we do not utilize it. MSD is a large mirror segmentation dataset with 4018 images (3063 training, 955 test), and PMD is the latest mirror segmentation dataset collected in outdoor and indoor scenes. Implementation Details: Our implementation is based on PyTorch, and our model is trained and tested on a PC with a 16-core i7-9700K 3.6 GHz CPU and 8 NVIDIA GeForce RTX 3090 GPU cards for the Trans10k, GDD, and MSD datasets. We do not use conditional random fields (CRF) <cit.> as post-processing.
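Returning to the decomposed contour loss defined earlier in this section, the weight-map construction and the weighted cross-entropy can be sketched as follows. The Gaussian kernel width, the boundary thicknesses being encoded directly in the binary masks, and the use of probabilities (after a sigmoid) for the prediction are assumptions made for illustration.

import torch
import torch.nn.functional as F

def gaussian_blur(mask, sigma=2.0, ksize=9):
    # Separable Gaussian smoothing of a binary boundary mask of shape (B, 1, H, W).
    x = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    g = torch.exp(-0.5 * (x / sigma) ** 2)
    g = (g / g.sum()).to(mask.dtype)
    kh = g.view(1, 1, 1, ksize)
    kv = g.view(1, 1, ksize, 1)
    out = F.conv2d(mask, kh, padding=(0, ksize // 2))
    return F.conv2d(out, kv, padding=(ksize // 2, 0))

def decomposed_contour_loss(pred, gt_inex, gt_b, eps=1e-6):
    # M^C = gt_inex * Gauss(gt_b) + 1: pixels of the internal (or external)
    # boundary region closest to the real boundary receive the largest weights.
    weight = gt_inex * gaussian_blur(gt_b) + 1.0
    bce = -(gt_inex * torch.log(pred + eps)
            + (1.0 - gt_inex) * torch.log(1.0 - pred + eps))
    return (weight * bce).mean()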
Evaluation Metrics: We use five evaluation metrics commonly used in semantic and glass surface segmentation tasks: intersection over union (IoU), pixel accuracy (ACC), weighted F-measure (F_β) <cit.>, mean absolute error (MAE), and balance error rate (BER). Since Trans10k has two categories (stuff and things), we adopt mean intersection over union (mIoU) and mean balance error rate (mBER) as the evaluation metrics. F_β is a weighted harmonic mean of average precision and average recall, defined as follows. F_β=(1+β^2)(Precision×Recall)/(β^2·Precision+Recall), where β^2 is set to 0.3 as suggested in <cit.>. Mean absolute error (MAE) is widely used in foreground-background segmentation tasks, where the average pixel-wise error between the predicted mask P and the ground truth mask G is calculated. MAE=1/(H×W)∑_i=1^H∑_j=1^W| P(i,j)-G(i,j)|, where P(i, j) indicates the predicted probability at location (i, j). We employ the balance error rate (BER/mBER), which takes into account the imbalance between transparent (mirror) and non-transparent (non-mirror) regions to quantitatively evaluate the performance of glass surface segmentation. BER=(1-1/2(TP/N_p+TN/N_n))×100, where TP, TN, N_p, and N_n represent the numbers of true positives, true negatives, glass pixels, and non-glass pixels. §.§ Experiment on Trans10k For a fair comparison, hyperparameters are taken from <cit.>. A stochastic gradient descent (SGD) optimizer, an initial learning rate of 0.01, and learning-rate decay <cit.> with a power of 0.9 are used for 16 epochs. Input images are augmented by random horizontal flipping and resizing to a size of 512×512. The backbone for Trans10k is ResNet50, which uses two different output strides of 8 and 16 and is pre-trained on ImageNet <cit.>, while the remaining layers of our model are randomly initialized. We utilize mIoU, mBER, and MAE as segmentation metrics in the ablation study, and we add ACC when comparing with other methods. However, in the ablation study using Trans10k, we halve the learning rate and set the total batch size to 8 for 40 epochs in order to conduct stable training and obtain more accurate test results. Comparison with State-of-the-Art Methods: We compare our method with semantic segmentation methods (HRNet<cit.>, BiSeNet <cit.>, PSPNet<cit.>, DeepLabV3+<cit.>, DenseASPP<cit.>, FCN<cit.>, RefineNet<cit.>) and glass object segmentation methods (Translab<cit.>, EBLNet<cit.>). As summarized in Table 1, the proposed method achieves the best scores in the four metrics (output strides of 8 and 16 are set for a fair comparison with EBLNet). Figure <ref> shows compared sample results. In the first image, the glass-free gateway in the middle and the glass windows are all accurately detected by our method. The second image has both non-glass and glass bottles, and our method accurately finds only the glass bottles. Ablation Study 1: with/without IEBAM and FBAM: With DeepLabV3+<cit.> as the baseline, we add only IEBAM ("+IEBAM") and both IEBAM and FBAM ("+IEBAM +FBAM"). In Table <ref>, adding IEBAM improves mIoU by 5.13%. This shows the importance of the feature extraction scheme in glass object detection. FBAM brings an additional improvement of 0.36% in mIoU. Ablation Study 2: with/without inboundary (In) and exboundary (Ex): We conduct experiments in the FBAM network using the internal boundary features F_in and the external boundary features F_ex as the query in order to explore the effects of the internal and external boundary, respectively. As shown in Table <ref>, the external boundary plays a more dominant role than the internal boundary when recognizing transparent objects.
Most glass objects have distinct external boundaries, but a small number of glass objects have more distinct internal boundaries, and our FBAM module deals with this situation properly. Ablation Study 3: Effect of different loss terms: To demonstrate that pixels closer to the real boundary within the internal and external boundary regions provide stronger features, we test with the binary cross-entropy loss (L_in/L_Ex-bce), the Dice loss (L_in/L_Ex-dice), and the decomposed contour loss (L_in/L_Ex-co). The decomposed contour loss pays more attention to the pixels near the real boundary. As shown in Table <ref>, our decomposed contour loss improves performance by 0.66% and 0.39% compared with the Dice loss and the binary cross-entropy loss, respectively. Ablation Study 4: Varying boundary thickness: We vary the thicknesses of the internal and external boundary regions with the two categories (stuff and things) of objects on Trans10k. In Table <ref>, as the thicknesses of the internal and external boundaries increase, mIoU increases up to 91.0%. §.§ Experiments on GDD and MSD For a fair comparison, training and testing hyperparameters follow <cit.>. For GDD, input images are augmented by random horizontal flipping and resizing, and the input of the network is standardized to 416×416 in both the training and testing sets. The parameters of the multi-level feature extractor are initialized with a pre-trained ResNet101. Other parameters are initialized randomly. The initial learning rate is 0.003, and stochastic gradient descent (SGD) with a momentum of 0.9 is used for 200 epochs. For MSD, input images are augmented by random horizontal flipping and resizing, and the input of the network is standardized to 384×384. The parameters of the multi-level feature extractor are initialized with pre-trained ResNet101 and ResNeXt101 networks, and other parameters are initialized randomly. The learning rate is 0.002, and SGD with a momentum of 0.9 is used for 160 epochs. We compare our method with state-of-the-art semantic segmentation, shadow detection, and salient object detection methods, glass segmentation methods such as Translab, GDNet, and EBLNet, and mirror segmentation methods such as LSA<cit.> and MirrorNet, as shown in Figure <ref>. Tests on MSD show that our method segments the mirror region while reducing the interference of reflections inside the mirror, guided by the external boundary. In Tables <ref> and <ref>, our method shows the best performance over prior work. Interestingly, in Table <ref>, compared to LSA, which segments the semantics of objects around the mirror as auxiliary features, we achieve a better IoU by 1.61%. The improvement of our method on GDD is somewhat limited. We observe that, as shown in Figure <ref>, GDD contains oblique and close-up photos taken to capture the reflection phenomenon. Thus, most of its glass images do not show boundaries. However, even in such extreme cases, our method shows an improvement, as shown in Table <ref>. Ablation Studies 5, 6: with/without inboundary (In) and exboundary (Ex): As summarized in Table <ref>, the training result using only internal boundaries on GDD is improved by 0.33% in IoU by adding external boundaries, and the training result using only external boundaries on GDD is improved by 0.71% in IoU by adding internal boundaries. The training result using only internal boundaries on MSD is improved by 1.33% in IoU by adding external boundaries, and the training result using only external boundaries on MSD is improved by 0.23% in IoU by adding internal boundaries.
§.§ Experiments on GSD, GSD-S and PMD Table <ref> shows the performance comparison with state-of-the-art methods on the GSD and GSD-S glass object datasets. The proposed method achieves the best performance on the two most important metrics in semantic segmentation tasks (IoU and F_β). In Table 8, our method also shows the best performance in the three metrics on PMD. In particular, the proposed method achieves a 2.14% IoU improvement compared to the state-of-the-art mirror segmentation method. Directly capturing the frame information of transparent objects on the glass surface, rather than utilizing the semantic information of other objects in the image during the training process, is more effective, especially in mirror segmentation tasks. §.§ Conclusion We propose IEBAM and FBAM for glass object segmentation. Extensive evaluations on six public datasets show that our method outperforms prior glass object segmentation work. We observe that the external frame region of glass objects plays a more critical role than any other semantic segmentation information of surrounding objects, and helps avoid overfitting of a model to the dataset.
http://arxiv.org/abs/2307.01643v1
20230704105536
Quasi-crystalline order in vibrated granular matter
[ "Andrea Plati", "Raphael Maire", "Etienne Fayen", "Francois Boulogne", "Frederic Restagno", "Frank Smallenburg", "Giuseppe Foffi" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.stat-mech" ]
Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France Quasi-crystals are aperiodic structures that present crystallographic properties which are not compatible with that of a single unit cell. Their revolutionary discovery in a metallic alloy, less than three decades ago, has required a full reconsideration of what we defined as a crystal structure. Surprisingly, quasi-crystalline structures have been discovered also at much larger length scales in different microscopic systems for which the size of elementary building blocks ranges between the nanometric to the micrometric scale. Here, we report the first experimental observation of spontaneous quasi-crystal self-assembly at the millimetric scale. This result is obtained in a fully athermal system of macroscopic spherical grains vibrated on a substrate. Starting from a liquid-like disordered phase, the grains begin to locally arrange into three types of squared and triangular tiles that eventually align, forming 8-fold symmetric quasi-crystal that has been predicted in simulation but not yet observed experimentally in non-atomic systems. These results are not only the proof of a novel route to spontaneously assemble quasi-crystals but are of fundamental interest for the connection between equilibrium and non-equilibrium statistical physics. Quasi-crystalline order in vibrated granular matter G. Foffi August 1, 2023 =================================================== In 1982, Shechtman discovered the first alloy with a diffraction pattern for which the Bragg peaks showed a symmetry that is forbidden by crystallography in periodic solids <cit.>. This discovery was initially met with resistance: the existence of structures in which atoms can be arranged in spatial structures which lack long-range periodicity, while still preserving sufficient long-range order to generate discrete Bragg peaks clashed with the elegant picture of crystals as consisting of a repeating unit cell. Nonetheless, eventually materials with this property, which were called quasi-crystals (QCs) <cit.>, changed the way in which scientists interpret the crystal state, by disentangling the concept of order from the concept of periodicity, to the point where the very definition of crystals had to be changed to include aperiodic structures <cit.>. In the following years, quasi-crystalline structures have been observed in several artificial alloys (see Refs.  divincenzo1999quasicrystals, Ranganathan1991 for a review) and were even discovered in a natural occurring mineral, Icosahedrite <cit.>, of probable extraterrestrial origin <cit.>. More recently, quasi-crystals have been also observed at much larger length scales in a wide range of soft matter systems <cit.> revealing promising optical properties for next-generation photonic devices <cit.>. 
In soft quasi-crystals <cit.>, two fundamental questions arise: i) understanding up to which length scales we can observe spontaneous quasi-crystalline order and ii) identifying the key dynamical and interaction properties required for a soft-matter system to form a quasi-crystal. For the former, the self-assembly of QCs has been observed on length scales that are related to the nature of the elementary building blocks that range from macromolecular structures <cit.> and nanoparticles <cit.> to polymers aggregates made of micelles <cit.>, passing by polymers <cit.> and thin films <cit.>. To our knowledge, the largest soft-matter quasi-crystals found to self-assemble consisted of micrometer-size micelles <cit.>. The second question is of a more fundamental nature and has been explored extensively using numerical simulations. In particular, simple coarse-grained models of interacting particles make it possible to simulate systems large enough to display aperiodicity and to pin down its origin, from anisotropic repulsive <cit.> and attractive <cit.> particles to simple isotropic potentials <cit.> and hard spheres <cit.>. Interestingly, one of the simplest systems leading to quasi-crystal formation in silico is a simple two-dimensional binary mixture of non-additive hard disks undergoing equilibrium dynamics <cit.>. This proved that, in the right geometrical conditions, quasi-crystalline order can emerge from a purely entropy-driven self-assembly process. To our knowledge, quasi-crystal self-assembly in dissipative non-equilibrium systems where energy is constantly supplied from the environment and internally dissipated (such as active or granular matter) is still unexplored. A natural avenue to explore quasi-crystal formation beyond colloidal scale and thermodynamic equilibrium is to use granular matter, which has proven itself to be an ideal playground for the exploration of non-equilibrium phenomena over the last three decades. Depending on how a granular system is confined and the imposed external driving, it may exhibit either a fluid-like or a solid-like behaviour <cit.> and can undergo a variety of phase transitions <cit.>. However, spontaneous quasi-crystal formation in systems driven out of thermodynamic equilibrium has not yet been observed in either experiments or simulations. In this paper, we report the experimental and numerical observation of quasi-crystalline order in a binary mixture of millimeter-size spherical grains vibrated on a substrate. Our findings demonstrate that quasi-crystals can be formed also by macroscopic particles much beyond the scale at which thermal agitation plays a role. Our system is indeed intrinsically out of equilibrium due to dissipation arising from frictional forces and energy injection due to external driving. In the following, we first discuss granular quasi-crystal formation in numerical simulations of a coarse-grained collisional model. We then report the main result of this paper namely the experimental self-assembly of a granular quasi-crystal. In both simulations and experiments, we consider a binary mixture of spherical grains (N_S with diameter σ_S and N_L with diameter σ_L) lying on a horizontal substrate. In order to characterize the geometrical properties of such a mixture, we use the following dimensionless parameters: the size ratio q=σ_S/σ_L, the fraction of small grains x_S=N_S/(N_S+N_L) and the area fraction ϕ=(N_Sσ_S^2+N_Lσ_L^2)π/4L^2, where L is the side of the box. As shown in Fig. 
<ref>a, spheres on a substrate can be mapped onto disks thanks to the introduction of a non-additivity parameter δ=2√(q)/(1+q)-1 and letting them interact at a distance smaller than σ_LS=1/2(σ_S+σ_L)(1+δ); note that -1<δ<0. This is the main idea underlying the effective 2D model considered in a previous numerical study focusing on elastic non-additive hard disks following dynamics at thermodynamic equilibrium <cit.>. This previous study revealed that, for sufficiently high area fractions and depending on the specific {q,x_S} combination, one can observe the self-assembly of different crystals including a 12-fold and an 8-fold symmetric quasi-crystal. These results are in agreement with the fact that, for conservative dynamics, geometrical constraints and hard-core interactions can be minimal ingredients for the thermodynamic stability of quasi-crystalline structures <cit.>. In this work, we focus our attention on the quasi-crystalline 8-fold symmetric phase (QC8), which was observed to self-assemble significantly more rapidly than the dodecagonal quasi-crystal <cit.>. The collisional model we use in our simulations represents the athermal/non-equilibrium extension of the one considered in <cit.>. It has been shown to embody the dissipative and forcing mechanisms of experimental systems where spherical grains are placed on vertically vibrating horizontal substrates <cit.>. In these systems, energy is injected along the vertical direction through grain-substrate collisions and then transferred to the horizontal ones through grain-grain collisions with an efficiency that depends on the impact kinematics. In the model, the dynamics is fully 2D: there is no vibrating plate since its effect is coarse-grained out thanks to the introduction of instantaneous grain-grain collisions which take into account both energy injection and dissipation. In this model, a binary collision between grains of mass m_i, m_j obeys the following rule for the velocity (𝐯_i, 𝐯_j) update: 𝐯_i' = 𝐯_i + m_j(1+α)/(m_i+m_j)(𝐯_ij·σ̂_ij)σ̂_ij + 2 m_j/(m_i+m_j)Δσ̂_ij 𝐯_j' = 𝐯_j - m_i(1+α)/(m_i+m_j)(𝐯_ij·σ̂_ij)σ̂_ij - 2 m_i/(m_i+m_j)Δσ̂_ij , where the primed letters refer to post-collisional variables, and σ̂_ij and 𝐯_ij are respectively the unit vector joining particles i and j and the relative velocity between them. The parameter α is the coefficient of restitution that embodies, for 0≤α< 1, the dissipative nature of the collision. The last term in Eqs. <ref> is responsible, via the parameter Δ, for the velocity gain arising from the non-planar collisions which are coarse-grained out in the effective 2D description (see SI for a more detailed explanation). By computing the energy change in a collision <cit.>, it is possible to see that, depending on the impact kinematics, one can have conditions in which the total energy decreases or increases. In this simple granular model, the limit of the equilibrium conservative case is recovered by setting α=1 and Δ=0. A granular system cannot attain thermodynamic equilibrium but it can reach a non-equilibrium steady state thanks to a balance between injected and dissipated energy. Nevertheless, one can often identify a conservative system with the same geometrical properties and consider it as an equilibrium counterpart. From this perspective, non-additive hard disks in the conservative limit (α=1 and Δ=0) represent the equilibrium version of our granular system.
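To make the collision rule and the non-additive mapping explicit, the following minimal sketch implements the equations above for a single binary collision. The sign conventions for σ̂_ij and 𝐯_ij, stated in the comments, are our assumptions, chosen so that the rule reduces to the standard inelastic collision for Δ = 0; the numerical values of α and Δ are placeholders.

import numpy as np

def collide(v_i, v_j, r_i, r_j, m_i, m_j, alpha=0.9, delta=0.1):
    # Effective 2D collision rule: restitution alpha plus a fixed velocity
    # injection delta along the contact direction (the Delta-driving that
    # coarse-grains the vibrating plate). Assumed conventions: sigma_hat points
    # from grain j to grain i and v_ij = v_j - v_i.
    m_tot = m_i + m_j
    sigma_hat = (r_i - r_j) / np.linalg.norm(r_i - r_j)
    v_ij = v_j - v_i
    vn = np.dot(v_ij, sigma_hat)
    v_i_new = v_i + (m_j * (1 + alpha) / m_tot) * vn * sigma_hat \
              + (2 * m_j / m_tot) * delta * sigma_hat
    v_j_new = v_j - (m_i * (1 + alpha) / m_tot) * vn * sigma_hat \
              - (2 * m_i / m_tot) * delta * sigma_hat
    return v_i_new, v_j_new   # total momentum is conserved for any alpha, delta

def sigma_LS(sigma_S, sigma_L):
    # Non-additive large-small interaction distance for unequal spheres resting
    # on a plane: with q = sigma_S/sigma_L and delta_na = 2*sqrt(q)/(1+q) - 1,
    # this equals sqrt(sigma_S*sigma_L), the horizontal distance at contact.
    q = sigma_S / sigma_L
    delta_na = 2.0 * np.sqrt(q) / (1.0 + q) - 1.0
    return 0.5 * (sigma_S + sigma_L) * (1.0 + delta_na)

For Δ = 0 and α = 1 the sketch conserves kinetic energy, recovering the conservative limit discussed above.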
Of course, given the dissipative/athermal nature of the dynamics, one cannot expect theoretical or numerical predictions for the equilibrium counterpart to hold in the granular case. However, in some specific conditions, vibrated granular materials have been shown to exhibit an equilibrium-like phenomenology, such as in the case of tracer diffusion in granular gases <cit.> or hexagonal crystal formation in monodisperse granular layers vibrated on a substrate <cit.>. The latter phenomenon is particularly relevant for our study since many aspects of the equivalent elastic hard-sphere crystallization have been observed also in the liquid-solid granular phase transition. The main challenging aspect for the observation of an equilibrium-like behaviour in our system is given by size polydispersity because it usually triggers non-equilibrium effects in vibrated granular materials. Size segregation <cit.> and violation of energy equipartition <cit.>, are two examples of that. Thus, an important question underlying the approach we propose is the following: can we tune the non-equilibrium parameters i.e. related to forcing/dissipative mechanisms of the granular system, such that it self-assembles into the quasi-crystalline structures that have been observed in the equilibrium counterpart? Results obtained through event-driven molecular dynamics (EDMD) simulations of the model described by Eqs. <ref> are reported in Fig. <ref>a. There we show the last snapshot of a simulation and the associated scattering pattern computed from the large grain positions. The final self-assembled structure exhibits no periodic order but the scattering peaks reveal an underlying 8-fold symmetry. This particular symmetry is forbidden by ordinary crystallographic order based on repeated translations of a single unit cell. Indeed, in our system, the final strcture can be decomposed into a combination of three different tiles each one appearing with different orientations. In Fig. <ref>a, we also highlight how small and large grains combine to form such tiles: we have i) small squares made of four large grains surrounding one small grain, ii) isosceles triangles made of three large grains surrounding one small grain and iii) large squares made of four large grains surrounding a square of small grains. The sides of the tiles coincide with bonds between large grain nearest neighbours and we can identify long and short bonds. The former outlines large square sides and triangle legs, the latter forms small square sides and triangle bases. In addition to the scattering pattern, another piece of evidence of the 8-fold symmetry is given by the histogram of bond orientations over the entire system (Fig. <ref>b). Here we can clearly see that short and long bonds are not uniformly distributed but are aligned along eight specific directions. These dominant directions are spontaneously selected among a continuum since PBC do not impose preferred orientations. Within a specific 8-fold set of bond directions, small and large squared tiles can only appear with two specific orientations while triangular tiles can lay along eight ones. One can then classify all the tiles in the system (see Methods) as shown in Fig. <ref>c and reconstruct the overall tiling of the plane (bottom-left side of Fig. <ref>a). The observed square-triangle tiling can cover aperiodically an infinite plane with long-range 8-fold orientational order. 
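The orientational order described above can be quantified directly from the large-grain positions, for instance by histogramming nearest-neighbour bond angles and evaluating the 8-fold bond-orientational order parameter ψ_8. The sketch below does this with a simple distance cutoff; the cutoff value and the restriction to large grains are choices left to the user.

import numpy as np
from scipy.spatial import cKDTree

def bond_angles(positions, r_cut):
    # Orientations of all bonds between pairs of (large) grains closer than r_cut.
    # Periodic boundaries, if present, can be handled via cKDTree(..., boxsize=L).
    tree = cKDTree(positions)
    pairs = np.array(sorted(tree.query_pairs(r_cut)))
    d = positions[pairs[:, 1]] - positions[pairs[:, 0]]
    return np.arctan2(d[:, 1], d[:, 0])

def psi8(angles):
    # |psi_8| approaches 1 when the bonds align with a single set of eight
    # directions separated by 45 degrees, and stays small in the liquid phase.
    return np.abs(np.mean(np.exp(8j * angles)))

A histogram of these angles, optionally split into short and long bonds by an additional distance threshold, corresponds to the bond-orientation histograms discussed above.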
What we observe in our simulations is a finite portion of such an infinite quasi-crystal; the fact that tiles of the same shape but with different orientations cover similar area fractions is consistent with this picture <cit.>. Additional EDMD simulations varying the geometrical parameters {ϕ, x_S, q}, the non-equilibrium ones {α,Δ} and considering different system sizes confirmed the robustness of QC8 formation for this model (see SI). We now turn our attention to the experimental realization. Our setup consists of non-magnetic steel spherical grains confined in a quasi-2D square container (height h and width L≫ h) which is vertically vibrated by an electrodynamic shaker following a signal z_p(t). The evolution of the system is followed by a camera that detects the horizontal positions of the grains (xy coordinates) as a function of time. The set of geometrical parameters {q,x_S,ϕ} explored in the experiments is chosen close to that used in the simulation model. The real system eventually attains a non-equilibrium steady state whose dynamical properties depend sensitively on the driving force. To tune this, we performed preliminary numerical simulations of the setup implemented through the Discrete Element Method (DEM) <cit.>. Such simulations implement granular dynamics by means of accurate contact models which allow for studying in-silico prototypes of realistic setups (see SI). From this analysis and subsequent experimental tests (see SI), we found that to observe QC8 self-assembly one generally needs vibrations strong enough to allow for an efficient vertical-to-horizontal energy transfer, but not so strong that grains pile up on each other (which alters the effective 2D packing fraction of the system). For suitable choices of the driving parameters, our experimental system indeed spontaneously forms the 8-fold quasi-crystal. The dynamics of this process is shown in Fig. <ref>a (see also the supplemental video), where we plot the occupied area fraction as a function of time for three different groups of tiles: the ones oriented according to the specific 8-fold set of directions which dominates at the end of the experiment (green), misaligned tiles (yellow) and defects (grey). These results are obtained from a seven-day-long experiment with sinusoidal vibrations and quantify the two distinct processes contributing to the QC8 self-assembly: single-tile formation and tile orientational ordering. We note that both aligned and misaligned tiles increase monotonically at very short times. After that, their evolution becomes non-monotonic: aligned tiles exhibit an increase interrupted by sudden drops, while misaligned tiles exhibit a decrease interrupted by sudden growths. Both eventually reach a final plateau. Such behaviour reveals that, once tiles are formed and locally ordered, they still need substantial rearrangements in order to form larger quasi-crystalline domains, and this makes the QC formation extremely slow. From the evolution of the reconstructed tilings shown in Fig. <ref>b, we observe that the QC8 with the dominant 8-fold symmetry grows from the boundaries toward the bulk. During this process, it is possible to observe the coexistence of different misaligned QC domains (t∼ t_max/2). In Fig.
<ref>c, we also show the evolution of the structure factor: the initial configuration shows the typical rings of a liquid-like structure, the middle one exhibits blurred peaks originating from the coexistence of multiply oriented QC domains, while the last one presents sharper peaks highlighting long-range order along a dominant set of 8 directions. Finally, from the evolution of the bond orientation histogram (Fig. <ref>c), we can see that the most favoured set of orientations is the one with large tile sides aligned with the hard walls. Repetitions of the experiment revealed that this is a reproducible feature. Indeed, an important difference with respect to the numerical case discussed above is represented by the hard horizontal walls, which break the orientational symmetry. The enforced grain alignment at the xy boundaries favours two specific sets of 8-fold symmetric directions: one with short bonds and one with long bonds aligned with the walls. The dominance of the latter can be explained by considering that the forming quasi-crystalline structure requires many more long bonds than short ones. An interesting picture seems to emerge: tiles of the desired shapes form very rapidly, probably as a consequence of their efficient local packing. However, global alignment requires far more collective rearrangements, which appear to be rare events. It is important to point out that real-time measurements of forming quasi-crystalline structures are extremely rare in experiments. Generally, the techniques used for QC self-assembly (e.g. evaporation of nanoparticle solutions) are not compatible with the observation of the dynamics during the ordering process but only allow analysis of the final structure. The experimental study of system configurations over time is thus another important novelty of macroscopic quasi-crystals, since it can shed light on unexplored dynamical properties of QC self-assembly. In Fig. <ref>a, we show the spatial configuration, the related scattering pattern, and the reconstructed tiling obtained from the experimental configuration with the largest quasi-crystalline area fraction (see the red arrow in Fig. <ref>a). On the right side of the same figure, we report the histograms of bond orientation (b) and tile area fraction (c). We note that the key properties of the quasi-crystalline structure observed in the EDMD simulations are also found in the experiment: we have the same 8-fold long-range orientational order, confirmed by the scattering intensity, and the same square-triangle tiling. By comparing Figs. <ref> and <ref>, we see that the main difference between the EDMD simulations and the experiments is the presence of larger defects in the latter. It is difficult to pin down the origin of this difference, as the z-to-xy energy transfer mechanism of the real experimental system depends, in a highly non-trivial way, on the dissipative properties of the materials and the plate roughness, while the energy injection in the EDMD simulations is fully described by a simple collision rule that depends on only two parameters. We argue that larger defects are also the reason why we observe a lower large-square area fraction in the experiment with respect to EDMD (compare Figs. <ref>c and <ref>c). In fact, large squares are favoured by a large number of small grains <cit.>, but many of these are stuck in defects, which appear mainly as clusters of small grains.
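For reference, scattering patterns and structure factors of the kind discussed above can be computed directly from grain positions; a brute-force sketch is given below. This is a generic illustration rather than the analysis code used in the paper: the normalization, the q-grid and the restriction to large grains are assumptions, and the double loop is only practical for the modest particle numbers of a single snapshot.

```python
import numpy as np

def structure_factor(pos, box, q_max=20.0):
    """Static structure factor S(q) = |sum_j exp(-i q.r_j)|^2 / N for a 2D
    configuration `pos` (N x 2 array) in a periodic square box of side `box`,
    evaluated on the grid of wavevectors compatible with the box."""
    dq = 2.0 * np.pi / box                       # wavevector spacing set by the box size
    m = int(q_max / dq)
    k = dq * np.arange(-m, m + 1)
    qx, qy = np.meshgrid(k, k)
    S = np.empty_like(qx)
    for a in range(qx.shape[0]):                 # brute-force evaluation, one q-vector at a time
        for b in range(qx.shape[1]):
            rho = np.exp(-1j * (qx[a, b] * pos[:, 0] + qy[a, b] * pos[:, 1])).sum()
            S[a, b] = np.abs(rho) ** 2 / len(pos)
    return qx, qy, S
```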
In order to test the robustness and reproducibility of the discussed results, we repeated the experiment for different initial random configurations, different drivings (sawtooth vibrations) and slightly shifted state points. As reported in the SI, the emergence of quasi-crystalline order proved to be a quite robust feature of this granular system. More than 30 years ago, the quasi-crystal self-assembly paradigm emerged in purely atomistic systems but was later extended to the nanometric and micrometric scales typical of soft matter. Here, we have shown that this idea can be extended to much larger length scales in vibrofluidized granular systems. Our study reports the first observation of quasi-crystalline order in a physical system undergoing athermal dynamics on the visible scale, where real-time measurements of the system configuration can be performed during the self-assembly process. The passage from the micrometer to the millimeter scale is not a trivial one, since in the latter the grains do not undergo thermal agitation. This means that for granular systems we cannot explain the emergence of quasi-crystalline order based on entropy arguments alone. Despite this fundamental difference, quasi-crystalline order emerges under the same conditions as predicted at equilibrium, something that could not be trivially expected. § ACKNOWLEDGMENTS We thank Andrea Puglisi and Andrea Gnoli for their invaluable help in setting up this project and for their comments on the manuscript. We also thank Marianne Impéror-Clerc, Laura Filion, Anuradha Jagannathan and Francesco Sciortino for carefully reading and commenting on our paper, and Stéphane Cabaret for the design of the quasi-2D cell. This work has been done with the support of Investissements d'Avenir of LabEx PALM (Grant No. ANR-10-LABX-0039-PALM) and of the Agence Nationale de la Recherche (ANR), grant ANR-18-CE09-0025.
http://arxiv.org/abs/2307.02556v1
20230705180018
AT2022aedm and a new class of luminous, fast-cooling transients in elliptical galaxies
[ "M. Nicholl", "S. Srivastav", "M. D. Fulton", "S. Gomez", "M. E. Huber", "S. R. Oates", "P. Ramsden", "L. Rhodes", "S. J. Smartt", "K. W. Smith", "A. Aamer", "J. P. Anderson", "F. E. Bauer", "E. Berger", "T. de Boer", "K. C. Chambers", "P. Charalampopoulos", "T. -W. Chen", "R. P. Fender", "M. Fraser", "H. Gao", "D. A. Green", "L. Galbany", "B. P. Gompertz", "M. Gromadzki", "C. P. Gutiérrez", "D. A. Howell", "C. Inserra", "P. G. Jonker", "M. Kopsacheili", "T. B. Lowe", "E. A. Magnier", "S. L. McGee", "T. Moore", "T. E. Müller-Bravo", "T. Pessi", "M. Pursiainen", "A. Rest", "E. J. Ridley", "B. J. Shappee", "X. Sheng", "G. P. Smith", "M. A. Tucker", "J. Vinkó", "R. J. Wainscoat", "P. Wiseman", "D. R. Young" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO", "astro-ph.SR" ]
0000-0002-2555-3192]M. Nicholl Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UKmatt.nicholl@qub.ac.uk 0000-0003-4524-6883]S. Srivastav Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK 0000-0001-6395-6702]S. Gomez Space Telescope Science Institute (STScI), 3700 San Martin Drive, Baltimore, MD 21218, USA Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, UK 0009-0009-2627-2884]P. Ramsden School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, UK 0000-0003-2705-4941]L. Rhodes Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK 0000-0002-8229-1731]S. J. Smartt Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK 0000-0001-9535-3199]K. W. Smith Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK 0000-0002-9085-8187]A. Aamer School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, UK Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK European Southern Observatory, Alonso de Córdova 3107, Casilla 19, Santiago, Chile Millennium Institute of Astrophysics MAS, Nuncio Monsenor Sotero Sanz 100, Off. 104, Providencia, Santiago, Chile 0000-0002-8686-8737]F. E. Bauer Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Campus San Joaquín, Av. Vicuña Mackenna 4860, Macul Santiago, Chile, 7820436 Centro de Astroingeniería, Facultad de Física, Pontificia Universidad Católica de Chile, Campus San Joaquín, Av. Vicuña Mackenna 4860, Macul Santiago, Chile, 7820436 Millennium Institute of Astrophysics MAS, Nuncio Monsenor Sotero Sanz 100, Off. 104, Providencia, Santiago, Chile Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA 02138, USA Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA 0000-0001-6965-7789]K. C. Chambers Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA 0000-0002-0326-6715]P. Charalampopoulos Department of Physics and Astronomy, University of Turku, Vesilinnantie 5, FI-20500, Finland 0000-0002-1066-6098]T.-W. Chen Technische Universität München, TUM School of Natural Sciences, Physik-Department, James-Franck-Straße 1, 85748 Garching, Germany Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK School of Physics, O'Brien Centre for Science North, University College Dublin, Belfield, Dublin 4, Ireland 0000-0003-1015-5367]H. Gao Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA 0000-0003-3189-9998]D. A. Green Astrophysics Group, Cavendish Laboratory, 19 J. J. 
Thomson Avenue, Cambridge CB3 0HE 0000-0002-1296-6887]L. Galbany Institute of Space Sciences (ICE-CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain Institut d'Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, UK 0000-0002-1650-1518]M. Gromadzki Astronomical Observatory, University of Warsaw, Al. Ujazdowskie 4, 00-478 Warszawa, Poland 0000-0003-2375-2064]C. P. Gutiérrez Institut d'Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain Institute of Space Sciences (ICE-CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA 0000-0002-3968-4409]C. Inserra Cardiff Hub for Astrophysics Research and Technology, School of Physics & Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff, CF24 3AA, UK 0000-0001-5679-0695]P. G. Jonker Department of Astrophysics/IMAPP, Radboud University, PO Box 9010, 6500 GL Nijmegen, The Netherlands SRON, Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, the Netherlands Institut d'Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain Institute of Space Sciences (ICE-CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA 0000-0002-7965-2815]E. A. Magnier Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA 0000-0003-3255-3139]S. L. McGee School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, UK 0000-0001-8385-3727]T. Moore Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK 0000-0003-3939-7167]T. E. Müller-Bravo Institute of Space Sciences (ICE-CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain Institut d'Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain 0000-0001-6540-0767]T. Pessi Instituto de Estudios Astrofísicos, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejército Libertador 441, Santiago, Chile 0000-0003-4663-4300]M. Pursiainen DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, 2800 Kgs. Lyngby, Denmark 0000-0002-4410-5387]A. Rest Space Telescope Science Institute, Baltimore, MD 21218, USA Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218, USA School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, UK Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK 0000-0003-4494-8277]G. P. 
Smith School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK CCAPP Fellow Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH, USA Department of Physics, The Ohio State University, 191 West Woodruff Ave, Columbus, OH, USA Center for Cosmology and Astroparticle Physics, The Ohio State University, 191 West Woodruff Ave, Columbus, OH, USA Konkoly Observatory, CSFK, MTA Centre of Excellence, Konkoly Thege M. út 15-17, Budapest, 1121, Hungary ELTE Eötvös Loránd University, Institute of Physics and Astronomy, Pázmány Péter sétány 1/A, Budapest, 1117 Hungary Department of Experimental Physics, University of Szeged, Dóm tér 9, Szeged, 6720, Hungary Department of Astronomy, University of Texas at Ausin, 2515 Speedway Stop C1400, Austin, TX, 78712-1205, USA Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA 0000-0002-3073-1512]P. Wiseman School of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ, UK 0000-0002-1229-2499]D. R. Young Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast, Belfast BT7 1NN, UK We present the discovery and extensive follow-up of a remarkable fast-evolving optical transient, AT2022aedm, detected by the Asteroid Terrestrial impact Last Alert Survey (ATLAS). AT2022aedm exhibited a rise time of 9±1 days in the ATLAS o-band, reaching a luminous peak with M_g≈-22 mag. It faded by 2 magnitudes in g-band during the next 15 days. These timescales are consistent with other rapidly evolving transients, though the luminosity is extreme. Most surprisingly, the host galaxy is a massive elliptical with negligible current star formation. X-ray and radio observations rule out a relativistic AT2018cow-like explosion. A spectrum in the first few days after explosion showed short-lived He II emission resembling young core-collapse supernovae, but obvious broad supernova features never developed; later spectra showed only a fast-cooling continuum and narrow, blue-shifted absorption lines, possibly arising in a wind with v≈2700 . We identify two further transients in the literature (Dougie in particular, as well as AT2020bot) that share similarities in their luminosities, timescales, colour evolution and largely featureless spectra, and propose that these may constitute a new class of transients: luminous fast-coolers (LFCs). All three events occurred in passive galaxies at offsets of ∼4-10 kpc from the nucleus, posing a challenge for progenitor models involving massive stars or massive black holes. The light curves and spectra appear to be consistent with shock breakout emission, though usually this mechanism is associated with core-collapse supernovae. The encounter of a star with a stellar mass black hole may provide a promising alternative explanation. § INTRODUCTION Astrophysical transients are now found in their thousands by optical time-domain surveys with wide-field robotic telescopes, such as the Asteroid Terrestrial impact Last Alert System <cit.>, Panoramic Survey Telescope and Rapid Response System <cit.>, Zwicky Transient Facility <cit.>, and All-sky Automated Search for Supernovae <cit.>. These are unearthing a variety of new phenomena, and survey power is now set to increase even further with the Rubin Observatory, the first wide-field survey on an 8m-class telescope <cit.>. 
Improvements in survey cadence allow us to probe populations of transients that rise and fade on timescales of days, compared to the weeks-months of typical supernovae (SNe). The majority of fast transients seem to arise from stripped massive stars <cit.>. These include the initial cooling peaks of Type IIb SNe, as well as events strongly interacting with a dense circumstellar medium (CSM) deficient in hydrogen <cit.> and sometimes helium <cit.>. A more mysterious population of fast transients has also been uncovered, with blue colours and a wide range of peak luminosities up to M<-20 mag, approaching superluminous supernovae <cit.>. Since their identification in Pan-STARRS by <cit.>, such objects have been discovered in data from the Palomar Transient Factory and the Supernova Legacy Survey <cit.>, the Dark Energy Survey <cit.>, ATLAS <cit.>, Kepler <cit.>, Hyper Suprime-Cam <cit.> and ZTF <cit.>. They have been termed Fast Blue Optical Transients (FBOTs) or Rapidly Evolving Transients (RETs). Their association with star-forming galaxies suggests a connection with massive stars <cit.>, and their photometric evolution appears consistent with shock breakout from a dense, extended envelope <cit.>. Several of the best-observed RETs also show persistent high temperatures and luminous X-ray and radio emission. The most famous example is AT2018cow <cit.>, but other well-studied objects include CSS161010 <cit.>, AT2018lug <cit.>, AT2020xnd <cit.>, and AT2020mrf <cit.>. These `Cow-like' events seem to be energised by continuous injection from a central engine <cit.>, though interaction with CSM can also contribute luminosity <cit.>. Tidal disruption events (TDEs) of stars by intermediate-mass black holes (BHs) have been considered as an alternative model <cit.>, though accretion onto a stellar-mass BH following the collapse of a massive star appears to be favoured by most authors <cit.>. Here we present an extraordinary new rapid transient that points to a distinct class of luminous, fast-cooling events. AT2022aedm, or ATLAS22bonw, was discovered by ATLAS on 30 Dec 2022 <cit.>. Spectroscopy carried out the following day by the Advanced Public ESO Spectroscopic Survey of Transient Objects <cit.> suggested a likely SN, though of indeterminate spectral type <cit.>. A spectroscopic host galaxy redshift z=0.14343 from the Sloan Digital Sky Survey <cit.> indicated a peak absolute magnitude M_o=-21.5 mag[We assume a flat ΛCDM cosmology with H_0=70 km s^-1 Mpc^-1 and Ω_Λ=0.7, giving a luminosity distance D_L=679 Mpc, and a Galactic extinction E(B-V)=0.0428 for this line-of-sight <cit.>], but the rising light curve was faster than any known SLSN. A relatively featureless spectrum, and most surprisingly an elliptical host galaxy, further added to the intrigue of this event, motivating extensive follow-up observations. § OBSERVATIONS §.§ Ground-based imaging AT2022aedm was discovered in the ATLAS transient stream processed by the ATLAS Transient Science Server <cit.>. Calibrated ATLAS data in the cyan (c) and orange (o) bands were obtained using the ATLAS forced photometry service <cit.>. ATLAS typically obtains four exposures per night in a given band; we combined each quad into a single average flux measurement to improve the signal-to-noise ratio. After AT2022aedm had faded below o∼19 mag, observations were further binned over neighbouring nights.
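As an illustration of this kind of co-addition, the short sketch below combines individual forced-photometry flux measurements into one point per bin using an inverse-variance-weighted mean. The weighting scheme and the bin definition are assumptions made for the example; the text only states that the quads (and, later, neighbouring nights) were averaged.

```python
import numpy as np

def bin_fluxes(mjd, flux, flux_err, bin_width=1.0):
    """Combine forced-photometry measurements into inverse-variance-weighted
    averages over bins of `bin_width` days (1.0 groups each nightly quad)."""
    bin_id = np.floor(mjd / bin_width).astype(int)
    out = []
    for b in np.unique(bin_id):
        sel = bin_id == b
        w = 1.0 / flux_err[sel] ** 2                  # inverse-variance weights
        f_mean = np.sum(w * flux[sel]) / np.sum(w)    # weighted mean flux
        f_err = 1.0 / np.sqrt(np.sum(w))              # error on the weighted mean
        out.append((np.average(mjd[sel], weights=w), f_mean, f_err))
    return np.array(out)
```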
Follow-up photometry was obtained from Pan-STARRS in the i,z,y bands, the Las Cumbres observatory global telescope network (LCO, as part of the Global Supernova Project) in the B,g,V,r,i bands, the Liverpool Telescope (LT) in the u,g,r,i,z bands. Data from the European Southern Observatory New Technology Telescope (NTT) were obtained using both EFOSC2 for the optical g,r,i,z bands and SOFI for the near-infrared (NIR) J,H,K bands, as part of ePESSTO+. Data were reduced (de-biased and flat-fielded) either automatically by facility pipelines, or manually using the PESSTO pipeline <cit.> in the case of the NTT data. §.§ Photometry sans frustration Photometry was performed using a custom pipeline, `Photometry Sans Frustration' (or psf)[<https://github.com/mnicholl/photometry-sans-frustration>]. This is a fully python-based code, employing aperture and Point-spread function (PSF) fitting photometry routines from astropy <cit.> and photutils <cit.>. As well as deriving the image background, PSF and zeropoint, the code automatically downloads local star catalogs and reference images using astroquery <cit.>, and provides options to solve the coordinate system using astrometry.net <cit.>, clean cosmic rays using lacosmic <cit.>, align and stack images using astroalign <cit.>, and subtract transient-free reference images of the field using pyzogy <cit.>. All NTT, LCO and LT images were cleaned and stacked within each night. Calibration stars and template images were obtained in g,r,i,z from Pan-STARRS <cit.>, in u from the SDSS, and in J,H,K from the VISTA Kilo-degree Galaxy Survey <cit.>. LCO B,V reference images were obtained after the transient faded. The full photometric data set is shown in Figure <ref>. §.§ Swift observations We obtained ultraviolet (UV) and X-ray data using target of opportunity observations with the UV-Optical Telescope (UVOT; ) and X-ray Telescope (XRT; ) on-board the Neil Gehrels Swift Observatory (Swift; ). UVOT imaging was carried out in the uvw2, uvm2 and uvw1 filters. The light curves were measured using a 5” aperture. Count rates were obtained using the Swift uvotsource tools and converted to magnitudes (in the AB system) using the UVOT photometric zero points <cit.>. No host subtraction was performed in the UV bands, as host contamination is negligible at these wavelengths (this is confirmed by the later UVOT visits that result in only non-detections). We processed the XRT data using the online analysis tools provided by the UK Swift Science Data Centre <cit.>. AT2022aedm is not detected in the combined 15.3 ks exposure, with a limiting count rate <8.53×10^-4 s^-1. Assuming a power-law spectrum with Γ=2 <cit.>, and a Galactic hydrogen column density towards AT2022aedm of 4.2×10^20 cm^-2 <cit.>, this corresponds to an unabsorbed 0.3-10 keV luminosity L_X<3.8×10^42 . This upper limit is deeper than the observed X-ray luminosities ∼10^43-44 in AT2018cow <cit.>, AT2020xnd <cit.> and AT2022tsd <cit.>. §.§ Radio observations We observed AT2022aedm with the Arcminute Microkelvin Imager - Large Array <cit.> over 4 epochs, beginning 20 days after discovery. AMI–LA is an eight-dish interferometer based in Cambridge, UK. Each dish is 12.8 m in diameter enabling an angular resolution of ∼30". The facility observes at a central frequency of 15.5 GHz with a bandwidth of 5 GHz. Data taken with AMI–LA were reduced using a custom pipeline reduce_dc. We used 3C286 and J1119+0410 as the primary and secondary calibrators, respectively, to perform amplitude and phase calibration. 
The pipeline also flags the data for radio-frequency interference, effects of poor weather and antenna shadowing. The data were then exported in uvfits format ready for imaging. Further flagging and imaging were conducted in the Common Astronomy Software Applications (casa) <cit.> using the tasks tfcrop, rflag and clean. AT2022aedm is not detected in any of the final images, with 3σ upper limits of F_ν<[41,210,156,96] μJy at 20, 66, 107 and 110 days after the first optical detection. These correspond to limits on the spectral luminosity of L_ν<2.3×10^28-1.2×10^29 Hz^-1. For comparison, Cow-like RETs typically exhibit radio emission at the level of L_ν≳10^29 Hz^-1 (at ∼10 GHz) on timescales of months <cit.>. §.§ Spectroscopy We obtained optical spectra of AT2022aedm using EFOSC2 on the NTT (through ePESSTO+), the LCO 2-m telescopes, the SuperNova Integral Field Spectrograph <cit.> on the University of Hawaii 2.2-m telescope, and Binospec on the 6.5-m MMT <cit.>. Spectroscopy from ePESSTO+ commenced on 31 Dec 2022 (within one day after the object was flagged by ATLAS) and continued until 20 Feb 2023, by which time the spectrum was indistinguishable from a pre-explosion host galaxy spectrum from SDSS. Standard reductions of these data, including de-biasing, flat-fielding, spectral extraction, flux and wavelength calibration, were performed using instrument-specific pipelines. The reduced spectra are plotted in Figure <ref> and labelled with the instrument and phase with respect to our estimated explosion date. We assume that host galaxy extinction is negligible, supported by the lack of Na ID absorption in these spectra <cit.>, and the early blue colours in our spectra and photometry. All data will be made publicly available via WISeREP <cit.>. § ANALYSIS §.§ Light curve The rising light curve of AT2022aedm is well constrained by the early ATLAS o-band detections. The discovery point on MJD 59941.1 at o=19.51 mag is a factor ≃5 below the peak o-band flux. Fitting a second-order polynomial to the early flux light curve (Figure <ref>) indicates the explosion occurred on MJD 59940.0±0.5 (≈1 day before the first detection), reaching o-band peak on MJD 59950.6±0.5. We take these as the dates of explosion and peak throughout. We note however that the last non-detection is 15 days before detection, so a slightly earlier explosion date cannot be entirely ruled out if the early light curve shape is more complex. The rest-frame rise-time from half the peak flux is t_r,1/2=6.6 days, much shorter than the SLSNe that reach comparable peak luminosities but with t_r,1/2=10-40 days <cit.>. The fading timescale of AT2022aedm is also much quicker than most other luminous transients. The g-band light curve fades by 2 magnitudes in the 15 days after peak, and by 18 days has declined to 10% of peak g-band flux. In o-band, where we also have the rise, the full-width at half-maximum (i.e. the total time spent within 50% of peak flux) is t_1/2=19±1 days. These timescales are well within the distributions measured for RETs discovered in DES <cit.>. Although the measured t_1/2 for AT2022aedm is longer than the t_1/2<12 days defining RETS in PS1 <cit.> and ZTF <cit.>, this is attributable to using different photometric filters: we measured t_1/2 in o, where the rate of fading in AT2022aedm is ≈55% slower than in the g-band. Correcting by this factor gives an estimated g-band t_1/2 of ≈12 days. 
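A minimal version of this polynomial fit is sketched below with made-up o-band fluxes (the real forced photometry is not reproduced here, so the numbers it prints are illustrative only): a quadratic is fit to the rising flux, its vertex gives the epoch of peak, and the zero-flux crossing before the rise gives an explosion-epoch estimate.

```python
import numpy as np

# Hypothetical rising o-band light curve: days since MJD 59940 and flux (microJy)
t = np.array([1.1, 3.1, 5.0, 7.1, 9.0, 10.6])
flux = np.array([55.0, 112.0, 178.0, 232.0, 268.0, 281.0])

a, b, c = np.polyfit(t, flux, 2)          # quadratic fit: flux(t) = a t^2 + b t + c

t_peak = -b / (2.0 * a)                   # vertex of the parabola -> peak epoch
roots = np.roots([a, b, c]).real          # flux = 0 crossings of the fit
t_exp = roots[roots < t_peak].max()       # crossing just before the rise -> explosion epoch

print(f"explosion ~ MJD {59940.0 + t_exp:.1f}, o-band peak ~ MJD {59940.0 + t_peak:.1f}")
```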
The extinction-corrected g-band peak luminosity of AT2022aedm, M_g=-22.04±0.05, makes it one of the brightest RETs discovered to date. It outshines all but one event (DES16E1bir) in the combined PS1+DES+ZTF sample. The closest spectroscopically-classified RETs in terms of luminosity are the Cow-like RETs, typically reaching ≈ -21 mag <cit.>. To highlight the exceptional luminosity of AT2022aedm, we show a combined g- and c-band rest-frame light curve in Figure <ref>, compared to representative examples of different types of fast-fading transients. AT2022aedm is broader and brighter than AT2018cow, but fades faster than the fastest SLSN, SN2018bgv <cit.>. It is much more luminous than a typical RET <cit.>, the fastest TDE, AT2020neh <cit.>, the fastest broad-lined SNe Ic, such as iPTF16asu <cit.> (see also SN2018gep and SN2018fcg; ; ), or any fast interacting transients of Types IIn <cit.>, Ibn <cit.> or Icn <cit.>. Figure <ref> also shows the g-r (or B-V) colour evolution of AT2022aedm compared to the same sample of objects (where multiple bands are available). From an initial g-r=-0.37 at 10 days after explosion, AT2022aedm dramatically reddens by 1.8 magnitudes in colour index over the next 35 rest-frame days. Cow-like RETs, TDEs, and interacting transients generally show a more gradual or flat colour evolution. The colour change in AT2022aedm is more consistent with events with expanding, cooling photospheres; it lies intermediate between iPTF16asu and SN2018bgv. PS1-10bjp shows a similar colour evolution over the first 10 days. Two unclassified fast transients show a comparable colour evolution, in combination with a peak absolute magnitude brighter than -20 mag. One is AT2020bot, the only RET in the ZTF sample that did not fit into any of the stripped-envelope, interacting or Cow-like sub-populations <cit.>. It is fainter than AT2022aedm, with a faster rise and redder average colour. A stronger similarity is exhibited by `Dougie' <cit.>, a mysterious transient discovered by ROTSE in 2009. This event peaked at -22.5 mag after a fast rise of ≈10 days. The early light curve shape is very similar to AT2022aedm, though the decline may flatten after 30-40 days. However, this flattening could also be due to a host contribution in its UVOT photometry. While the light curve could be plausibly interpreted as a super-Eddington TDE <cit.>, its position offset from the host nucleus, and lack of distinct spectroscopic features, make this classification far from certain. §.§ Bolometric light curve To measure the overall energetics of AT2022aedm, we integrate our multi-band photometry using superbol <cit.>. We construct a pseudo-bolometric light curve using the excellent g,r,i,z and o-band coverage, and estimate the full bolometric light curve by fitting blackbody functions to these bands and to our UV and NIR photometry where available. These light curves are shown in Figure <ref>. The peak pseudo-bolometric luminosity is typical of SLSNe, reaching L_griz=10^43.8 , but the light curve rise and decline rates fall well outside the SLSN distribution. The decay rate is comparable to AT2018cow, though the rise is longer than the <2 days exhibited by that event. The estimated full bolometric luminosity of AT2022aedm at peak is exceptionally high, reaching ≈10^45 . This is due to a high temperature T≳30,000 K (shown in the top right panel), suggested by the very blue g-r and r-i colours in the first LCO images. The temperature exhibits a monotonic decline with a scaling of roughly T∝ 1/t. 
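The temperatures and radii quoted here come from blackbody fits to the multi-band photometry (via superbol); a stripped-down illustration of such a fit is given below. The flux values, wavelengths and initial guesses are invented for the example and are not the measured photometry of AT2022aedm; only the fitting approach is meant to reflect the analysis described.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, kB, sigma_sb = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5   # cgs constants
d_cm = 679.0 * 3.086e24                                          # 679 Mpc in cm

def bb_flux(lam_cm, T, R15):
    """Observed blackbody flux density F_lambda (erg/s/cm^2/cm) for a photosphere
    of temperature T (K) and radius R15 (in units of 1e15 cm) at distance d_cm."""
    B = (2.0 * h * c ** 2 / lam_cm ** 5) / np.expm1(h * c / (lam_cm * kB * T))
    return np.pi * B * (R15 * 1e15 / d_cm) ** 2

# Hypothetical dereddened g, r, o, i, z flux densities at a single epoch
lam = np.array([4810.0, 6170.0, 6790.0, 7520.0, 8660.0]) * 1e-8          # cm
f_lam = np.array([5.0e-16, 2.6e-16, 2.0e-16, 1.4e-16, 9.0e-17]) * 1e8     # erg/s/cm^2/cm

(T_fit, R15_fit), _ = curve_fit(bb_flux, lam, f_lam, p0=[2.0e4, 1.0])
L_bol = 4.0 * np.pi * (R15_fit * 1e15) ** 2 * sigma_sb * T_fit ** 4       # Stefan-Boltzmann luminosity
print(T_fit, R15_fit, L_bol)
```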
The blackbody radius increases throughout our observations, suggestive of an expanding photosphere that remains optically thick. The estimated bolometric light curves of Dougie and AT2020bot, also constructed using superbol, peak at similar luminosities to AT2022aedm. §.§ Spectra The spectroscopic evolution of AT2022aedm is shown in Figure <ref>. The cooling observed in the photometry is also evident in its spectra. The first NTT spectrum 3.5 days after explosion shows a strong blue continuum, which weakens and disappears by day 27. By day 48, the spectrum is indistinguishable from an archival SDSS spectrum of the host galaxy. The spectra mostly lack obvious broad emission, absorption or P Cygni lines typically seen in SNe. The spectra on days 3 and 14 (both from NTT) and 20 (from MMT) with the best signal-to-noise ratios are examined in more detail in Figure <ref>. Weak broad features may exist at around 4000-5000 Å after day 20, though these could also be caused by contamination from the host galaxy, which is around 2 magnitudes brighter than AT2022aedm at this phase. We show this explicitly by adding an arbitrary 20,000 K blackbody to the host, finding that this reasonably reproduces the overall shape of the day 20 spectrum. Figure <ref> does show several narrow spectral lines of a clearly transient nature. The day 3 spectrum shows a sharply peaked emission line consistent with He II λ4686, possibly with a broader base. This line is often observed in very young SNe <cit.> and in TDEs <cit.>, due to the high radiation temperatures capable of ionising helium. We also detect a weak narrow Hα emission line. Other Balmer lines are not visible at this signal-to-noise. He II is not detected in any later spectra, likely because the temperature has fallen too low to maintain helium ionisation. The spectra on days 14 and 20 show narrow absorption, rather than emission, from hydrogen and neutral helium. The first three transitions of the Balmer series are clearly visible, but blue-shifted from their rest wavelengths by ≈2700 . The high-resolution MMT/Binospec data on day 20 also clearly show He I λ5875 absorption, blue-shifted by ≈2500 . These lines are not visible in the host spectrum, confirming their association with the transient. Figure <ref> also includes a comparison between the early spectra of AT2022aedm and other fast transients. The initial He II and Hα emission is reminiscent of some SNe Ibn <cit.>, as well as the fast-evolving TDE AT2020neh <cit.>. The earliest spectrum is also a reasonable match for young SNe IIn, including the shock-breakout candidate PT09uj <cit.>. However, AT2022aedm never develops the strong emission lines typically seen in these classes at later times. The largely featureless spectrum even at 15 days is a poor match for any fast evolving SN Ic, SLSN, or SN Icn. Cow-like RETs exhibit quite featureless spectra at peak, but the prototype AT2018cow showed increasingly clear H and He emission lines as it evolved, opposite to the case in AT2022aedm. Interestingly, the spectra of Dougie remained featureless for ≳30 days <cit.>, and seemed to cool in a manner similar to AT2022aedm. AT2020bot lacks a full spectroscopic time series, making a detailed comparison difficult. It shows an unusual spectrum at maximum light. <cit.> note the presence of possible broad features, weak compared to typical SN lines and without an obvious identification. In section <ref> we observed that these two events also shared some key photometric properties with AT2022aedm. 
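The quoted blue-shifts correspond to a simple Doppler interpretation of the measured line centroids; a short example of the conversion is given below, using assumed, illustrative centroid wavelengths rather than the measured ones.

```python
# Doppler velocity implied by an absorption-line centroid, after removing the
# host redshift (z = 0.14343); negative values indicate a blue-shift.
C_KMS = 2.998e5
H_ALPHA, HE_I = 6562.8, 5875.6        # rest wavelengths in Angstrom

def line_velocity(lam_centroid, lam_rest):
    return C_KMS * (lam_centroid - lam_rest) / lam_rest

# Hypothetical centroids ~59 A and ~49 A bluewards of rest give ~ -2700 and ~ -2500 km/s
print(line_velocity(6503.8, H_ALPHA))
print(line_velocity(5826.6, HE_I))
```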
§.§ Host galaxy The host galaxy of AT2022aedm is LEDA 1245338 (or SDSS J111927.73+030632.7). This is a bright, red galaxy with M_r=-22.8 mag; Figure <ref> illustrates this with a colour image obtained from Pan-STARRS. An SDSS spectrum is also available (shown in Figure <ref>). Both the SDSS spectral fitting and Galaxy Zoo morphological analysis classify LEDA 1245338 as an elliptical galaxy. The SDSS data release also includes automated analysis of the SDSS spectrum with the Portsmouth pipeline (using the method of ). The spectral fitting measures a total stellar mass of ≈10^11.5 , and a star-formation rate (SFR) consistent with zero. They find a mean age of the stellar population of 4.8 Gyr. Given that SED modelling is highly sensitive to the assumed functional form of the star-formation history <cit.>, we run our own analysis on the host photometry, over a wider wavelength range, using prospector <cit.>. In particular, we use the prospector-α model employing a non-parametric star-formation history, with 6 equal-mass star-forming bins of flexible width <cit.>. We include archival host photometry from SDSS, the 2 Micron All-Sky Survey <cit.>, and the Wide-field Infrared Survey Explorer <cit.>. The fit is shown in Figure <ref>. We measure M_*=10^11.45 , consistent with SDSS results, and a specific SFR in the last 50 Myr of log ( sSFR/yr^-1) = -11.69. The SDSS analyses and our prospector results confirm that the host of AT2022aedm is a massive, `red and dead' galaxy. This is surprising: recent work by <cit.> shows that less than 1% of core-collapse explosions occur in such environments. Moreover, this galaxy is especially unlike the hosts of most transients with comparable luminosity. SLSNe occur almost exclusively in low-mass galaxies with < 10^10 <cit.>, and their average specific star-formation rate is three orders of magnitude greater than our measurement for AT2022aedm. Bright RETs are also found in relatively low-mass, star-forming host galaxies: the sSFRs of the RET hosts in PS1 and DES span -10≲log( sSFR/yr^-1)≲-8, with only three (out of 73) fitting best to a passive galaxy model, and only two having a stellar mass greater than 10^11 . <cit.> conducted a systematic analysis of RET hosts in DES, finding evidence for star-formation in all of the 49 galaxies for which redshifts were available, and the five RETs from HSC were also found in star-forming galaxies <cit.>. Notably, two other fast transients from our photometric and spectroscopic comparisons also occurred in elliptical galaxies: Dougie and AT2020bot <cit.>. The Pan-STARRS images of their hosts are shown alongside AT2022aedm in Figure <ref>. We also note that one SN Ibn, PS1-12sk, exploded in a bright elliptical, prompting <cit.> to suggest that not all SNe Ibn result from massive stars. § DISCUSSION §.§ A new class of transient? AT2022aedm is a puzzling event with a very unusual set of properties: * a high peak luminosity in the optical, with M_g≈-22 mag * no luminous radio or X-ray emission * fast rise and decline rates, fading ∼ 1 mag per week in the g-band * rapid cooling from ∼30,000 K to ∼4,000 K in a few weeks following peak * a spectrum dominated by a smooth continuum with no high equivalent width absorption or emission lines at any phase * a massive host galaxy comprised of an old stellar population, with no evidence for current star formation. This combination is not consistent with any known class of transients. 
An extensive search of the literature reveals two other objects which share the key properties: M<-20 mag, rise time <10 days, fast decline and colour evolution, and weak spectroscopic features at all times. Together, these events indicate a new class of fast transients with high optical luminosities and fast cooling after peak. Dougie <cit.> in particular shows a strong photometric and spectroscopic similarity, while AT2020bot <cit.> may represent an even faster-evolving member of this class. All three occurred off-centre within passive host galaxies, indicating that – uniquely among RETs – these new `Luminous Fast Coolers' (LFCs) do not require young stellar populations. §.§ Rates We estimate the volumetric rate of these events using the ATLAS Survey Simulator <cit.>. Interpolated absolute light curves of AT2022aedm in c and o bands are inserted into the simulation at 10,000 random times, positions and redshifts (up to some z_max), and compared to the cadence, footprint and depth of the true ATLAS survey to determine the number of 5σ detections of each injected event. We define a discovery as those objects detected in more than n_det images (or ≈ n_det/4 nights, since ATLAS typically obtains a quad of exposures per night at a given pointing). We then calculate the fraction classified of all real transients brighter in apparent magnitude than the injected transient at z=z_max and having at least the same number of ATLAS detections, and take this as our spectroscopic completeness. The rate is then R=N_events/(f_disc f_spec V T), where T=2.5 yr is the duration and V the volume (within z=z_max) of the mock survey, f_disc and f_spec are the fractions discovered and classified, and N_events=1 is the number of observed AT2022aedm-like transients in ATLAS. We set z_max=0.2 (V=1.9 Gpc^3), covering the redshift range within which LFCs have been discovered, and set n_det=20. The latter is motivated by the modal number of detections for real ATLAS transients, and with observations on 5 nights should also enable identification of the fast light curve shape. Varying n_det leads to less than a factor two variation in our derived rate, due to a trade-off between f_disc and f_spec: stricter requirements lead to a smaller discovered sample but with a higher spectroscopic completeness. For these parameters, our survey simulation returns f_disc=0.35 and f_spec=0.58, giving R≈1 Gpc^-3 yr^-1. We caution that this estimate applies to the brightest LFCs such as AT2022aedm and Dougie, and that fainter and faster events such as AT2020bot are likely more common volumetrically but harder to detect. Nevertheless, our derived rate indicates that these events are very rare, ∼10^-5 of the core-collapse SN rate. This rate is lower than the SLSN rate of a few ×10 Gpc^-3 yr^-1 <cit.>, but may be consistent with the rate of Cow-like events, estimated as 0.3-420 Gpc^-3 yr^-1 <cit.>. §.§ Physical scenarios for LFCs §.§.§ Tidal disruption events <cit.> favoured a TDE as the origin of Dougie. A model with a relatively low-mass BH of ∼10^5 provided a good match to the fast-evolving light curve. Although the host galaxy luminosity was more consistent with a central BH mass of ∼10^7, Dougie's location ∼ 4 kpc from the nucleus could indicate a disruption around a wandering intermediate-mass BH. This scenario has difficulties accounting for AT2022aedm. We are unable to find an acceptable fit using the TDE model in mosfit[This is an updated version of the same model used to fit Dougie; see <cit.>.]
<cit.>, where models cannot reproduce the steep decline from peak or the fast colour evolution. The shallower decay of Dougie at late times may include some host contribution, causing a flattening that mimics the power-law decay of TDE models. Other circumstantial problems arise in trying to explain LFCs as TDEs. AT2022aedm and AT2020bot have even larger offsets from the nuclei of their hosts. In particular, AT2020bot shows an offset of ≳10 kpc, in the outskirts of the galaxy where the stellar density is low. This would make a TDE very unlikely (though at available imaging depths we cannot rule out a globular cluster). TDE models would also need to explain why these offset events show such a strong evolution in colour compared to TDEs in their host nuclei, and are so much brighter than other TDEs with fast evolution <cit.>. §.§.§ Nickel powering and white dwarf explosions Most SNe are heated by the decay of radioactive nickel (^56Ni) to cobalt and then iron, but this mechanism can be excluded for many of the known RETs <cit.>. The problem is that fast light curves require low ejecta masses (M_ ej), while bright peak luminosities require large nickel masses (M_ Ni). <cit.> showed that to produce a peak luminosity of ∼10^44 erg and a rise time <10 days, a nickel-powered model would need an ejecta velocity >0.1c. We can probably rule out a very relativistic explosion in the case of AT2022aedm, due to our radio non-detections, though cannot exclude a mildly relativistic expansion. More problematic is the requirement for M_ ej∼ M_ Ni∼1. This would produce a spectrum dominated by iron-group absorption, very different to the blue and largely featureless spectra of LFCs at peak. Despite the difficulties with a pure nickel-powered model, white dwarf progenitor models (i.e. variants of SNe Ia) would still be appealing to explain transients in old stellar populations. A SN Ia interacting with a dense CSM could produce a peak luminosity well in excess of typical SNe Ia without requiring M_ Ni∼ M_ ej. However, our NTT observations at 70–80 days after explosion indicate that AT2022aedm had already faded below the luminosity of a SN Ia at the same epoch, making a hidden SN Ia unlikely. Moreover, known interacting SNe Ia produce spectra with strong, broad hydrogen emission and broad metal P Cygni lines <cit.>, very unlike the spectra of LFCs. §.§.§ Magnetar birth Rapidly rotating nascent magnetars are suspected to play a role in many luminous and/or rapid transients, such as SLSNe, gamma-ray bursts and fast radio bursts. Central engines are also thought to be required in the Cow-like RETs. Powering short-timescale, luminous events like LFCs would require a combination of rapid rotation to provide a large energy reservoir, a strong magnetic field to extract this energy quickly, and a low ejecta mass <cit.>. We find that although the mosfit magnetar model <cit.> is able to adequately fit the decline phase of AT2022aedm, it struggles to simultaneously match the fast rise, even with an ejecta mass ≲ 1 . The environments of known LFCs are also a major problem for this model. SLSNe and long GRBs are thought to prefer low-metallicity, star-forming galaxies because these are favourable for rapidly-rotating core collapse. The massive, elliptical hosts of LFCs are decidedly unfavourable. Less than 1% of core-collapse SNe occur in elliptical galaxies <cit.>, and only ∼10% of core-collapses form magnetars <cit.>. 
It is therefore unlikely that three unusual magnetar-forming explosions would all be found in elliptical galaxies. Moreover, the blue-shifted narrow H and He lines in the later spectra of AT2022aedm may indicate a dense wind pre-explosion, which would strip angular momentum and inhibit magnetar formation. §.§.§ Shock breakout Interaction with CSM is another mechanism thought to be responsible for many luminous or unusual transients, including (SL)SNe IIn, and fast events like SNe Ibn/Icn. Models for luminous SNe IIn generally invoke a massive CSM that releases energy slowly via post-shock diffusion <cit.>. In the case of rapidly evolving transients, the more relevant models are shock breakout from an extended CSM or wind, which have been investigated by <cit.> and <cit.>. This model provided a reasonable explanation for the rapid SN IIn, PTF09uj <cit.>, and other RETs <cit.>. The early He II emission in AT2022aedm is often seen during the shock breakout phase in normal Type II SNe <cit.>. We apply the equations of <cit.>, following the prescriptions from <cit.>, to estimate the ejecta and wind masses required in AT2022aedm. We set the input parameters based on our superbol results: peak time t_ peak=8 days, total radiated energy E_ rad=1.1×10^51 erg, and breakout radius R_ bo≈2×10^15 cm. The equations are degenerate in M_ ej/E^2, where E is the kinetic energy of the explosion. We find M_ ej = 0.02(E/10^51 erg)^2 , with a wind density parameter[Defined as D_*≡ρ r^2/(5×10^16 g cm^-1), where ρ is the wind density at radius r.] D_*=0.27. For a wind velocity of 2700 , based on the blue-shifted absorption lines in the spectrum, this corresponds to a pre-explosion mass-loss rate Ṁ=0.7 yr^-1. <cit.> noted that Dougie could also be explained by a reasonable shock-powered model, with an estimated ≈ 8×10^50 erg deposited in ≈2.6 of CSM, but disfavoured this model based on the lack of shock-excited lines in the spectrum. AT2022aedm shows that emission lines can be very weak and short-lived in these events, perhaps making a CSM interpretation of Dougie more palatable. Nevertheless, the parameters we infer for AT2022aedm (if shock breakout is the dominant power source) are difficult to associate with any specific progenitor. For a standard 10^51 erg explosion, the low ejecta mass would be indicative of an ultra-stripped SN, yet the wind is H- and He-rich, and the unsustainable high mass-loss rate would require that it is lost in the years immediately before explosion. For a very energetic explosion with 10^52 erg, the implied ejecta mass is a more reasonable ∼2 . But this large energy would likely require a massive progenitor, increasing the tension with the passive host galaxies of the LFCs. §.§ Stellar mass compact mergers The old stellar populations hosting AT2022aedm and other LFCs are more compatible with a compact object origin, rather than massive stars. However, we encountered inconsistencies interpreting these objects as white dwarf explosions or TDEs from massive BHs. Fortunately, recent years have seen rapid progress in understanding the diversity of transients resulting from mergers involving neutron stars (NS) and stellar mass BHs, and we compare to such models here. Kilonovae, transients powered by the decay of heavy elements ejected from NS mergers, have been discovered in targeted follow-up of gravitational waves <cit.> and gamma-ray bursts <cit.>. 
However, LFCs are much brighter and bluer than any plausible kilonovae, in which low ejecta masses <0.1 and large opacities conspire to produce faint, red transients visible for only a few days in the optical. While no definitive detections exist, kilonovae from NS-BH mergers are expected to be even fainter and redder than those from binary NSs <cit.>. AT2018kzr was a so-far unique event suggested to be the merger of a NS with a white dwarf <cit.>. It was somewhat brighter and longer lived than known kilonovae, peaking at M≈-18 mag. However, it still faded much faster than LFCs (Figure <ref>) and showed broad metal absorption lines resembling SNe Ic, inconsistent with our events (Figure <ref>). <cit.> proposed that Cow-like RETs can arise from accretion-induced collapse following the merger of a carbon-oxygen white dwarf with an oxygen-neon-magnesium white dwarf. This model produces a magnetar in combination with a low ejecta mass. If one of the white dwarfs retained a surface hydrogen layer (type DA), a residual pre-collapse wind can help to explain the observed lines in the spectrum of AT2022aedm. This model may be a plausible progenitor channel for LFCs. However, it also predicts Cow-like non-thermal emission, which is ruled out in the case of AT2022aedm. This channel also has a short delay time, such that the host galaxies are likely to still be star-forming, in tension with the elliptical hosts of our objects. Finally, mergers involving a main sequence or evolved star with a stellar mass BH have also been suggested as progenitors of Cow-like events. Given the older host environments in LFCs, mergers with Wolf-Rayet stars <cit.> are probably disfavoured in this case. <cit.> presented models for tidal disruptions of main sequence stars by stellar mass BHs in dense clusters. Some of their wind-reprocessed models exhibit optical rise times and post-peak temperatures similar to LFCs. However, while bolometric luminosities can reach ∼10^44 , most of this energy is emitted in the UV and X-ray regime, and none of the models reach the peak optical magnitudes of our objects. Noting the early similarity of AT2022aedm to some TDE spectra, we encourage a broader exploration of the parameter space for stellar-mass TDEs. Another important test of TDE models (stellar or intermediate mass) will be identifying globular clusters at the positions of LFCs. The James Webb Space Telescope (JWST) can reach the peak of the globular cluster luminosity function (≈ -7.5 mag; ) in 10 ks with NIRCam for an event within 200 Mpc. At the distance of AT2022aedm, it is possible to achieve the same constraint in ≈100 ks. We note however that the angular extent of a typical globular cluster at this distance is comparable to the NIRCam pixel scale, making identification challenging. § CONCLUSIONS We have presented a detailed analysis of AT2022aedm, a very unusual, rapidly evolving transient in a massive elliptical galaxy. It has a rise time <10 days, a luminous peak M_g=-22 mag, and a fast decline of 2 mag in the subsequent 15 days. The optical colours evolve quickly to the red during the decline phase, while the spectrum remains devoid of identifiable broad features throughout. We do however detect narrow emission lines of Hα and He II during the first few days, reminiscent of shock cooling in young SNe, and narrow blue-shifted absorption lines at later times. 
We identify two previous transients, both with uncertain classifications, that share the key properties of bright peaks, fast declines, strong colour evolution and spectra without high-equivalent width emission or absorption lines. This suggests that AT2022aedm represents a well-observed example of a previously unrecognised class of luminous, fast-cooling events that we term LFCs. All three events occurred in passive galaxies, with offsets of ∼4-10 kpc. Their unique combination of properties poses challenges for any physical scenario. The passive environments disfavour a massive star origin, while the light curves and spectra are inconsistent with thermonuclear SNe interacting with a dense medium. Mergers of compact object binaries are unable to reproduce the peak luminosity. Tidal disruption of a star by an intermediate mass BH, as suggested for Dougie <cit.>, may struggle to produce the fast colour evolution. Instead we find that these events may be broadly consistent with an extreme shock breakout, though from an as-yet unknown progenitor. Alternatively, models of TDEs from stellar-mass BHs show promise, though current models emit too much of their luminosity in the UV and X-rays. Refinements of stellar-mass TDE models, and searches for dense local environments at the sites of LFCs, should help to confirm or rule out this scenario. We thank Anna Ho for sharing the data on AT2020bot. We also thank Joe Bright, Paul Scott and David Titterington for help with the AMI-LA observations. MN, AA and XS are supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381) and by UK Space Agency Grant No. ST/Y000692/1. PR is supported by STFC Grant 2742655. SJS acknowledges funding from STFC Grant ST/X006506/1 and ST/T000198/1. TP acknowledges the support by ANID through the Beca Doctorado Nacional 202221222222. JV is supported by NKFIH-OTKA grant K-142534 from the National Research, Development and Innovation Office, Hungary. T-W.C thanks the Max Planck Institute for Astrophysics for hosting her as a guest researcher. MP is supported by a research grant (19054) from VILLUM FONDEN. PC acknowledges support via an Academy of Finland grant (340613; P.I. R. Kotak). PW acknowledges support from the Science and Technology Facilities Council (STFC) grant ST/R000506/1. MF is supported by a Royal Society - Science Foundation Ireland University Research Fellowship. MK is partially supported by the program Unidad de Excelencia María de Maeztu CEX2020-001058-M. LG, CPG, MK and TEMB acknowledge support from Unidad de Excelencia María de Maeztu CEX2020-001058-M, from Centro Superior de Investigaciones Científicas (CSIC) under the PIE project 20215AT016, and from the Spanish Ministerio de Ciencia e Innovación (MCIN) and the Agencia Estatal de Investigación (AEI) 10.13039/501100011033 under the PID2020-115253GA-I00 HOSTFLOWS project. L.G. also acknowledges support from the European Social Fund (ESF) “Investing in your future” under the 2019 Ramón y Cajal program RYC2019-027683-I. CPG also acknowledges financial support from the Secretary of Universities and Research (Government of Catalonia) and by the Horizon 2020 Research and Innovation Programme of the European Union under the Marie Sklodowska-Curie and the Beatriu de Pinós 2021 BP 00168 programme. TEMB also acknowledges financial support from the 2021 Juan de la Cierva program FJC2021-047124-I. 
GPS acknowledges support from The Royal Society, the Leverhulme Trust, and the Science and Technology Facilities Council (grant numbers ST/N021702/1 and ST/S006141/1). FEB acknowledges support from ANID-Chile BASAL CATA ACE210002 and FB210003, FONDECYT Regular 1200495, and Millennium Science Initiative Program – ICN12_009. ATLAS is primarily funded through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575. The ATLAS science products are provided by the University of Hawaii, QUB, STScI, SAAO and Millennium Institute of Astrophysics in Chile. The Pan-STARRS telescopes are supported by NASA Grants NNX12AR65G and NNX14AM74G. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of ePESSTO+ (the advanced Public ESO Spectroscopic Survey for Transient Objects Survey). ePESSTO+ observations were obtained under ESO programme ID 108.220C (PI: Inserra). This work makes use of data from the Las Cumbres Observatory global network of telescopes. The LCO group is supported by NSF grants AST-1911151 and AST-1911225. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. We thank the staff of the Mullard Radio Astronomy Observatory for their assistance in the maintenance and operation of AMI, which is supported by the Universities of Cambridge and Oxford. We also acknowledge support from the European Research Council under grant ERC-2012-StG-307215 LODESTONE.
Facilities: NTT, PS1, Liverpool:2m, LCOGT, MMT, Swift, AMI
Software: Astropy <cit.>, Matplotlib <cit.>, Numpy <cit.>, SciPy <cit.>, Astroquery <cit.>, Astrometry.net <cit.>, Astroalign <cit.>, Lacosmic <cit.>, Pyzogy <cit.>, Photutils <cit.>, Superbol <cit.>, Mosfit <cit.>, Aladin <cit.>
http://arxiv.org/abs/2307.01108v1
20230703153525
Perfect-Prismatic F-Crystals and p-adic Shtukas in Families
[ "Anton Güthge" ]
math.AG
[ "math.AG" ]
Perfect-Prismatic F-Crystals and p-adic Shtukas in Families
Anton Güthge
==================================================================================
We show an equivalence between the two categories in the title, thus establishing a link between Frobenius-linear objects of formal (schematic) and analytic (adic) nature. We will do this for arbitrary p-complete rings, arbitrary affine flat group schemes and without making use of the Frobenius structure. § INTRODUCTION Let's start with a brief reflection on the objects involved. p-adic shtukas were first defined in SW20 over perfectoid spaces: Let (B,B^+) be a perfectoid Huber pair in characteristic p. A _n-shtuka over (B,B^+) with leg in some untilt (B^♯, B^+♯) consists of: * A vector bundle on (B,B^+)=(B,B^+)×̇(_p). * An isomorphism Φϕ^*|_(B,B^+)-V(ξ)→|_(B,B^+)-V(ξ), where ϕ→ is the Frobenius isomorphism induced from the Witt vector Frobenius W(B^+)→ W(B^+) and ξ∈(B^+♯)=W(B^+) is a distinguished element cutting out (B^♯,B^+♯) from (B,B^+). Furthermore, we ask Φ to be meromorphic around ξ. If (B^♯,B^+♯) also lives in characteristic p (i.e., it is equal to (B,B^+)) and one base changes to (B,B^+), one arrives at a vector bundle on the Fargues-Fontaine curve. Going in another direction, the notion of shtukas as defined above gets globalized in PR from perfectoid spaces to arbitrary (not necessarily analytic) adic spaces, arriving at so-called shtukas in families, essentially by v-descent from a cover by a perfectoid space. On the other side of the equivalence, perfect-prismatic[A hyphen is used as these are to be understood as F-crystals on the perfect prismatic side – not F-crystals on the prismatic side which happen to be perfect in some sense.] F-crystals are a variant of the prismatic F-crystals from e.g. BS21b: Instead of the (absolute) prismatic site over some p-complete ring A^+, one uses the (absolute) perfect prismatic site over A^+, which admits an equivalent definition as the site of all integral perfectoid A^+-algebras. If A^+ is integral perfectoid itself, the category of perfect-prismatic F-crystals is, just like the category of usual prismatic F-crystals, equivalent to the category of Breuil-Kisin-Fargue modules, an integral mixed-characteristic version of F-isocrystals: Let A^+ be an integral perfectoid ring. A BKF module on A^+ is given by: * A finite projective (A^+)=W(A^+♭)-module . * An isomorphism Φϕ^*[1/d]→[1/d], where ϕ(A^+)→(A^+) is the Witt vector Frobenius and d∈(A^+) is a distinguished element such that (A^+)/d=A^+. So in particular, prismatic F-crystals and perfect-prismatic F-crystals agree on integral perfectoids. Where they differ is in the fact that the category of perfect-prismatic F-crystals on any p-complete ring is already completely determined by what happens on integral perfectoid rings via p-complete arc-descent. Thus, the prismatic terminology introduced in BS21 is not strictly necessary to define perfect-prismatic F-crystals: They can be completely described using the slightly more classical notions of integral perfectoid rings, BKF-modules and the p-complete arc topology. The strong similarity between the two definitions above should be apparent even to someone reading them for the first time.
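To keep a concrete instance of both definitions in mind, here is the simplest special case of the BKF side (a standard example, recorded only as an illustration and not needed later): take A^+ = k a perfect field of characteristic p, viewed as an integral perfectoid ring. Its associated perfect prism is
\[
  \big(W(k),\,(p)\big), \qquad d = p,
\]
so a BKF module on k is nothing but a finite projective W(k)-module M together with an isomorphism
\[
  \Phi \colon \phi^{*}M[1/p] \;\xrightarrow{\ \sim\ }\; M[1/p],
\]
i.e. a W(k)-lattice in an F-isocrystal; already the rank-one examples (M,\Phi) = (W(k),\, p^{n}\cdot\phi) for n \in \mathbb{Z} exhibit the familiar slope phenomenon. On the shtuka side, the analogous toy case is the one mentioned above in which the untilt lives in characteristic p and one arrives at vector bundles on the Fargues-Fontaine curve.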
Indeed, a number of equivalences between various Frobenius-linear objects of adic and schematic nature have been proven over the years, for example: * A bijection on isomorphism classes between isocrystals and vector bundles on the Fargues-Fontaine curve in FF18, the latter of which was later shown to have a (simpler) adic description. * Building on that, an equivalence between shtukas on (C,C^∘) and BKF modules on C^∘ in SW20 for C an algebraically closed non-archimedean field. On the other hand, shtukas and BKF modules are objects living over fairly different kinds of spaces, as perfectoid spaces are analytic and integral perfectoids are formal in nature. This is also visible in the two equivalences above as these pass from objects on a (formal) scheme to objects on some version of the analytic locus of the associated adic space. This shrinking of the space gives rise to numerous nuisances: * Restrictions on the input space, e.g. that C be an algebraically closed field or that C^+=C^∘, * Making it necessary to use the Frobenius structure in an essential way, i.e., one doesn't get an equivalence of the categories of vector bundles without such Frobenius structure, * The equivalences are usually not exact; if they were, the main theorem of Ans18 would have been two lines instead of many pages. Perhaps more crucially, it would hold for any affine flat group scheme and not just reductive groups. A way to get around this problem is to carry a formal scheme to the analytic world not by shrinking it, but by considering its associated v-sheaf on the category of perfectoid spaces in characteristic p. * In Ans22, this technique is used to establish an equivalence between isocrystals on some algebraically closed field k in characteristic p and vector bundles on a “family of Fargues-Fontaine curves" over (k). * In PR, Pappas-Rapoport aim to generalize this both to arbitrary schemes in characteristic p [Theorem 2.3.5]PR and fields in mixed characteristic [Proposition 2.3.8]PR. * In the brand new GI23, Gleason and Ivanov obtain a classicality result for rational bundles (i.e., excluding p=0) and, for Shtukas with a leg in p=0, also for integral ones. We want to generalize 4. while eliminating the three nuisances mentioned above: Our equivalence will work for arbitrary integral perfectoids and even (trivially by descent) for arbitrary p-complete rings, it produces an exact tensor equivalence, so in particular, we get the relevant statement for 𝒢-bundles for any affine flat group scheme for free, and without making use of the Frobenius structure at all: (<Ref>) Let R^+ be a complete topological ring carrying the Π-adic topology for some Π∈ R^+ dividing p and let /_p be any affine flat group scheme. There is a canonical equivalence of groupoids --bundles on R^+ ≅integral --bundles on (R^+). The -bundles can be understood as “perfect-prismatic F-crystals without Frobenius structure", while integral -bundles are “shtukas in families without Frobenius structure". We will from this point talk only about the objects including Frobenius structure (e.g. Shtukas in families instead of integral -bundles) for the rest of this introduction as their terminology should be more familiar to most readers, but all statements actually work in the above generality. With Frobenius structure, the theorem reads as (<Ref>) With R^+ and as above there is a canonical equivalence of groupoids perfect-prismatic -crystals on R^+ ≅-Shtukas with fixed leg on (R^+).
Note the curious phenomenon that while the left side does not depend on the topology on R^+ (i.e., the choice of Π), one would think that the right side does. I do not have an explanation for this, as I only really understand this equivalence for Π a non-zero divisor. Let's now turn to our proof strategy: While both Ans22 and [Theorem 2.3.5]PR deal only with the (discrete) characteristic p case, our proof will be much closer in spirit to [Proposition 2.3.8]PR, reducing everything to rings that are Π-complete for a non-zero divisor Π. More precisely, we will show the following: (<Ref>) Let S^+ be any p-complete ring. Then there exists a map S^+→ A^+ which is simultaneously a p-complete arc-cover and a v-cover of the associated v-sheaves with A^+ a product of valuation rings with algebraically closed fields of fractions, ϖ-complete for a non-zero divisor ϖ∈ A^+. As perfect-prismatic F-crystals descend along p-complete arc covers and shtukas descend along v-covers of v-sheaves, this allows us to reduce our statement to rings of the above form. The precise assumptions on A^+ imply that A^+ is an integral perfectoid and (A,A^+)(A^+[1/ϖ],A^+) is a perfectoid Huber pair, and in fact of a very simple kind, a so-called product of points. This is where we are able to build the bridge between the adic and the schematic world. The easy direction of the equivalence is to produce a Shtuka in families from a perfect-prismatic F-crystal via base change. In particular, we get a Shtuka on (A,A^+) which comes with a vector bundle living over (A,A^+)=((A^+))-V([ϖ]) from a BKF module on A^+ which lives over ((A^+)). To construct the other direction, we proceed in two steps: First, we extend our Shtuka to (A,A^+)=((A^+) - V(p,[ϖ]). Usually this is done using the Frobenius structure and some extra assumptions on (A,A^+), but instead we will get this by using the fact that we actually have a Shtuka in families (i.e., on (B,B^+) for any map to a perfectoid Huber pair (A^+,A^+)→ (B,B^+)) and some Sen-theory magic, heavily inspired by PR. Secondly, using that A^+ is a product of points, we get the extension from to all of ((A^+)) (or, equivalently, to ((A^+))) by a result of Gleason. §.§ Structure of the paper The first two chapters serve to establish the basics for both the schematic and the adic side. All proofs are straightforward, but we introduce two new definitions: _^-bundles in <Ref> and -bundles in <Ref>, which encode the underlying vector bundle data (i.e., without the Frobenius structure) of perfect-prismatic F-crystals and shtukas in families, respectively. Notably, the -bundles sit somewhere between the well-established notions of vector bundles and v-bundles on an adic space, but the exact relationship between those three notions is surprisingly subtle and remains something to watch out for throughout the paper. All the interesting proofs are in the third section. The first three subsections tackle the extension from to and all the Sen-theory magic mentioned above: The first one contains <Ref> and <Ref>, comparing descent data on adic spaces to certain Galois cocycles. This result will be combined with methods of Sen in the second subsection to prove the general descent result <Ref> about when effectivity of a certain pro-étale descent datum can be tested both generically and pointwise. Both of these results should be interesting in their own right.
<Ref> then gets applied in the third subsection to show a classicality result on (A,A^+) which can be seen as a rational version of our main result, at least for those integral perfectoids arising as ring of integral elements of an affinoid perfectoid. The fourth subsection then deals with what happens in p=[ϖ]=0, or in other words, the extension from (A,A^+) to () and contains the main theorem of the paper. Finally, the last subsection contains a rather straightforward result about comparing the Frobenius linear structures on either side. §.§ Notation and conventions We will use the notion of integral perfectoid rings from BMS19. See [Lemma 3.20]BMS19 for a comparison with perfectoid spaces. We make the convention that we call an element ϖ in an integral perfectoid R^+ a pseudo-uniformizer if R^+ is ϖ-adically complete and ϖ^p|p, in line with the situation for perfectoid Huber pairs. Note that in general ϖ is not a pseudo-uniformizer in any literal sense, e.g. in a perfect ring of characteristic p it can always be chosen as 0. We furthermore make use of the whole formalism of adic and perfectoid spaces from SW20, always assumed to be complete and live over (_p,_p). In particular, for any perfectoid Huber pair (A,A^+) with pseudouniformizer ϖ, we will make use of the following adic spaces:
(A^+)×̇(_p) =((A^+))
(A^+)×̇(_p) =(A,A^+)=()-V(p)
(A,A^+)×̇(_p) =(A,A^+)=()-V([ϖ])
(A,A^+)×̇(_p) =(A,A^+)=()-V(p[ϖ])
(A,A^+)=(A^+)-V(p,[ϖ]).
Each of these carries a Frobenius endomorphism, which we will always call ϕ, while ϕ-linear maps will be named Φ. As above, we will use the convention (A)(A,A^∘) if and only if A=A^∘ or A=_p. §.§ Acknowledgments First and foremost, I want to thank my Ph.D. advisor Torsten Wedhorn who has taught and mentored me since my first contact with algebraic geometry 5 years ago. He encouraged me to study prismatic F-crystals even before there was an established notion of such things in the literature, and in this project in particular he found and helped to resolve many small and big mathematical errors and misconceptions. I also want to thank Christopher Lang for continual mathematical and technical assistance and Ian Gleason for many helpful conversations. This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 Geometry and Arithmetic of Uniformized Structures, project number 444845124. § THE SCHEMATIC SIDE §.§ The perfect prismatic site and the Π-complete arc topology The following variant of the arc topology was already considered in BS21 (for Π=p) and in Ito (on integral perfectoid rings). (The Π-complete arc site) Let R^+ be a Π-complete ring for Π∈ R^+ some element that divides p (think ϖ^p in an integral perfectoid) and let be a class of Π-complete R^+-algebras, stable under Π-complete tensor products[ will always be either the category of all Π-complete rings or of Π-complete integral perfectoids.]. Then the Π-complete arc topology on has covers given by maps A^+→ B^+ which satisfy the arc lifting property with respect to Π-complete rank one valuation rings V, i.e.: For every map A^+→ V we must be able to find a commutative diagram B^+ [r] W A^+ [u] [r] V [u, hook] with V→ W a faithfully flat map of Π-complete rank one valuation rings (equivalently, an injective local homomorphism). We have made an implicit claim in the footnote which we still need to verify: Let S^+ be an integral perfectoid with a pseudouniformizer ϖ∈ S^+.
Then the category of ϖ-complete integral perfectoid S^+-algebras is indeed closed under ϖ-complete tensor products. It is enough to show this for ϖ=p. Indeed, if A← B→ C is a diagram of integral perfectoid S^+-algebras, and we know their p-completed tensor product to be an integral perfectoid again, then the same must be true for its ϖ-completed one. So consider such a diagram. Then one has (A⊗̂^L_B C)_=A_⊗̂^L_B_ C_=A⊗̂_B C, where the first identity is [Proposition 8.12]BS21 and the second one is [Corollary 8.13]BS21. Now the left term is coconnective by [Lemma 8.4]BS21 and the right side is connective since it is the tensor product of connective animated rings. So the whole thing is discrete and thus, by applying [Corollary 8.13]BS21 again, a discrete perfectoid ring. We use the language of prisms of BS21. Recall that a prism is called perfect if its Frobenius endomorphism is bijective. Perfect prisms are in equivalence to integral perfectoid rings by (A,I)↦ A/I. The underlying category of the perfect-prismatic site over some p-complete ring R^+ is the restriction of the (absolute) prismatic site over R^+ to the sub-site containing only perfect prisms. Equivalently, it is the opposite of the category of integral perfectoid R^+-algebras.We can equip it with two different topologies: * The flat topology, obtained by restriction of the topology on the usual prismatic site; Here, covers are given by maps (A,I)→ (B,J) such that A→ B is (p,I)-complete faithfully flat. * We can also consider the much finer p-complete arc topology, obtained by restriction of the relevant topology on the category of all p-complete rings to the category of integral perfectoids, i.e., covers are given by maps (A,I)→ (B,J) such that A/I→ B/J is a p-complete arc cover. We will use (R^+)_^ to refer to the underlying category and will always explicitly mention which topology we put on it whenever it is relevant. This is similar to the situation for the usual prismatic site, which is also defined as having the flat topology, but when working with finite locally free sheaves on it, the more natural choice of topology is the quasi-syntomic topology, as used e.g. in <cit.>. Again, we made a claim in the definition which we should prove. In the context of <Ref>, the p-complete arc topology is indeed finer than the flat topology. Let (A,I)→ (B,J) be a (p,I)-completely faithfully flat map. Then certainly A/I→ B/J is p-completely faithfully flat. Let A/I→B̃ be a faithfully-flat model (i.e. a faithfully-flat map whose p-completion is our map A/I→ B/J). Then it is an arc cover as any faithfully flat map is, see e.g. Lemma 20.3.3 in KedlayasWebsite. Thus we can lift any map to a rank one valuation ring, and after p-adically completing B̃, this is still possible with maps to p-complete rank one valuation rings. Let R^+ be a p-complete ring. By restricting their analogs from the non-perfect prismatic site, we get the following sheaves for the flat topology on (R^+)_^: * The reduced structure sheaf _^(A,I)↦ A/I. * The (full) structure sheaf _^(A,I)↦ A. * The canonical ideal sheaf _^(A,I)→ I. All of these should also be sheaves for the p-complete arc topology, but we will not need this explicitly – it will however follow for and [1/] from the proof of <Ref>. §.§ Locally free sheaves on the perfect-prismatic site The following result will be used at multiple points: Ito Let R^+ be an integral perfectoid ring with pseudouniformizer ϖ∈ R^+. 
Sending an integral perfectoid ϖ-complete R^+-algebra S^+ to the category of finite-projective (S^+)-modules satisfies ϖ-complete arc descent. We say that a presheaf of _^-modules on (R^+)_^ satisfies the crystal condition if for every map of perfect prisms (A,I)→ (B,J) over R^+ the natural map (A,I)⊗_A B→(B,J) is an isomorphism. With this, one could define quasi-coherent prismatic crystals (as they do in AB21), but we will not do this. Let R^+ be a p-complete ring. An ^_-bundle over R^+ is a presheaf of -modules on (R^+)_^ subject to one of the following equivalent conditions: * is a sheaf for the topology τ and finite locally free in the topology τ', for any choice (τ,τ')∈{(,),(,),(,)}. * sends any prism (A,I) to a finite-projective A-module and satisfies the crystal condition. First, every rule as in (2) is an arc -sheaf by <Ref>, and flat-locally free as it is Zariski-locally free on A, and a completed Zariski-localization of a perfect prism is again a perfect prism (see [Remark 2.16]BS21 for localization of delta rings) and the completed localization map will certainly be a (p,I)-complete flat cover.Now every arc sheaf is a flat sheaf and every sheaf that is flat-locally free is also arc-locally free, so we get the other two variants in (1) from the one discussed above. Finally, we have to show that a τ-sheaf that is τ=τ'-locally free only admits values in finite projective modules and satisfies the crystal condition. The former is a direct consequence of <Ref> (or its variant for the flat topology, following from <Ref>), the latter is a standard argument: Let (A,I)→ (B,J) be a map in (R^+)_^ and let (A,I)→( A', I') be a τ-cover trivializing . Write (A”,I”) =( A', I')⊗_(A,I)( A', I') ( B', J') =( A', I')⊗_(A,I)(B,J) ( B”,J”) =(A”,I”)⊗_(A,I)(B,J). Then is also free on all those. We now have a commutative diagram 0 [r] (A,I)⊗_A B [d] [r] ( A',I')⊗_A' B' [d] [r] (A”,I”)⊗_A” B” [d] 0 [r] (B,J) [r] ( B', J') [r] (B”,J”). The rows are exact by the sheaf condition (here it becomes relevant that τ=τ'), and the two vertical arrows on the right are isomorphisms because we already know to be free on those prisms and because tensor products commute with finite products. The result follows by the five lemma. For R^+ an integral perfectoid ring, there are mutually inverse exact tensor equivalences between -bundles on R^+ and finite projective (R^+)-modules given by the obvious constructions, namely: evaluating the -bundle at ((R^+),(d)) resp. sending an (R^+)-module M to (A,I)↦ M⊗_(R^+) A. The equivalence of categories follows immediately from characterization 2. in <Ref>. Compatibility with tensor products is clear. For exactness, we need to show that a map between _-modules is surjective (i.e., surjective (p,I)-completely flat locally) if and only if it is surjective on global sections. This follows from <Ref> and the following Lemma. Let R^+→ S^+ be a ϖ-complete arc cover between ϖ-complete integral perfectoid rings for some ϖ∈ R^+ dividing p. Then a sequence of finite projective (R^+)-modules 0→ M_1→ M_2→ M_3→ 0 is exact if and only if its base change to (S^+) is. We want to apply [Lemma 2.3.6]PR, which implies the statement of the theorem under the condition that (1) the map of rings (R^+)→(S^+) is injective and (2) that the induced map of schemes ((S^+))→((R^+)) is surjective on closed points. I claim that for both (1) and (2) it is enough to show the relevant statement for the map of rings R^+→ S^+. 
Indeed, for (1) one obtains the map (R^+)→(S^+) by applying first the tilt functor and then the Witt vector functor, both of which preserve injections. For (2), note that as (R^+) is d-complete for a distinguished element d∈(R^+) such that (R^+)/d=R^+, all closed points of ((R^+)) actually lie in the closed subset V(d)=(R^+). Now to prove (2), note that by ϖ-completeness of R^+, all closed points of R^+ must even lie in (R^+/(ϖ)). But as R^+→ S^+ is a ϖ-complete arc-cover, R^+/(ϖ)→ S^+/(ϖ) must be an arc-cover in the usual sense, so the associated map of schemes is surjective on topological spaces. For (1), let a∈ R^+ be a non-zero element. Then, as R^+ is ϖ-complete, there exists a map to a ϖ-complete valuation ring R^+→ V mapping a to a non-zero element. Applying the arc lifting property to this shows that a must also be non-zero in S^+. For the final descent result of this chapter, we need a result by Scholze-Česnavicius, a more precise version of which we will also prove later in <Ref> CS21[in loc. cit., they only prove the p-complete version, but the Π-complete one can be done with the same proof. In any case, our finer version in <Ref> will also produce a Π-complete arc cover.] Let R^+ be a Π-complete ring for some Π∈ R^+ dividing p. Then R^+ has a Π-complete arc cover R^+→ A^+ where A^+ is a product of Π-complete valuation rings with algebraically closed field of fractions. In particular, A^+ is integral perfectoid. Let R^+ be a Π-complete ring for some Π∈ R^+ dividing p. Sending a Π-complete R^+-algebra S^+ to the category of -bundles on S^+ satisfies Π-complete arc descent. This is compatible with the exact tensor structure, i.e. for a Π-complete arc cover S^+→S'^+, the equivalence of categories between -modules on S^+ and those on S'^+ together with a descent datum can be upgraded to an exact tensor equivalence. It suffices to show this on the basis of the Π-complete arc topology specified in <Ref>. In particular, we can assume that S^+ is perfectoid, so by <Ref> we just have to show descent for mapping S^+ to finite projective (S^+)-modules. Also, by absolute integral closedness, there is some pseudouniformizer ϖ∈ S^+ such that ϖ^p=Π. But then the ϖ-adic topology is the same as the Π-adic one, and Π-complete arc covers are the same as ϖ-complete ones, so we can apply <Ref> to conclude. Compatibility with tensor products is clear. For compatibility with exactness, we need to show that a sequence of (S^+)-modules is exact if and only if it is exact after pullback along a ϖ-complete arc cover S^+→ S'^+, which we have shown in <Ref>. §.§ Adding a group structure Recall the Tannaka formalism over discrete valuation rings: Let V be a discrete valuation ring and let be any affine flat group scheme over V. Then for any V-scheme X, there is an equivalence of categories between -torsors on X and exact tensor functors from the category of finite rank representations of to the category of vector bundles on X. We will use this statement essentially as the definition of -bundle. Let be any affine flat group scheme over _p and let R^+ be a p-complete ring. A --bundle over R^+ is given by either of the following equivalent definitions: * An exact tensor functor from the category of finite rank representations of to the category of -bundles over R^+. * A rule sending each (R^+→ S^+)∈ (R^+)_^ to a -bundle on (S^+), compatible with pullbacks. Indeed, the two definitions are equivalent as (R^+)_^ carries the (p,I)-complete flat topology and exactness can be checked (p,I)-complete flat-locally by <Ref> and <Ref>.
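As a sanity check on this definition, consider the case 𝒢 = GL_n (a standard Tannakian observation, spelled out here only for orientation): by the Tannaka formalism just recalled, an exact tensor functor out of the finite rank representations of GL_n corresponds to its value on the standard representation, giving an equivalence
\[
  \big\{\,\mathrm{GL}_n\text{-bundles over } R^{+}\text{ in the sense above}\,\big\}
  \;\simeq\;
  \big\{\,\text{bundles over } R^{+}\text{ as in the previous subsection, of rank } n\,\big\},
  \qquad \omega \;\longmapsto\; \omega(\mathrm{std}).
\]
This is also the reduction used repeatedly below, where statements are first proved for GL_n and then transported to general 𝒢 by exactness.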
Since our equivalences from the previous sections were exact, they immediately transfer to --bundles. In particular: * If R^+ is an integral perfectoid ring, the category of --bundles on R^+ is equivalent to the category of -torsors on its associated perfect prism ((R^+),d). * Let R^+ be a Π-complete ring for some Π∈ R^+ dividing p. The functor sending a Π-complete R^+-algebra S^+ to the groupoid of --bundles on S^+ satisfies Π-complete arc-descent. §.§ Adding a Frobenius structure: perfect-prismatic F-crystals Recall that each prism comes equipped with a Frobenius endomorphism ϕ, and these assemble to a Frobenius isomorphism of sheaves of rings →, also denoted ϕ. We now define perfect-prismatic F-crystals, and immediately add a -structure as well. Let R^+ be a p-complete ring and →(_p) an affine flat group scheme. * A perfect-prismatic -crystal on R^+ is a -_-bundle together with a Frobenius-linear isomorphism Φϕ^*[1/][1/]. * A perfect-prismatic F-crystal is a perfect-prismatic _n-crystal for some n. The comparison with BKF modules now follows immediately from <Ref>: Let R^+ be an integral perfectoid with associated perfect prism (A,(d)). Then the equivalence of categories from <Ref> induces an equivalence of categories between * perfect-prismatic F-crystals on R^+ * BKF modules on R^+, i.e. finite projective (R^+)-modules M together with an isomorphism Φϕ^*M[1/d]→ M[1/d]. Basically, a perfect prismatic -crystal has three components going into it: an object of (R)_^, the structure and the Frobenius structure. By the exactness properties we already showed, we can choose the order in which we apply them, leading to ♯ S_3=6 slightly different variations of the definition. Let's write down the most important one explicitly: Perfect-prismatic -crystals on R^+ are equivalent to exact tensor functors from the category of finite-dimensional -representations to the category consisting of rules sending an (R^+→ S^+)∈ (R^+)_^ to a finite projective (S^+)-module M together with an isomorphism ϕ^*M[1/d]→ M[1/d]. This allows us to reduce all questions about perfect-prismatic -crystals to perfect-prismatic -crystals plus the appropriate exactness property. Let R^+ be a Π-complete ring for some Π∈ R^+ dividing p and let be an affine flat group scheme over _p. Sending a Π-complete R^+-algebra S^+ to the category of prismatic -crystals on S^+ satisfies Π-complete arc descent. As in <Ref>, it suffices to check the descent on the basis of the topology given by R^+-algebras S^+ which are products of Π-complete valuation rings with algebraically closed field of fractions. Also, using <Ref>, it is enough to show the result for =_n and compatibility with exactness. Descent of the underlying finite projective (S^+)-modules was the point of <Ref>, so it suffices to show that we can also descend a ϕ-linear isomorphism Φϕ^*[1/d]→[1/d]. This can be done exactly as in <Ref>: Namely, we first multiply Φ by a power of d to obtain a morphism between the -bundles and ϕ^*, which we can descend using <Ref>, and divide by the same power of d again. We then use that the map on global sections associated to an arc cover is injective to show that the descended morphism is still an isomorphism after inverting d. The statement for the exact structure is clear from the relevant statement for -crystals. § THE ADIC SIDE §.§ Generalities on the v-site We now start dipping into the adic world, namely v-sheaves on the category of perfectoid Huber pairs in characteristic p and certain locally free sheaves on such things.
(letters for v-sheaves) Just for this section, we make the following notational convention: We will denote a v-sheaf that comes with a structure map to _p with a normal letter with a diamond in the lower index, like X_♢. It may or may not (but in our applications always will) be given as X^♢ for some adic space X. In particular, whenever we have a map from a representable (B,B^+)→ X_♢, we get a canonical untilt (B^♯,B^+♯) supplied by the composition (B,B^+)→ X_♢→(_p). In contrast, v-sheaves without a structure map to (_p) will be denoted by cursive letters such as . (covers of v-sheaves) A map '→ of v-sheaves is called a v-cover or simply cover if it is surjective in the sheaf-theoretic sense. Explicitly, for any map from a representable (i.e. an affinoid perfectoid in characteristic p) (B,B^+)→, one has to be able to find a commutative diagram (B',B'^+) [d] [r] ' [d] (B,B^+) [r] with (B',B'^+) also representable and the left map being a v-cover of affinoid perfectoids. Recall that a v-sheaf is called small if it admits a cover by a perfectoid space, which is really just a set-theoretic bound and holds for most v-sheaves arising in practice. For v-sheaves arising from certain p-complete rings one can even cover them by a single representable, which will be useful later: Consider A^+ a complete topological ring carrying the Π-adic topology for some Π∈ A^+ dividing p. Then (A^+) is quasi-compact (in the sheaf theoretic sense). In particular, there is a v-cover by a representable (B,B^+)→(A^+). It is enough to find a cover of (A^+) by a quasi-compact v-sheaf. We know from [Chapter 15]etCohDiamonds that the v-sheaves associated to affinoid analytic adic spaces are spatial diamonds, so in particular quasi-compact in the sheaf-theoretic sense, so it suffices to find a cover by a finite number of these. Consider the A^+-algebra Ã^+=A^+[[t]], equipped with the (Π,t)-adic topology. Then (Ã^+) is not an analytic adic space, but X(Ã^+) - V(t,Π) is[This is completely analogous to () and .] and one quickly convinces oneself that X^♢→(A^+) still satisfies the lifting criterion of a v-cover: Indeed, if (A^+,A^+)→ (B^♯,B^+♯) is a map to a perfectoid Huber pair that we want to lift, we first produce a map Ã^+→ B^+♯ by sending t to the pseudouniformizer of B^♯. This will still be continuous, and furthermore the image of t in (B^♯,B^+♯) vanishes nowhere, so the map (Ã^+)→(B^♯,B^+♯) factors through X. Now X can be covered by the two affinoids U_1 X(Π/t), U_2 X(t/Π) and the composition (U_1⊔ U_2)^♢→ X^♢→(A^+) is our desired cover. Let be a v-sheaf. We get certain sheaves of rings on the v-site _/ of characteristic p affinoid perfectoids over : * The (tilted) structure sheaf ^♭_/ ((B,B^+)→)↦ B * The -sheaf ^_/ ((B,B^+)→)↦Γ((B,B^+),_(B,B^+)), see <Ref> though. If comes equipped with a map to (_p), so we would call it X_♢ in line with <Ref>, we furthermore have: * The untilted structure sheaf ^♯_/X_♢ ((B,B^+)→ X_♢)↦ B^♯. * The canonical (untilted) ideal sheaf ^♯_/X_♢ mapping some (B,B^+)→ X_♢ to the global sections of the ideal sheaf generated by ξ inside (B,B^+). Note the strong similarity with the sheaves on the perfect-prismatic site from the first chapter. Also note the slight annoyance that (B,B^+) is not affinoid, so we cannot literally define the -bundles below as finite locally free ^_/-modules. In fact, we only defined ^_/ to underline the conceptual similarity with the prismatic site. §.§ Locally free sheaves on the v-site: v-bundles and -bundles The following notion seems to be well-known, e.g.
a special case is studied in Heu21: Let X_♢ be a small v-sheaf over (_p) (in cases of interest to us, this will always be given by X^♢ for some adic space X). Recall v-descent of vector bundles on perfectoid spaces from [Lemma 17.1.8]SW20. It implies that the following sets of data are equivalent; either of these will be called a v-bundle on X_♢. The proofs of the equivalence are very similar to <Ref> and shall not be repeated here. * A finite locally free ^♯_/X_♢-module, * A rule assigning to any (B,B^+)∈ and (B,B^+)-valued point α(B,B^+)→ X_♢ with associated untilt (B^♯,B^+♯) (supplied by the map to (_p)) a vector bundle on (B^♯,B^+♯) which we will call α^*, or, if α is clear from the context, |_(B^♯,B^+♯). This rule has to be compatible with pullbacks, more precisely: For any map f(A,A^+)→ (B,B^+) in , which also gives us a map f^♯ (A^♯,A^+♯)→ (B^♯,B^+♯), we want f^♯*(α^*)=(α∘ f)^*. Here f^♯* just denotes the usual pullback of vector bundles along maps of adic spaces. * A 2-truncated v-hypercover X”'^♢ → → → X”^♢⇉ X'^♢→ X_♢ by v-sheaves associated to perfectoid spaces (not necessarily in characteristic p), which we can always find by smallness of X_♢, with the maps assumed to be compatible with the structure maps to (_p), together with a descent datum of vector bundles on X”' → → → X”⇉ X'. The second definition makes it clear that we can pull back v-bundles along maps of v-sheaves over (_p). Even more relevant to us will be a slightly different notion: Let be any small v-sheaf and set =_p or =_p. Recall the v-descent of vector bundles on open subspaces of , [Proposition 19.5.3]SW20. As before, this implies that the following two sets of data are equivalent, either of which we will call an integral (if =_p) resp. a rational (if =_p) -bundle: * A rule associating to any map ∋(B,B^+)→ a vector bundle on (B,B^+)×̇()= (B,B^+) if =_p (B,B^+) if =_p, compatibly with pullbacks along maps (A,A^+)→ (B,B^+) in _/ in the same way as for v-bundles. * A 2-truncated hyper-v-cover X”' → → → X”⇉ X'→ by perfectoid spaces in characteristic p with a descent datum of finite projective modules on X”'×̇() → → → X”×̇()⇉ X'×̇(). Again, the first definition gives us a pullback functor from rational/integral -bundles along maps of v-sheaves →'. I would have loved to call integral -bundles “-bundles” and rational -bundles “-bundles" instead. This would however cause a dangerous clash of notation, as e.g. a “-bundle on (A^+)" for A^+ some ring of integers of a perfectoid Huber pair (A,A^+) with pseudouniformizer ϖ could be reasonably understood as a vector bundle on (A,A^+). On the other hand, an integral -bundle in our sense consists of a vector bundle on (B,B^+) for any map to a perfectoid Huber pair (A^+,A^+)→ (B,B^+). In contrast to the former, the latter very much includes information about what happens in [ϖ]=0, for example in the evaluation on (B,B^+)(A^+/(ϖ)[[t]]^∧ p[t^-1],A^+/(ϖ)[[t]]^∧ p). <Ref> (2) allows us to view the category of integral/rational -bundles on some small v-sheaf as a full subcategory of the category of v-bundles on ×(). In GI23, Gleason and Ivanov consider the same objects, but under slightly different names: In loc.cit., what we call v-bundles is called families of untilted vector bundles with the category denoted _v^^♯(X_♢). Similarly, our integral (resp. rational) -bundles on some are denoted ^_() (resp. ^_Y()), with standing for classical.
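As a sanity check on the second of these notions, it may help to unwind it in the representable case (this is only a reformulation and anticipates property 3 of the proposition on comparison functors below): for an affinoid perfectoid space Spa(B,B^+) in characteristic p, the identity map is a final object of the v-site over it, so such a bundle is determined by its value on the identity, giving
\[
  \big\{\,\text{integral bundles in the above sense on } \operatorname{Spa}(B,B^{+})\,\big\}
  \;\xrightarrow{\ \sim\ }\;
  \big\{\,\text{vector bundles on } \operatorname{Spa}(B,B^{+})\,\dot{\times}\,\operatorname{Spa}(\mathbb{Z}_{p})\,\big\},
  \qquad \mathcal{E} \;\longmapsto\; \mathcal{E}(\mathrm{id}),
\]
and similarly in the rational case with Spa(ℚ_p) in place of Spa(ℤ_p). The notion therefore only carries extra content over non-representable v-sheaves such as (A^+) for an integral perfectoid A^+, which is exactly the case of interest for the main theorem.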
The functors * associating to a small v-sheaf X_♢ over (_p) the category of v-bundles on X_♢, * associating to a small v-sheaf the category of integral/rational -bundles on satisfy descent along covers of small v-sheaves (assumed to be compatible with the maps to (_p) in the first case). By the already cited descent results [17.1.8]SW20 and [19.5.3]SW20 each is a sheaf on affinoid perfectoids, which form a basis for the v-topology: Indeed, by definition every small v-sheaf has a cover by a perfectoid space in characteristic p, which in turn clearly has a cover by affinoid perfectoids. In cases where this makes sense, there are base change functors from vector bundles to v-bundles, from -bundles to v-bundles and from vector bundles to -bundles; the constructions work in somewhat different generalities, so listing all three is not redundant. * (vector bundles to v-bundles) Let X be any adic space over _p, and consider its associated v-sheaf X^♢. We get a functor from vector bundles on X to v-bundles on X^♢ as follows: Let be a vector bundle on X. The (B,B^+)-valued points of X^♢ are by definition given by an untilt (B^♯,B^+♯) together with a map (B^♯,B^+♯)→ X. By definition of the map X^♢→(_p), this untilt agrees with the one given by the composition (B,B^+)→ X^♢→(_p). Pullback of along (B^♯,B^+♯)→ X defines a rule for a v-sheaf, and compatibility with pullbacks is obvious by the definition. * (-bundles to v-bundles) Now consider any small v-sheaf . In this context, there is a functor from integral (resp. rational) -bundles on to v-bundles on Y_♢=×() for =_p (resp. =_p) (with the map to (_p) given by projection to the second component): Let be such a -bundle. We need to assign to every map from a representable ∋ T→ Y_♢ a vector bundle on T^♯. To do this, choose a cover X'→ by a perfectoid space in characteristic p. This gives us a vector bundle _X' on X'×̇() with descent information with respect to Y_♢. Consider the fibre product T×_Y_♢(X'×()) and cover this in turn with a v-sheaf T' representable by a perfectoid space, which we are able to do by smallness. In total, we have constructed the following diagram of v-sheaves over (_p): T' [d, two heads] [ld, two heads] T [d] T×_Y_♢(X'×()) [d] [l, two heads] Y_♢=×() X'×() [l, two heads] The composition of the two right vertical maps gives us a map T'→ (X'×()) of v-sheaves over (_p), corresponding to a map of adic spaces T'^♯→ X'×̇(). The crucial reason why we get to do this untilt is that T' is perfectoid. Pulling back _X' along this map gives us an actual vector bundle _T'^♯ on T'^♯. In a similar way, we can pull back the entire descent datum to a descent datum of _ T'^♯ over T^♯, allowing us, via v-descent of vector bundles over perfectoids, to descend _T'^♯ down to T^♯.
* (vector bundle to -bundles) Now keep the assumption from (2) and assume furthermore that Y^♢=Y_♢=×() for a suitable adic space Y – this is the case notably if is either a perfectoid Huber pair (B,B^+) in characteristic p or (A^+,A^+) for an integral perfectoid A^+, equipped with the ϖ-adic topology for a suitable pseudouniformizer ϖ[Note the following subtlety for =_p: If A^+ is perfect of characteristic p with the discrete topology, then Y(A^+)×̇(_p)=(W(A^+)[1/p],W(A^+)); meanwhile, if A^+ carries the ϖ-adic topology for some pseudouniformizer ϖ which is a non-zero divisor, so that (A^+[1/ϖ],A^+) is a perfectoid Huber pair, then Y=(A^+[1/ϖ],A^+); i.e., the inversion of p can happen either in a rational or a transcendental way, and which one of these happens might depend completely on the choice of ϖ!]. In this context we can associate to any vector bundle on Y an integral/rational -bundle on , very similarly to (1): Let (B,B^+)→ be any map from a representable. Then we can simply associate to this map the pullback of along (B,B^+)×̇()→ Y obtained by untilting the map (B,B^+)×()→×(). * (integral to rational -bundles) For any v-stack , there is also a functor from integral to rational -bundles in the obvious way. In cases where it makes sense, these constructions commute. These functors enjoy the following properties: * Let X be a sousperfectoid adic space. Then the functor from vector bundles on X to v-bundles on X^♢ is fully faithful, and an equivalence of categories if X is perfectoid. * For any small v-sheaf , the functor from integral (resp. rational) -bundles on to v-bundles on ×(_p) (resp. ×(_p)) is fully faithful. * If X is a perfectoid adic space, then the functor from vector bundles on (X) (resp. (X)) to integral (resp. rational) -bundles on X is an equivalence of categories. There are also versions of the third statement for X=(A^+) the adic spectrum of an integral perfectoid ring: For rational -bundles (under extra assumptions), this is <Ref>, while for integral -bundles, this is <Ref>, our main result. 3. is clear for X=(B,B^+) affinoid perfectoid because the v-site over (B,B^+) has (B,B^+) itself as a final object and our -bundles are assumed to be compatible with pullback. The statement for general perfectoid spaces X follows from analytic glueing. 1. The equivalence for perfectoid X can be done the same way as the proof of 3., i.e., it is trivial for X affinoid perfectoid and the general version follows by glueing. Regarding the full-faithfulness for sousperfectoid spaces, recall the notion of v-completeness from [Definition 9.6]HK20: A Huber pair (A,A^+) is called v-complete if the natural map (A,A^+)→ (Ǎ,Ǎ^+)(H^0((A,A^+)_v,),H^0((A,A^+)_v,^+)) is an isomorphism. In [Lemma 11.4 (a)]HK20, it is shown that any sousperfectoid ring is v-complete. Let _1,_2 be two vector bundles on X. Since the question is analytic-local on X, we can assume X=(A,A^+) is affinoid (and still sousperfectoid, thus v-complete[The reason we need sousperfectoidness instead of just v-completeness for the theorem is that as far as I know v-completeness is not known to be preserved by passing to open subspaces, see Remark 9.9 in loc. cit.]) and the _i are free. Now a morphism of vector bundles _1→_2 is simply given by a matrix with coefficients in Γ(X,_X), while a morphism of v-bundles _1→_2 is given by a matrix with coefficients in H^0(X_v,) by description 1. in <Ref>. Full faithfulness now follows by v-completeness of X. 2. We first show the result in case =(B,B^+) is representable.
Then the functor from vector bundles to -bundles is an equivalence by 3. and both (B,B^+)×̇(_p)=(B,B^+) and (B,B^+)×̇(_p)=(B,B^+) are sousperfectoid, so the functor from vector bundles to v-bundles is fully-faithful by 1. This formally implies full-faithfulness of the functor from -bundles to v-bundles. Now let be a general small v-sheaf, let _1,_2 be two integral/rational -bundles on , choose as _p or _p accordingly, and let _i^v be the associated v-bundles on ×(). Let α_1→_2 be a morphism of -bundles and α^v_1^v→_2^v the associated map of v-bundles. Now for any map from an affinoid perfectoid (B,B^+)→, pullback induces a map of v-bundles α^v|_(B,B^+)×()_1^v|_(B,B^+)×()→_2^v|_(B,B^+)×(), which, by the affinoid perfectoid case just proved, corresponds to a unique map of -bundles α|_(B,B^+)_1|_(B,B^+)→_2|_(B,B^+). Letting (B,B^+) vary, the α|_(B,B^+) are exactly the data needed to define a morphism of -bundles, so we see that α is uniquely determined by α^v. This shows faithfulness. For fullness, by the affinoid perfectoid case discussed above, it suffices to show that a morphism of v-bundles _1^v→_2^v is uniquely determined by its pullbacks to (B,B^+)×() for varying representables (B,B^+). This follows from compatibility with pullbacks and the following straightforward lemma. Let be a v-sheaf, let (B,B^+) be a not necessarily characteristic p affinoid perfectoid and let (B,B^+)^♢→×() be a map of v-sheaves, compatible with the structure maps to (_p). Then there exists a factorization (B,B^+)^♢→(B^♭,B^+♭)×()→×(), compatible with the structure maps to (_p) (which in the middle, just as on the right, is given by projection to the second component). Disregarding the structure maps to (_p) for just a second, a map (B,B^+)→×() is a (B^♭, B^+♭)-valued point x∈(B^♭, B^+♭) together with an untilt (B^♭♯, B^+♭♯). Now compatibility with the structure map to (_p) just means that (B^♭♯, B^+♭♯) agrees with (B,B^+). This makes it clear that taking (B,B^+)→(B^♭,B^+♭)×() the natural embedding and (B^♭,B^+♭)×()→×() the map induced from the (B^♭, B^+♭)-valued point x defines a factorization as desired. §.§ Adding a group structure [One could of course also define -v-bundles, but this will not be necessary.] For an affine flat group scheme over _p and a v-sheaf , an integral --bundle on is one of the following equivalent pieces of data: * Associating to any representable (B,B^+)→ a -bundle on (B,B^+)=(B,B^+)×̇(_p). * An exact tensor functor from finite dimensional representations of to integral -bundles on . * The equivalence also holds for rational -bundles with the same proof. * I would think that for the reduction step to _n-torsors in the very beginning of [Proposition 19.5.3]SW20, one needs the statement of my proposition (at least for =(B,B^+) perfectoid, to which we will reduce in the beginning of the proof). So maybe the statement is obvious or well-known in a way I'm not aware of. Both pieces of data are functors associating to a finite-rank representation of and a representable (B,B^+)→ a vector bundle on (B,B^+). To show that the conditions imposed on either side are the same, we need to show that the notions of exactness on both sides align. More precisely, we need to show that if a sequence of integral -bundles on some perfectoid (B,B^+) is exact (i.e., it is exact after passage to a v-cover (A,A^+)→(B,B^+) and then analytically localizing on (A,A^+)) then already the associated map of vector bundles on (B,B^+) is exact (i.e. surjective after just localizing analytically on (B,B^+)).
So consider a short exact sequence 0→_1→_2→_3→ 0 of -bundles on (B,B^+) and let (A,A^+)→(B,B^+) be a v-cover (which can always be chosen to be affinoid as (B,B^+) is quasi-compact as a v-sheaf) such that the evaluation of our SES at (A,A^+) is (analytic-locally) exact on (A,A^+). I claim that the underlying map of topological spaces |(A,A^+)|→ |(B,B^+)| is surjective. Indeed, as both adic spaces are analytic, we have |(A,A^+)|=|(A,A^+)^♢|=|(A,A^+)×(_p)|, and similarly for (B,B^+). But as |(A,A^+)|→|(B,B^+)| is surjective by assumption, the same must be true after taking the product with (_p) on either side, showing the claim. But both (A,A^+) and (B,B^+) are reduced, so surjectivity on the level of topological spaces at once implies that exactness on the latter can be checked after pullback to the former. §.§ Adding a Frobenius structure: Shtukas Let still be an affine flat group scheme over _p. (Scholze-Weinstein Shtukas) By a -Shtuka over some affinoid perfectoid (R,R^+) of characteristic p we mean the data of * An untilt of (R^♯,R^+♯) over _p (so our Shtukas will always be in the integral sense), * A torsor over (R,R^+)=W(R^+)-V(ϖ), * An isomorphism of torsors Φϕ^*|_-(R^♯,R^+♯)→|_-(R^♯,R^+♯) that is meromorphic with respect to a distinguished element ξ cutting out (R^♯,R^+♯)⊂(R,R^+)×̇(_p). A Shtuka with fixed leg over some not necessarily characteristic p perfectoid Huber pair (B,B^+) is a Shtuka over (B^♭,B^+♭) where the untilt (B^♭♯,B^+♭♯) that comes as part of the datum is given by (B,B^+). Let X_♢ be a v-sheaf together with a map to (_p). A (family of) _n-Shtuka(s) over X_♢ is given by * for any map from a representable (B,B^+)→ X_♢ with associated untilt (B^♯,B^+♯), a _n-Shtuka with fixed leg on (B^♯,B^+♯), compatibly with pullbacks. Similarly, -Shtukas with fixed leg over X_♢ are given by one of the following equivalent pieces of data: * For any map from a representable (B,B^+)→ X_♢ with associated untilt (B^♯,B^+♯), a -Shtuka with fixed leg on (B^♯,B^+♯), compatibly with pullbacks. * An exact tensor functor from the category of -representations to the category of _n-Shtukas on X_♢ for varying n. Similarly to the prismatic -crystals discussed earlier, we have three “components" going into a (family of) mixed characteristic Shtuka; just that because (B,B^+) is not affinoid, we have to add the Frobenius structure before we pass to families of perfectoids, leaving us with 3 slightly varying definitions, as opposed to 6 on the schematic side (the first point of the definition above kind of contains 2 different variants as we can decide if we first add the F- or -structure). -Shtukas with fixed leg on X^♢ for X an adic space have a slightly simpler description: They are given by a rule associating to any map (B,B^+)→ X from a (not necessarily characteristic p) affinoid perfectoid a -Shtuka with fixed leg on (B,B^+). Associating to a small v-sheaf X_♢ over (_p) the -Shtukas with fixed legs on X_♢ satisfies descent along v-covers that are compatible with the structure maps to (_p). It is enough to show this on the basis given by affinoid perfectoids (not necessarily in characteristic p). Without fixing the legs, this is just the well-known statement that Shtukas on affinoid perfectoids satisfy v-descent shown in SW20. To obtain the statement about fixed legs, we use that the covers are compatible with the maps to (_p) by assumption and that a map of affinoid perfectoids is a v-cover if and only if the associated map between diamonds is a cover of v-sheaves.
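Before moving on, here is a minimal rank-one instance of the shtuka definition above (a standard example, included only as an illustration; notation as in the definition, with a fixed choice of the distinguished element ξ): over a single affinoid perfectoid (R,R^+) of characteristic p with fixed untilt (R^♯,R^+♯), take the trivial line bundle and a power of ξ as Frobenius,
\[
  \mathcal{E} \;=\; \mathcal{O}, \qquad
  \Phi \;=\; \xi^{d}\cdot \mathrm{id} \;\colon\;
  \phi^{*}\mathcal{E}\big|_{\{\xi \neq 0\}}
  \;\xrightarrow{\ \sim\ }\;
  \mathcal{E}\big|_{\{\xi \neq 0\}},
  \qquad d \in \mathbb{Z},
\]
using the canonical identification ϕ^*𝒪 ≅ 𝒪. Multiplication by ξ^d is invertible away from V(ξ) and visibly meromorphic along ξ, so this is a shtuka of rank one with leg at the fixed untilt. Note that this is written over a single (R,R^+); spreading it out to a family over a v-sheaf X_♢ would require choices of ξ compatible with pullback.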
§ EQUIVALENCE OF THE TWO CATEGORIES §.§ Galois descent of vector bundles on adic spaces In algebraic geometry, descent data along certain finite étale covers can be reformulated using Galois representations, and in fact this can be generalized to infinite Galois extensions. We want to study an analogue of this for adic spaces, i.e., with completions at various points. We will do this in somewhat more generality than necessary for our purposes, as it should be of interest in its own right. Throughout this subsection, fix: * K any non-archimedean field with ring of integers _K and (pseudo)uniformizer ϖ, * B a reduced Tate Banach K-algebra[I don't think it is necessary to assume Tateness, but it has helped with some topological considerations.]. Note that the image of ϖ in B, which we will also denote by ϖ, is automatically a pseudouniformizer as both K and B are Tate. * L an infinite Galois extension of K and L̂ its completion, * B_L=B⊗_K L and B̂_L=B⊗̂_K L̂, * Γ=(L/K), * a non-commutative Banach Tate B-algebra, finitely presented as a B-module, and its base changes _L⊗_B B_L and _L⊗̂_B B̂_L=⊗_B B̂_L[Whenever we tensor with a finitely presented module, no completion is necessary: In fact, if R is a complete topological ring and M is a finitely presented module over R with the canonical topology defined below, M is automatically complete. This is clearly true for finite free modules M and follows for finitely presented ones as a finitely generated submodule of such a free module can be shown to be closed.]. The model case and the only one relevant for the sequel is =_B(P) for a finite projective B-module P, so that _L=_B_L(P⊗_B B_L) and _L=_B̂_L(P⊗_B B̂_L). Let's start with some remarks on topology: As any finitely generated module over a topological ring, carries a canonical topology induced from a quotient map of B-modules B^k↠. Using the linear nature of the topology on B, we can give this a more algebraic description: Let _0⊂ be a finitely generated B_0-submodule, where B_0⊂ B is a ring of definition, such that _0⊗_B_0 B=. Then a neighbourhood basis of 0 in is given by ϖ^n _0⊂. Choose a surjection B^k_0↠_0. As tensoring is right exact, this gives rise to a commutative diagram of B_0-modules with both horizontal maps surjective: B_0^k [r, "p_0", two heads] [d, hook] _0 [d, hook] B^k [r, "p", two heads] Now by definition of the topology on , a subset 0∈ U⊆ is a neighbourhood of 0 if and only if its preimage p^-1(U)⊆ B^k is, which, as B_0⊂ B is open, is the case if and only if p^-1(U)∩ B_0^k is a neighbourhood of 0 in B_0^k. But a neighbourhood basis in B_0^k is given by the ϖ^n B_0^k, so in total U⊆ is a neighbourhood of zero if and only if ϖ^n B_0^k⊆ p^-1(U) for some n∈, which, by B_0-linearity and surjectivity of the upper arrow, is the case if and only if ϖ^n_0⊂ U. Going in another direction, B' B⊗_K K' for a finite subextension K⊆ K'⊂ L as well as B_L and B̂_L carry canonical topologies: As B, K and K' are Tate, the same is true for their tensor product B'=B⊗_K K', in fact with pseudouniformizer given by the image of the pseudouniformizer ϖ∈ K, and similarly for the other two cases. Now, putting this together with the construction from before, we get canonical topologies on both _L and _L, and the statement of <Ref> also holds in this context, so that the topology is obtained from the ϖ-adic topology on certain open submodules _L,0=_0⊗_B_0 B_L,0 and _L,0=_0⊗_B_0B̂_L,0.
Finally, we also get a topology on tensor powers like B_L⊗_B B_L (as all components are Tate) and their modules like ⊗_B B_L ⊗_B B_L, with topology given by the ϖ-adic topology on _0⊗_B_0 B_L,0⊗_B_0 B_L,0 etc. The most important aspect of this construction is the following: Both B_L⊂B̂_L and _L ⊂_L are dense, and similarly for their tensor powers. This is all that needs to be said about the topologies involved; let's turn to Galois actions now: We get the following multiplicative actions of groups: * For any finite subextension K⊆ K'⊂ L, B' B⊗_K K', there is a continuous action of (K'/K) on _K'⊗_B B'=(⊗_B B) ⊗_K K' given by (K'/K)× ((⊗_B B) ⊗_K K')∋(σ,γ⊗ c)↦γ⊗σ(c). * By essentially the same construction, Γ=(L/K) acts continuously on _L. * By continuously extending this, we also get an action of Γ on _L. We write each of these as (σ, a)↦σ(a). The important aspect of this construction is that for σ∈Γ, σ^*_K'→_K', a↦σ(a), is an isomorphism of non-commutative B'-algebras. In this way, the action constructed above allows us to substitute “morphisms of non-commutative B'-algebras σ^* _K'→_K'" for “endomorphisms of _K'", by composing with the isomorphism from above, and doing so without loss of information. We now have everything needed to define the central objects of interest for this subsection. All tensor powers and completed tensor powers are taken over B. Consider the following non-commutative B-algebras: (1a) For a finite subextension K⊆ K'⊂ L, set B'=B⊗_K K' and let C_K' be the algebra of maps of sets (K'/K)→⊗_B B', with addition and multiplication defined pointwise. (1b) D_K'⊗_B B'^⊗ 2 with K' and B' as before. (2a) C_L consisting of continuous maps Γ=(L/K)→⊗_B B_L=_L, where Γ carries the standard (profinite) topology and _L the discrete topology (not the canonical one!). (2b) D_L⊗_B B_L^⊗ 2. (3a) Ĉ_L consisting of continuous maps of sets Γ=(L/K)→⊗_B B̂_L, where now the right side carries the canonical topology. (3b) D̂_L⊗_B B̂_L^⊗̂2. These have the following extra structures: * A canonical topology, in the case of the C's via the compact open topology, or, equivalently, as the codomain is metrizable and the domain is compact, with the topology induced from the supremum metric. [Note that in the definition of C_L as a topological B-algebra, the B_L-module _L appears both with the discrete (for the underlying B-algebra) and the canonical (for the topological structure) topology.] * Subgroups C_K'^⊂ C_K', D_K'^⊂ D_K' etc. of the groups of units in the respective non-commutative rings C_K', D_K', etc. given by those elements that satisfy a certain cocycle condition: * For D_K', this is meant as follows: For β∈ D_K'=⊗_B B'^⊗ 2 and i≠ j∈{1,2,3}, let p_ij^*β∈⊗_B B'^⊗ 3 be the element inserting a 1 in the kth copy of B', with k the unique element in {1,2,3}-{i,j}. Then β lies in D_K'^ if and only if it is invertible in D_K' and p_23^*β· p_12^*β = p_13^*β. If =_B(P) for a finite projective B-module P, the elements of D_K'^ are those β∈ D_K'^× which, when viewed as an isomorphism of B'^⊗ 2-modules (P⊗_B B') ⊗_B B' ≅ P⊗_B B'^⊗ 2→ P⊗_B B'^⊗ 2≅ B'⊗_B (P⊗_B B'), satisfy the cocycle condition on B'^⊗3 in the usual sense of descent theory. * Similarly for D_L and D̂_L with B'^⊗ 3 replaced by B_L^⊗ 3 and B̂_L^⊗̂3. * For the C's, a map ασ↦α(σ) is said to satisfy the cocycle condition if it takes values in units and it is a 1-cocycle in the cohomological sense, i.e. if α(στ)=σ(α(τ))α(σ).
Here α(σ) is specified by the function α(K'/K)→_K', while σ(c) for any element c∈_K', e.g. for c=α(σ), comes from the action defined in <Ref>. * The underlying sets of the subgroups just defined are themselves the objects of certain groupoids: * For two elements β,β'∈ D_K', define _D_K'^(β,β'){d∈_K'^×| p_1^*(d)·β=β'· p_2^*(d)}, where p_i^*⊗_B B'→⊗_B B'^⊗ 2 is the obvious map. If again =_B(P), then the d∈(_B(P)⊗_B B')^×=_B'(P⊗_B B') satisfying the above condition are exactly the isomorphisms of descent data. * Similarly for D_L and D̂_L. * For two elements α,α'∈ C_K'^ define _C_K'^(α,α'){c∈_K'^×|α(σ)=c^-1α'(σ) σ(c) for all σ∈(K'/K)}. Thus, α and α' are isomorphic if and only if they are cohomologous as cocycles. * Similarly for C_L and Ĉ_L. This brings us to the main theorem of this subsection: There are canonical isomorphisms (1) [-15pt] C_K'≅D_K', (2) C_L≅D_L, (3) Ĉ_L≅D̂_L, compatible with all the extra structures from above: That is, they are isomorphisms of non-commutative topological B-algebras, respecting the subsets given by elements satisfying the cocycle condition on either side and in fact they induce isomorphisms of groupoids on them. * In the language of group cohomology, one has C_K' =C^1((K'/K), _K') C^_K' =Z^1((K'/K), _K'^×) C^_K'/≅ = H^1((K'/K),_K'^×) C_L =C^1,(Γ, _L^) C_L̂ =C^1,(Γ, _L^), and similarly for the for the four remaining structures. * If once again =_B(P), so we can view the elements of the D's as descent data, then all descent data in D_K'^ and D_L^ are effective: for D_K' this is just usual étale descent, while it follows for D_L since every β∈ D_L is already in the image of some D_K'. On the other hand, for an element β∈D̂_L^ corresponding to some α∈Ĉ_L^, the following are equivalent: * The descent datum β is effective. * α is cohomologous to a cocycle vanishing on an open normal subgroup of Γ. * α is cohomologous to a cocycle lying in the image of C_L→Ĉ_L. Indeed, (b)⇒(a) follows again from usual étale descent, (c)⇒(b) follows as the elements in C_L automatically vanish on an open normal subgroup by continuity. Finally, we get (a)⇒(c) because the functor from finite-projective B-modules to descent data as in D̂_L factors over the category of descent data as in D_L. (1) is mostly classical, see e.g. [0CDQ]Stacks. We reproduce most of it for the sake of clarity. The normal basis theorem produces an isomorphism of K-algebras K'⊗_K K' ≅∏_σ∈(K'/K) K', a⊗ b ↦ (aσ(b))_σ∈(K'/K). As tensoring commutes with finite products, this induces an isomorphism of non-commutative B-algebras D_K' =⊗_B B'^⊗ 2≅∏_σ∈(K'/K)⊗_B B' =C_K'. In fact, it is even an isomorphism of topological non-commutative B-algebras. Indeed, let B'_0=B_0⊗__K_K' for _K' the integral closure of _K in K'. Then B'_0 is a ring of definition of B', stable under the action of (K'/K), and thus the isomorphism above restricts to an isomorphism of non-commutative B_0-algebras C_K',0_0 ⊗_B_0 B_0'^⊗ 2≅∏_σ∈(K'/K)_0 ⊗_B_0 B_0' D_K',0 Now these are open sub-(topological non-commutative B_0-algebras) of C_K' and D_K', and in fact ϖ^n C_K',0=ϖ^nD_K',0, with equality because of the B_0-linearity, form a neighbourhood basis on either side. This shows equality of the topologies. There is also an isomorphism of K-algebras K'⊗_K K' ⊗_K K' ≅∏_(σ,τ)∈(K'/K)^2 K', a⊗ b⊗ c ↦ (aσ(b)σ(τ(c)))_σ,τ∈(K'/K), resulting in an isomorphism of non-commutative B-algebras ⊗_B B'^⊗ 3≅∏_(σ,τ)∈(K'/K)^2⊗_B B'. 
In [0CDQ]Stacks, they use this to prove the equivalence of the cocycle conditions on either side, which we will not reproduce here.Finally, let's take a look at the groupoid structure. For α,α'∈ C_K'^ corresponding to β,β'∈ D_K'^, we have by definition _C_K'^(α,α'){c∈_K'^×|α(σ)=c^-1α'(σ) σ(c) for all σ∈(K'/K)} and _D_K'^(β,β'){d∈_K'^×| p_1^*(d)·β=β'· p_2^*(d)}. As composition on either side is simply given by multiplication inside _K'^×, it suffices to show that the condition imposed on c agrees with the one imposed on d. This is clear from the fact that the compositions ⊗_B B'⊗_B B'^⊗ 2∏_γ∈Γ B' are given by d↦∏_γ∈Γ d, resp. d↦∏_γ∈Γγ(d) for i=1 resp. i=2. (1') This proves (1), but we want to state two more precise variants of the above properties that will be useful in the proof of (3). Firstly, the two functions ϵ_1 C_K' →⊗_B B'^⊗ 3 α ↦ p_23^*α· p_12^*α - p_13^*α. ϵ_2 D_K' →⊗_B B'^⊗ 3 β ↦σ(α(τ))α(σ)-α(στ) are equal, or more precisely, commute with the isomorphism C_K'≅ D_K'. The fact that the preimages of 0 for both agree just amounts to the compatibility with the cocycle condition. That the functions themselvers agree can be understood as the fact that the isomorphism C_K'≅ D_K' does not only preserve the cocycle condition, it also preserves the deviation from it. The second remark is of a very similar nature, but for morphisms. Fix elements α,α'∈ C_K' corresponding to β,β'∈ D_K'. Then the functions ϵ_3_K' → C_K' c ↦ cα(σ)-α'(σ) σ(c) ϵ_4_K' → D_K' d ↦ p_1^*(d)·β-β'· p_2^*(d) are equal, i.e. commute with the isomorphism between the respective codomains. This can be seen as the isomorphism of groupoids also pereserving how far any given object in _K' is from being a morphism. (2) Choose a compatible system K⊂ K_1⊂ K_2⊂…⊂ L of finite subextensions such that ∪_n=1^∞ K_n=L and let B_n B⊗_K K_n. Then we have isomorphisms of non-commutative B-algebras D_L =⊗_B (_n B_n)^⊗ 2 =⊗_B _n (B_n^⊗ 2) =_n (⊗_B B_n^⊗ 2) =_n∏_σ∈(K_n/K)⊗_B B_n =C_L. Indeed, in the second equality we just pull the colimit out of the two factors of B_L⊗_B B_L, which is possible since every element of the tensor product is a finite sum of elementary tensors; similarly, for the third equality, we use that tensoring with commutes with filtered colimits. The fourth inequality is taken straight from (1), while the last one uses that every continuous map from Γ to a discrete topological space factors over a finite quotient of Γ.In fact, the colimit construction also takes care of the cocycle condition and the groupoid structure, but for the topology, one has to be a bit careful: Indeed, if we were to give C_L and D_L simply the colimit topology, one could take for increasing K_n smaller and smaller open neighbourhoods of 0 (each containing the previous one of course). The result will be an open neighbourhood in the colimit topology by definition, but not in C_L. So instead of considering the colimit topology, we will verify that this is a homeomorphism by hand, which can be done just the same way as in the proof of (1). Indeed, define as above B_L,0=B_L⊗__K_L for _L the integral closure of _K in L. Then again B_L,0 is a ring of definition of B_L, stable under the action of Γ, and we get an isomorphism of non-commutative B_0-algebras C_L,0_0⊗_B_0 B_L,0^⊗ 2≅_n ∏_(K_n/K)_0⊗_B_0 B_n,0 D_L,0. Now ϖ^n C_L,0⊂ C_L and ϖ^n D_L,0⊂ D_L are neighbourhood bases of zero on either side, and they agree by B_0-linearity, thus showing the equivalence of topologies. 
(3) the isomorphism Ĉ_L≅D̂_L of topological non-commutative B-algebras follows from the isomorphism C_L≅ D_L by passing to completions. Indeed, it is enough to show that Ĉ_L is complete as a topological group and that C_L⊂Ĉ_L is dense, and similarly for the D_L⊂D̂_L. Completeness is clear and denseness is <Ref>. Note the subtlety that we must approximate a map Γ→_L which is continuous for the analytic topology on the right by maps Γ→_L which are continuous for the discrete topology on the right. Due to the particular nature of the topology on Γ, this can be archieved by an ϵ/2-argument. To show that the isomorphism Ĉ_L≅D̂_L is compatible with the subgroups of the unit group Ĉ_L^ and D̂_L^, consider an element α∈Ĉ_L^× and approximate it with a sequence α_n∈ C_L, each element defined over B_n=B⊗_K K_n for some finite subextension K⊆ K_n⊂ L. These correspond under the isomorphism C_L≅ D_L to elements β_n∈ D_L which converge to the β∈D̂_L corresponding to α under the isomorphism Ĉ_L≅D̂_L. Define maps ϵ_1Ĉ_L→⊗_B B̂_L^⊗ 3, ϵ_2D̂_L→⊗_B B̂_L^⊗ 3 as in (1'). Now: α satisfies the cocycle condition ϵ_1(α)=0 lim_n→∞ϵ_1(α_n)=0 lim_n→∞ϵ_2(β_n)=0 ϵ_2(β)=0 β satisfies the cocycle condition, with the middle equivalence coming from the observation in (1'). By entirely the same argument, one sees that some c∈_L̂ is a morphism in C_L̂^ if and only if it is a morphism in D_L̂^. For citability, let's point out what is acutally important in the above theorem. Let K, L, B, B̂_L and Γ be as specified in the beginning of the subsection. Let P be a finite projective B-module. Then there is an equivalence of groupoids between: * Descent data over B̂_L of the B̂_L⊗̂_B B̂_L-module P ⊗_BB̂_L⊗̂_B B̂_L. * Continuous cocycles Γ→_B̂_L(P⊗_B B̂_L) with morphisms between two cocycles α,α' given by elements c∈_B̂_L(P⊗_B B̂_L) such that α(σ)=c^-1α'(σ) σ(c) for all σ∈Γ (i.e., c witnesses that α and α' are cohomologous). Under this equivalence, a descent datum is effective if and only if its corresponding continuous cocycle vanishes on an open normal subgroup of Γ. The equivalence follows from the isomorphism of groupoids Ĉ_L^≅D̂_L^ in <Ref> (3) applied to =_B(P). The last statement is <Ref> 2. a) b) §.§ A Descent result via Sen operators Heavily inspired by the proof of [Theorem 2.3.5]PR, we want to use the results of Sen and the translations proved in the last subsection to prove <Ref>. Roughly speaking, it tells us when effectivity of a descent datum of vector bundles along certain pro-étale covers of adic spaces can be tested both generically and pointwise. Fix: * K a discretely valued p-adic field, * K_∞/K an infinite completely ramified Galois extension, * Γ(K_∞/K) the Galois group, * K̂_∞ the completion, * (B,B^+) a reduced Tate Huber pair over (K,_K), * B_∞ B⊗_K K_∞, B^+_∞ B^+⊗__K,_K_∞. These are non-complete topological ring. * (B̂_∞,B̂^+_∞) (B,B^+)⊗̂_(K,_K) (K̂_∞,_K_∞) the completed version, * M̂_∞ a finite-projective B̂_∞-module, * A completed descent datum of M̂_∞ over B, i.e., an isomorphism β between the two pullbacks of M̂_∞ to B̂_∞⊗̂_BB̂_∞ satisfying a cocycle condition over B̂_∞⊗̂_BB̂_∞⊗̂_B B̂_∞. Assume there exists an open dense subset U⊆(B,B^+) such that for each x∈ U with residue field κ(x), the κ̂(x)^_∞=κ(x)⊗̂_B B̂_∞-module[As in the previous subsection, we don't need to complete our tensor products when the module is finitely presented.] M̂_∞⊗_B̂_∞κ̂(x)^_∞, with descend isomorphism β_x obtained from the one over B, descends to a κ(x)-module M_x. Then M̂_∞ with its descent datum descends to a B-module M. 
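As an elementary, purely illustrative aside (not part of the argument), the 1-cocycle identity α(στ)=σ(α(τ))α(σ) and the coboundary condition α(σ)=c^-1σ(c) defining morphisms above can be checked numerically in the simplest possible toy case: the finite Galois group of ℂ/ℝ acting on GL_1(ℂ). The group, module and numbers below are chosen only for illustration; the snippet is a minimal Python sketch.

import cmath
from itertools import product

# Galois group of C/R: 'e' = identity, 'c' = complex conjugation.
MUL = {('e', 'e'): 'e', ('e', 'c'): 'c', ('c', 'e'): 'c', ('c', 'c'): 'e'}
def act(g, z):            # Galois action on GL_1(C) = C^*
    return z if g == 'e' else z.conjugate()

theta = 0.7               # any angle; |alpha('c')| = 1 is forced by the cocycle identity
alpha = {'e': 1 + 0j, 'c': cmath.exp(1j * theta)}   # a 1-cocycle G -> C^*

# cocycle identity: alpha(sigma*tau) = sigma(alpha(tau)) * alpha(sigma)
for s, t in product('ec', repeat=2):
    assert abs(alpha[MUL[s, t]] - act(s, alpha[t]) * alpha[s]) < 1e-12

# Hilbert 90 in this toy case: alpha is a coboundary, alpha(sigma) = c^{-1} * sigma(c)
c = cmath.exp(-1j * theta / 2)
for s in 'ec':
    assert abs(alpha[s] - act(s, c) / c) < 1e-12
print("cocycle and coboundary identities verified")

In this rank-one, finite toy case every unitary cocycle is a coboundary, loosely mirroring the effectivity statements above for cocycles that factor through a finite quotient.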
* Beware that the descent datum is still very much given over all of (B,B^+), it is just effectivity which we can tested pointwise and generically. * Also note that if B is sousperfectoid and B̂_∞ is perfectoid (which is the situation where we will apply this result), then by <Ref> (1) the functor from finite projective B-modules to descent data on B_∞ is fully-faithful. For more general B and B̂_∞ this is probably false. * Full-faithfulness of the pullback functor for κ(x) in place of B would also imply that for every x the isomorphism of descent data (M⊗_B κ(x))⊗_κ(x)κ̂(x)_∞ M_x⊗_κ(x)κ̂(x)_∞ descends to an isomorphism of κ(x)-vector spaces M⊗_B κ(x) M_x, so that our big descended module M would specialize to all the M_x. However, this is neither clear from our proof nor will it be relevant for us. For the purposes of the Theorem, the pointwise descent is really used as a property, not as a datum. We can assume that there is always a “model" of M_∞ over some finite subextension, by which we mean the following: There is a finite subextension K⊂ K'⊂ L and a finite projective B'=B⊗_K K'-module P such that M̂_∞≅ P⊗_B'B̂_∞. Note that the isomorphism M̂_∞≅ P⊗_B'B̂_∞ does not have anything to do with our descent isomorphism β, so it is still far from solving the descent problem. As a first step, we want to find a finite projective B_∞-module P_∞ whose completion (in other words, its base change to B̂_∞) is isomorphic to M̂_∞. [Corollary 2.1.22 (c)]BC22 gives us such a P_∞, under the condition that B_∞ contains a non-unital open subring B_∞' such that: * B_∞' is henselian as a non-unital ring. This property is discussed in [2.1.1]BC22, but a sufficient condition for a non-unital ring to be henselian is if it can be realized as the ideal of a henselian pair. * The subspace topology on B_∞' is linear, i.e. admits a neighbourhood basis of 0 consisting of ideals in B_∞'. We will show that these properties are satisfied for B'_∞ϖ B_∞,0, where ϖ∈ B is a pseudouniformizer (it will then automatically be a pseudouniformizer for B_∞) and B_∞,0⊂ B_∞ is a ring of definition, which we assume to be obtained by base change from a ring of definition B_0 of B. Now ϖ B_∞,0 clearly satisfies the second property above. For the henselian property, note that (B_0,ϖ B_0) is a henselian pair since B_0 is ϖ-adically complete. But B_0→ B_∞,0 is the base change of the integral morphism _K→_K_∞ and hence and integral morphism itself, and henselian pairs get preserved by integral morphisms, so (B_∞,0,ϖ B_∞,0) is a henselian pair and thus ϖ B_∞,0 is henselian in the above sense.This proves the existence of P_∞. Going further, we have an isomorphism B_∞=_K⊆ K_n⊂ L B⊗_K K_n, with the colimit running over all finite subextensions. This translates into a filtered colimit of the (equivalence classes of) finite projective modules, with the transition maps given by base change: As each finite projective B_∞-module can be described by finitely many generators and relations, there must be some B_n B⊗_K K_n where all or these relations can be defined. Thus we find a B_n and a finite projective B_n-module P which after base change to B_∞ becomes isomorphic to P_∞, and after completing becomes isomorphic to M̂_∞, as desired. Note that in order to descend M̂_∞ down to B it is actually enough just to descend it to B_n B⊗_K K_n for some finite subextension K⊆ K_n⊂ K_∞: Once we have descended our M̂_∞ with its descent datum down to B_n, we can use étale descent to descend it all the way down to B. 
Combining this with <Ref>, we can from now on replace K by K_n and B by B_n. This allows us to assume that M̂_∞ is actually obtained as a completed base change of a finite projective B-modules P. In §2.3 Sen, Sen developed a theory to test triviality of a cocycle on open normal subgroups by a vanishing of a certain endomorphism, in fact in much more generality than is needed here. Let's briefly sketch his construction. Let αΓ→_B̂_∞(M̂_∞) be a continuous cocycle. One then finds some finite subextension K⊆ K'⊂ K_∞ and a continuous cocycle ρ(K_∞/K')→_B(P)⊗_B K' such that the composition (K_∞/K')_B(P)⊗_B K'↪_B̂_∞(M̂_∞) is cohomologous to α|_(K_∞/K'), i.e. ρ(σ)=m^-1α(σ) σ(m) for some m∈_B̂_∞(M̂_∞) and all σ∈(K_∞/K'). One then defines an operator ϕϕ_αlim_(K_∞/K')∋σ→ 1m logρ(m) m^-1/logχ(σ), where log is the p-adic logarithm and χ(σ)(K_∞/K)_p^* is the p-adic cyclotomic character. (Sen) This construction enjoys the following properties: * ϕ exists and is independent of the choices made. * ϕ is compatible with base change in B. * ϕ=0 if and only if α is cohomologous to the trivial cocycle on some open normal subgroup of Γ. The first and third property are proven by Sen in Sen, the second one is clear from the construction. (of <Ref>): Let α be the cocycle associated to our descent isomorphism β via <Ref>, and consider its Sen oparator ϕ_α. By our assumption, we know that in any x∈ U, the base changed datum (M̂_∞⊗_B κ̂(x)_∞,β_x) descends to some M_x. But this effectivity implies that the cocycle α is trivial on an open normal subgroup by the second part of <Ref> and hence by <Ref>, ϕ_α_x=(ϕ_α)_x vanishes. So ϕ_α is an endomorphism of a finite projective module vanishing pointwise on an open dense subset of the adic space associated to a reduced algebra, and it thus vanishes identically. Another application of <Ref> now implies that α is cohomologous to zero on an open normal subgroup of Γ and thus the reverse direction of the second part of <Ref> implies that descent is effective. §.§ The equivalence away from p=0 The goal of this subsection is to use the Sen formalism established above to prove the following proposition, which can be regarded as a rational version of our main theorem under extra assumptions on A^+[This extra assumption guarantees that p gets inverted in a transcendental way in (A^+)×̇(_p). Dropping this assumption requires some subtle reflections on meromorphicity conditions as in GI23; in Ans22, the situation is somewhat simpler as only single points (k) are considered there.]: Let A^+ be an integral perfectoid topological ring with pseudouniformizer ϖ such that (A,A^+)(A^+[1/ϖ],A^+) is a perfectoid Huber pair. Then the functor vector bundles on (A^+)×̇(_p) [r] rational -bundles on (A^+), is an equivalence of categories. Full-faithfulness follows formally from full-faithfulness of the functors from vector bundles to v-bundles ((A^+)×̇(_p)=(A,A^+) is sousperfectoid) and from -bundles to v-bundles, <Ref> (1) and (2), respectively. For essential surjectivity, we use our Sen theoretic formalism. (A,A^+) has an analytic cover U^m→(A,A^+) where U^m(B^m,B^m+)(W(R^+)⟨[ϖ^♭]/p^m⟩[1/p],W(R^+)⟨[ϖ^♭]/p^m⟩). In the notation of SW20, U^m=_[1/m,∞]. 
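Before the proof continues, a brief numerical aside on the Sen operator just defined (again purely illustrative, not part of the argument): in a rank-one toy case where the cocycle is simply the k-th power of the cyclotomic character and m may be taken to be 1, the defining ratio reduces to ϕ = log ρ(σ)/log χ(σ) = k. The Python sketch below checks this p-adically with truncated p-adic logarithms in exact rational arithmetic; the choices p, k and χ(σ)=1+p are arbitrary illustrative values.

from fractions import Fraction

def plog(u, N=25):
    # truncated p-adic logarithm of a 1-unit u = 1 + x: sum_{n=1}^{N} (-1)^(n+1) x^n / n,
    # evaluated as an exact rational (meaningful p-adically, not as a real number)
    x = Fraction(u) - 1
    return sum(Fraction((-1) ** (n + 1), n) * x ** n for n in range(1, N + 1))

def vp(q, p):
    # p-adic valuation of a rational q (infinity for q = 0)
    if q == 0:
        return float("inf")
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p, k = 5, 3
chi = 1 + p            # cyclotomic character of a topological generator (illustrative 1-unit)
rho = chi ** k         # rank-one "cocycle" rho(sigma) = chi(sigma)^k

phi = plog(rho) / plog(chi)               # Sen-type ratio log(rho(sigma)) / log(chi(sigma))
print("v_p(phi - k) =", vp(phi - k, p))   # large positive valuation: phi equals k p-adically

In this toy the operator simply reads off the exponent k; in the genuine setting it is the obstruction used in the argument above.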
In turn, every U^m has a pro-étale cover by an affinoid perfectoid U_∞^m=(B̂^m_∞,B̂^m+_∞)→ U^m with B̂_∞^m+ B^m+⊗̂__p_∞, B̂_∞^mB̂_∞^m+[1/p], where as usual _∞=_p[p^1/p^∞]^∧ p.By full-faithfulness of the functor to v-bundles, <Ref> (2), it is enough to show that if is a v-bundle that lies in the essential image of the functor from -bundles, then it also lies in the essential image of the functor from vector bundles. Now we can evaluate at the affinoid perfectoid U_∞^m to get an actual vector bundle M̂_∞=|_U_∞^m on U_∞^m with (completed) descent information over U^m, in other words: A finite projective B̂_∞^m-module, also denoted M̂_∞ with (completed) descent information over B^m. By <Ref>, it is enough to show the descent is effective on the open dense subset U^m- {[ϖ^♭]=0}. But as lies in the essential image of the functor from rational -bundles, we can simply evaluate it at (A,A^+) to get an actual vector bundle on (A,A^+), and (A,A^+)∩ U^m=U^m-{[ϖ^♭]=0}, so M̂_∞ descends to a finite projective B-modules M. The v-bundle associated to M is of course still |_U^m, and using full-faithfulness of the functor from vector bundles to v-bundles for sousperfectoids again, they glue for varying m to a single vector bundle over (A,A^+) whose associated v-bundle is still . Note that we have only really used that we can test effectivity of the descent generically; on the other hand, in PR, it was only used that we can test effectivity pointwise. §.§ Finishing the proof Let's finally formulate and proof our main theorem: Let R^+ be any p-complete ring, carrying any topology making (R^+,R^+) a Huber pair (complete and living over (_p,_p) by convention). Then there is a functor -bundles on R^+ →integral -bundles on (R^+) defined as follows: For any map from a representable (B,B^+)→(R^+), we get an untilt (B^♯,B^+♯) with a map B^+♯→ R^+. As B^+♯ itself is integral perfectoid, our -bundle defines a vector bundle on (B^+♯)=(B^+). Base changing this along the map of adic spaces (B,B^+)→((B^+)) gives us a vector bundle on the former, which, upon varying (B,B^+), is exactly what we need to define a -bundle on (R^+). Let R^+ be a complete topologocial ring carrying the Π-adic topology for some Π∈ R^+ dividing p. Then the functor -bundles on R^+ →integral -bundles on (R^+) constructed above is an exact tensor equivalence. In particular, if R^+ is integral perfectoid, vector bundles on (R^+) are in equivalence to integral -bundles on (R^+) by <Ref>. Since the left side does not depend on the choice of topology on R^+, i.e., the choice of Π, neither does the right side. This came as quite a surprise to me, and frustratingly I haven't been able to come up with a more direct way to see this, even in simple cases. Because of the exactness and the equivalence of the two definitions in <Ref> resp. <Ref>, we will get the following for free: For any affine flat group scheme /_p, the construction above defines an equivalence of groupoids --bundles on R^+ →integral --bundles on (R^+) First, we need a result to reduce the situation to products of points. This is the refinement of [Lemma 2.2.3]CS21 alluded to earlier. Let R^+ be as in <Ref>. Then there is a topological R^+-algebra R^+→ A^+ such that: * A^+ is a product of valuation rings V_i with algebraically closed field of fractions. * A^+ is a complete topological ring carrying the (ϖ_i)_i∈ I-adic topology, where ϖ_i∈ V_i is some non-zero divisor dividing Π. In particular, A^+ is a Π-adically complete integral perfectoid. 
* R^+→ A^+ is a Π-complete arc cover of rings in the sense of <Ref>. * (A^+)→(R^+) is a v-cover of v-sheaves. In fact, for the obvious definition of Π-complete v-topology on Π-complete rings, the proof even shows that R^+→ A^+ is a Π-complete v-cover, but we only work with the Π-complete arc topology on the schematic side as this seems to be the most natural choice. Consider the set[The argument for finding the smallest valuation ring in each equivalence class just below shows that each such class has a representative which is κmax(ω, |R^+|)-small, so the equivalence classes indeed form a set, not a proper class.] I of equivalence classes of maps R^+→ V with V a Π-complete valuation ring. In each of these equivalence classes we can find a smallest valuation ring by taking any representative v R^+→ V and considering (⟨(v)⟩)∩ V⊂(V), with ⟨(v)⟩ the subring of V generated by the image of v.For every i∈ I let Ṽ_i be the Π-completion of the absolute integral closure of the associated smallest valuation ring as above. If Π gets mapped to a non-zero element in Ṽ_i, we can set V_i=Ṽ_i and ϖ_i∈ V_i the image of Π. If Π gets mapped to zero[as Π|p, this can only happen if p=0 in V_i], we cannot garuantee (2), so we define V_i as the t-adic completion of the absolute integral closure of Ṽ_i+t(Ṽ_i)[[t]]⊂(Ṽ_i)[[t]] and set ϖ_i t. The essential aspect of this construction is that it introduces a new element t which has smaller valuation than any element in Ṽ_i. Indeed, it can be easily read off from the construction that t is divided by any non-zero element of Ṽ_i. The result is a valuation ring with all the properties we demand and such that V_i/(ϖ_i)=Ṽ_i, so we get a splitting (*) ∏_i Ṽ_i →∏_i V_I →∏_i Ṽ_i, continuous for the Π-adic resp. (ϖ_i)_i-adic topology.So far we have an R^+-algebra A^+=∏_i V_i satisfying properties (1) and (2) (for the in particular statement in (2), note that we can replace ϖ_i by a p'th power root by absolute integral closedness to get an actual pseudouniformizer without changing the topology). For (3), note that every map R^+→ V to a Π-complete valuation ring V (so in particular to one of rank one) factors over some Ṽ_i, at least after exchanging V for its Π-completed absolute integral closure, which is an extension of valuation rings. Via projection to the i'th component we get a factorization over R^+→∏_i Ṽ_i, and precomposing this with our projection (*) we get a lift to ∏_i V_i, showing that R^+→ A^+ is indeed a Π-complete arc cover.Finally, for (4) it suffices to show that for all maps to products of points in the adic spaces sense (R^+,R^+)→((∏_j∈ J C_j^+)[((ϖ_j)_j∈ J)^-1],∏_j∈ JC_j^+) (S,S^+) with C_j^+[1/ϖ_j] algebraically closed there exists a lift of maps of Huber pairs ( A^+, A^+) [rd, dotted] (R^+,R^+) [r] [u] (S,S^+). By what we have seen in (3), we can lift every map R^+→ C^+_j to a map of rings ∏_i Ṽ_i→ C^+_j, producing a map of pairs of rings (∏_i Ṽ_i,∏_i Ṽ_i)→ (S,S^+) and this map will be continuous because R^+→ S^+ is: Indeed, for continuity we need that (ϖ_j)_j divides a power of the image of Π under the composition R^+→ A^+→ S^+, which follows from the fact that (ϖ_j)_j divides a power of the image of Π under R^+→ S^+ and commutativity of the diagram. Finally, precomposing with the section in (*) once again gives our desired lift. The following was proved by Gleason, building on a result by Kedlaya: ([Proposition 2.1.17]GleaPhD) Let A^+ be as in the previous Lemma. Then every vector bundle on (A,A^+) is free. 
In particular, as Γ((A,A^+),_(A,A^+))=(A^+), the base change functor from vector bundles on ((A^+)) to those on (A,A^+) is a (non-exact) equivalence of categories. Now let A^+ be any integral perfectoid, equipped with and complete with respect to the ϖ-adic topology for some non-zero divisor ϖ∈ A^+ dividing p. We want to construct a functor from the category of integral -bundles on (A^+) to the category of vector bundles on the adic space (A,A^+), where of course A A^+[1/ϖ]. Let be such an integral -bundle. We get * a rational -bundles _(0,∞] on (A^+), * an integral -bundle _[0,∞) on (A,A^+), * an isomorphism of rational -bundles on (A,A^+) between the relevant base changes of _(0,∞] and _[0,∞) by, respectively, <Ref> (4), pullback of bundles along maps of v-sheaves and compatibility of those two constructions, mentioned in the end of <Ref>. By separately passing all these through the equivalences of categories in <Ref> (3) and <Ref>, we get: * A vector bundle on (A,A^+), * A vector bundle on (A,A^+), * An isomorphism between the two restrictions to (A,A^+), which by the usual formalism of glueing vector bundles on adic spaces form a vector bundle on (A,A^+). Let A^+ be again as in <Ref>. Then the functor just constructed is an equivalence of categories. Let's first do essential surjectivity: By <Ref>, every vector bundle on (A,A^+) is free, so it is isomorphic to the image of the free integral -bundle of the same rank.Full-faithfullness is a bit more subtle and requires some preparation. Let (A^+,A^+)→ (S,S^+) be a map to a perfectoid Huber pair such that (S,S^+)→(A^+) is a v-cover of v-sheaves, possible by <Ref>. Consider the following cartesian diagram, where ϖ_A=(ϖ_i)_i∈ I in the notation of <Ref>: Ỹ(S,S^+)-V(p,[ϖ_A])[r, hook] [d] (S,S^+)X̃ [d] Y((A^+))-V(p,[ϖ_A]) [r, hook] ((A^+))X. Note that the diamantine version of the right map is the v-cover (S,S^+)×(_p)→(A^+)×(_p). Now the diamantine version of the whole diagram is still cartesian, so in particular, Ỹ^♢→ Y^♢ is also a v-cover.Now let _1,_2 be two integral -bundles on (A^+). Let us carefully analyze what kind of structures these _i give us on the different corners of the diagram. Firstly, by viewing the _i as v-sheaves on X and pulling back, we get v-sheaves on all four corners. Also note that the adic spaces Y,X̃ and Ỹ are sousperfectoid (Ỹ is because it is an open subspace of the sousperfectoid X̃), so the functors from vector bundles to v-bundles are fully-faithful by <Ref> (1). I claim that the (v-bundles associated to the) _i's, pulled back to either of these three corners actually lie in the essential image of the functor from vector bundles. In the case of Y, we can get the preimage vector bundle from <Ref>. In the case of X̃, the _i pull back to integral -bundles on X̃, so the claim holds by <Ref> (3). Finally, for Ỹ, we can pull back either of the vector bundles we just produced on X̃ or Y to get a vector bundle on Ỹ; By full-faithfullness of the functor from vector bundles to v-bundles on Ỹ, we get an isomorphism of vector bundles between them. This proves the claim for Ỹ. As there seems to be no risk of confusion at this point, we simply write these vector bundles as _i|_Y, _i|_X̃ and _i|_Ỹ.Now let α_1|_Y→_2|_Y be a morphism of vector bundles. 
We need to show that it extends to a unique morphism _1→_2 of integral bundles on (A^+)[Note that if the _i were already known to be obtained from (classical) vector bundles on (A^+) (which will follow from <Ref>) and we would ask for a unique extension of a map vector bundles, this would be a simple case of comparing global sections – in fact, this is exactly the in “particular" part of <Ref>. The fact that this extension remains unique in a more v-local setting is the enitre point of this Lemma.].Now, by <Ref> and our assumption on A^+, the _i|_Y are free. Hence the isomorphism α is given by a single matrix M over Γ(Y,_Y)=Γ(X,_X)=(A^+). We claim that the pullback α|_Ỹ_1|_Ỹ→_2|_Ỹ extends to a unique morphism β_1|_X̃→_2|_X̃; Let (U_j)_j⊂X̃ be an affinoid open cover trivializing both _i|_X̃. Now a candidate extension of α from U_j∩Ỹ to U_j is given by pulling the matrix M (whose coefficients live over X, remember) back to U_j⊆X̃. This clearly agrees on overlaps and extends α|_Ỹ; We want to show that it is in fact the only possible extension. Let ϖ_S be a pseudouniformizer of (S,S^+). By continuity and replacing ϖ_S by a p'th power root if necessary, we can arrange that ϖ_S|ϖ_A in S^+ (identifying ϖ_A with its image in S^+). This implies that we have inclusions of (pre-)adic spaces: _[0,1](S,S^+) =X̃⟨[ϖ_S^]/p⟩ ⊆X̃⟨[ϖ_A]/p⟩ ⊆X̃⟨p/[ϖ_A]⟩∪X̃⟨[ϖ_A]/p⟩=Ỹ ⊆X̃. Now the associated map of global sections of the composition of the inclusions is known to be injective as _[0,1](S,S^+) is a rational subset and both p and [ϖ_S] are non-zero divisors, and so this must also be true for the map of global sections associated to the last inclusion (note that the associated maps of rings point in the other direction). Intersecting the adic spaces with the open subspace U_j shows that Γ(U_j,_X̃)→Γ(U_j∩Ỹ,_Ỹ) is also injective and thus our extension of α|_Ỹ to X̃ is indeed unique.We now want to view this morphism _1|_X̃→_2|_X̃ as a morphism of integral -bundles on (S,S^+) and descend it down to X. For this, we need to show that the two morphisms _1|_X̃×_X X̃⇉_2|_X̃×_X X̃ obtained by pullback along either of the projection maps X̃×_X X̃→X̃ agree. This is however obvious as the morphism is defined by a matrix M which has coefficients in _X. (of <Ref>) By our <Ref> and the descent results <Ref> and <Ref>, we can immediately assume that R^+=A^+ is a product of points in the sense of the <Ref>, equipped with the ϖ-adic topology.Just for this proof, consider the category “_[0,∞]-bundles on (A,A^+)" consisting of triplets of the form (_(0,∞],_[0,∞),α) where * _(0,∞] is a rational -bundle on (A^+), * _[0,∞) is an integral -bundle on (A,A^+), * α is an isomorphism of rational -bundles on (A,A^+) between the relevant restrictions of _(0,∞] and _[0,∞). This is a fibre product of the three different categories of -bundles in the obvious way and was already seen implicitly in <Ref>. I have only made it explicit here since I think it makes it much more transparent what's going on.There is a commutative diagram of categories vector bundles on (A^+) [d] [r] integral -bundles on (A^+) [d] vector bundles on (A,A^+) [r] “_[0,∞]-bundles on (A,A^+)". The bottom arrow is an equivalence of categories by <Ref> and <Ref> (3) (which was already used in <Ref>), and the functor in <Ref> is the composition of the right functor with the inverse of the bottom one. This functor was shown to be an equivalence in <Ref>. The left arrow is an equivalence by the in particular part of <Ref>. 
Putting these together, we see that the top functor is an equivalence of categories.This leaves us with exactness, i.e., that a sequence of -bundles on A^+ is exact if and only if the associated sequence of -bundles is. By the exactness results of the first chapters, we can assume without loss of generality that our ring A^+ still has the form as in <Ref>. In this case by <Ref>, exactness of -bundles is the same as exactness of finite projective (A^+)-modules, which already implies the “only if" direction of the satement. For the if direction, assume our sequence of -modules is exact when viewed as -bundles, i.e., there is a cover by a representable (B,B^+)→((A^+)) such that the associated sequence of vector bundles on (B,B^+) is exact. We need to show that already the sequence of (A^+)-modules is exact. This is very similar to the proof of <Ref>: The map of underlying topological spaces of (B,B^+)=(_p)×̇(B,B^+)→(_p)×̇(A^+)=((A^+)) is surjective, so the exactness follows as ((A^+)) is reduced. §.§ Equivalence of Frobenius structures The functor in <Ref> extends to a functor perfect-prismatic F-crystals on R^+ →_n-Shtukas with fixed leg on (R^+). Indeed, the map (B,B^+)→((B^+)) from the previous construction commutes with Frobenii and sends the distinguished element d∈(B^+) to a distinguished element ξ in the global sections of (B,B^+), so we can pull back our Frobenius-linear isomorphism. We want to show that this is still an equivalence. In fact, let's state it directly for general groups : Let be any affine algebraic group over _p and let R^+ be a complete topologocial ring carrying the Π-adic topology for some Π∈ R^+ dividing p. Then the functor defined above induces an equivalence of groupoids perfect-prismatic -crystals on R^+ →-Shtukas with fixed leg on (R^+). As before, it suffices to show what we have an exact tensor equivalence for =_n. We have already showed the exact tensor equivalence of underlying bundles in <Ref>, so it suffices to show that we can descend the Frobenius linear morphisms. By our simultaneous v- and arc-covers from <Ref> in conjunction with the descent results <Ref> and <Ref>, it suffices to show this for rings A^+ as in <Ref>. In this case, our -bundle is known to be a free (A^+)-module, and so are all the |_(B,B^+) for maps from affinoid perfectoids (B,B^+)→(A^+), and we can even equip them with compatible bases (by pulling back the one on (A^+)), so our Shtuka structure is given by a compatible (with pullbacks) family of invertible matrices M_(B,B^+)∈_n(Γ((B,B^+),_(B,B^+))[1/ξ]), indexed by maps from not necessarily characteristic p affinoid perfectoids (B,B^+)→(A^+) (see <Ref> for why we allow mixed characteristic perfectoids).Thus what is left to show is that (1) every compatible family of M_(B,B^+) as above descends to a single matrix M_A^+ with coeffients in (A^+)[1/d] and (2) that the latter is invertible if the former is. As we can cover (A^+) by a single (B^_1,B_1^+) by <Ref>, we can multiply our sections with a single finite power of ξ to get a matrix with coefficients in Γ((B^_1,B_1^+),_(B^_1,B_1^+)). Using compatibility with pullbacks (for both the M_(B,B^+) and ξ=ξ_(B,B^+)), multiplication with the same power of ξ for any (B,B^+) produces sections M̃_(B,B^+)=ξ^nM_(B,B^+)∈ M_n× n(Γ((B,B^+),_(B,B^+))). 
But this is just a morphism of (free) -bundles over (A^+), so by the equivalence of categories <Ref>, it descends to a morphism of (A^+)-modules, again given in our basis as M̃_A^+∈ M_n× n((A^+)), which we can now divide in (A^+)[1/d] by d^n to get a morphism M_A^+=M̃_A^+/d^n which descends the M_(B,B^+). This shows (1). What is left to show is (2), i.e., that this descended morphism is an isomorphism in (A^+)[1/d] if the family of morphisms over Γ((B,B^+),_(B,B^+))[1/ξ] for varying (B,B^+) was. For this, it is enough to show that the map from (A^+)[1/d] to the compatible families of global sections of (B,B^+)[1/ξ] is injective, as this means that the inverse morphism M^-1_(B,B^+) at the Shtuka level (which we can of course descend just as well) is also an inverse of maps of [1/d]-modules. But indeed, already the map (A^+)→Γ((A,A^+),_(A,A^+)) for A=A^+[1/ϖ] is injective (remember that ϖ is a non-zero divisor), so the one to compatible families indexed over all (B,B^+) certainly is.
http://arxiv.org/abs/2307.01615v1
20230704100207
Testing Complex Singlet Scalar Cosmology at the Large Hadron Collider
[ "Wenxing Zhang", "Yizhou Cai", "Michael J. Ramsey-Musolf", "Lei Zhang" ]
hep-ph
[ "hep-ph", "hep-ex" ]
ACFI T23-03 ^1Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China ^2 Department of Physics, Nanjing University, 22 Hankou Road, Nanjing 210093, China ^3Amherst Center for Fundamental Interactions, Department of Physics, University of Massachusetts, Amherst, MA 01003, USA The Standard Model extended with a complex singlet scalar (cxSM) can admit a strong first order electroweak phase transition (SFOEWPT) as needed for electroweak baryogenesis and provide a dark matter (DM) candidate. The presence of both a DM candidate and a singlet-like scalar that mixes with the Standard Model Higgs boson leads to the possibility of a bb̅+MET final state in pp collisions. Focusing on this channel, we analyze the prospective reach at the Large Hadron Collider (LHC) for a heavy singlet-like scalar in regions of cxSM parameter space compatible with a SFOEWPT and DM phenomenology. We identify this parameter space while implementing current constraints from electroweak precision observables and Higgs boson property measurements, as well as those implied by LHC heavy resonance searches. Employing a proposed search strategy, we find that the heavy scalar and the DM candidate can be probed up to 1 TeV and 400 GeV at the 2σ level, respectively. Testing Complex Singlet Scalar Cosmology at the Large Hadron Collider Wenxing Zhang^1, Yizhou Cai^2, Michael J. Ramsey-Musolf^1,3, Lei Zhang^2 ============================================================================= § INTRODUCTION The origin of the cosmic baryon asymmetry is one of the long-standing puzzles in particle physics. Electroweak baryogenesis <cit.> provides a promising solution and can be tested at current collider experiments <cit.>. In general, a baryogenesis mechanism should meet Sakharov's three conditions <cit.>: * Baryon number violating interactions. * C and CP violation. * Departure from thermal equilibrium (or CPT violation). Baryon number violating processes appear in the Standard Model (SM) via the non-perturbative effects associated with sphaleron transitions <cit.>. In principle, the requisite CP violation also appears in the SM via the Cabibbo-Kobayashi-Maskawa matrix, though its strength is found to be insufficient to generate the observed matter-antimatter asymmetry. In the SM, a possible departure from thermodynamic equilibrium could occur via a first order electroweak phase transition (FOEWPT) at the electroweak temperature, T_EW∼ 140 GeV, that marks the onset of electroweak symmetry breaking <cit.>. To ensure preservation of any baryon asymmetry produced during this transition, the latter must be sufficiently strong. The occurrence of a FOEWPT requires the mass of the Higgs boson to lie below ∼ 70 GeV <cit.>, which is inconsistent with the observed value <cit.>. Therefore, electroweak baryogenesis can only be realised in extensions of the SM that accommodate a strongly first order electroweak phase transition (SFOEWPT). The most widely considered scenarios include the real singlet extensions (xSM) <cit.>, complex singlet extensions (cxSM) <cit.>, Higgs doublet extensions <cit.>, and supersymmetric extensions <cit.>. Among the distinctive signatures of scenarios in which a new scalar mixes with the Higgs boson is resonant di-Higgs production, where the heavy resonance is a mixed singlet-doublet state <cit.>. The possibility of probing the SFOEWPT-viable parameter space in the xSM has been studied extensively (for example, see <cit.> and references therein).
In the cxSM after electroweak symmetry-breaking, the model yields both a viable DM candidate (A) as well as two real neutral scalar h_1 and h_2 that are mixtures of the SM Higgs boson and the real part of the complex singlet. In this case, the cxSM provides more collider phenomenological signatures than xSM, such as the presence of missing transverse energy (MET) associated with pair production of A, in conjunction with decay products of one of the mixed doublet-singlet states, h_1,2. When the DM mass is below half that of the SM-like state h_1, resonant di-Higgs production may be the dominant underlying process. However, for heavier DM, there exist a variety of other subprocesses that play an important role. Thus, the SFOEWPT-viable cxSM admits a richer collider phenomenology than the xSM. In what follows, we analyze the bb̅+MET final state and find that it provides a powerful probe of the realization of the cxSM consistent with a SFOEWPT and DM phenomenology. We consider both the resonant di-Higgs portion of parameter space, wherein m_A<m_h_1/2, as well as the heavier m_A regime. Present experimental constraints on h_1 invisible decays render the bb̅+MET signal to be rather weak in the m_A<m_h_1/2 regime. Consequently, we focus on the heavier m_A region. We find that there exist promising prospects for cxSM discovery for DM and h_2 masses up to 400 GeV and 1 TeV, respectively. The discussion of our analysis is organized as follows. Section <ref> introduces the framework of cxSM. Section <ref> discusses the experimental constraints on the mixing angles. Section <ref> describes the requirements to realise the SFOEWPT together with the cold DM candidate. Section <ref> discusses the remaining parameter space allowed by the measurements of the DM relic density and the Higgs boson invisible decay. Section <ref> discusses the exclusion of the parameter space from the latest LHC experiments. In section <ref>, we discuss the Monte Carlo simulation of b-jets plus DM candidates in cxSM and propose a search strategy for the corresponding signals at HL-LHC. Section <ref> is the conclusion. § THE CXSM MODEL The cxSM extends the SM by introducing a complex SU(2) singlet scalar S that transforms under a global U(1) group as S → Se^iα. The DM candidate emerges through two ways: (a) spontaneous breaking of the global U(1) symmetry, yielding a massless Nambu-Goldstone boson; (b) inclusion of explicit, soft U(1) breaking terms in the potential, as needed to generate a DM mass. One of the two degrees of freedom in S behaves like the real singlet of the xSM, and could mix with the SM Higgs boson and potentially catalyze a SFOEWPT. The other one becomes the cold DM candidate. We consider a technically natural soft symmetry-breaking and minimal renormalizable cxSM model that do not generate additional soft symmetry-breaking terms through renormalization. The scalar potential at the tree-level is <cit.> V_0(H,S) =μ^2/2 (H^† H) + λ/4 (H^† H)^2 +δ_2/2 H^† H |S|^2 + b_2/2 |S|^2 + d_2/4 |S|^4 + a_1 S + b_1/4 S^2 +h.c.. The first two lines in Eq. (<ref>) are invariant under the U(1) transformation. The a_1 and b_1 terms in the third line break the U(1) symmetry explicitly. In general, a_1 or b_1 can be complex numbers. Under redefinition of S the quantity ϕ_S≡Arg(b_1 a^*2_1) is a rephasing invariant complex phase. However, to obtain a viable DM candidate, mixing between the real singlet and imaginary singlet should be avoided, which requires ϕ_S=0. 
Therefore we fix a_1 and b_1 to be real numbers in the following studies. Spontaneous symmetry-breaking (SSB) is implemented via S =1/√(2)(v_s+s+iA), H =( G^+ 1/√(2)(v_0+h+iG^0) ) , where v_s and v denote the vacuum expectation values, G^0,± are the usual Higgs doublet would-be Goldstone bosons, and s and A denote the real and imaginary parts of the fluctation around the singlet vacuum expectation value (vev). Based on the U(1) symmetry breaking schemes, the model can be classified into four cases <cit.>: * v_s ≠ 0 and a_1 ≠ 0,  b_1≠0. The U(1) symmetry is both spontaneously and explicitly broken. We may take Im(S) to be the pseudo-Goldstone boson that is no longer massless, with its mass depending on the extent of explicit breaking via the values of a_1 and b_1. Note that the domain wall problem would appear if a_1 vanishes since a discrect Z_2 symmetry breaks spontaneously in this case. * v_s=0 and a_1=b_1=0. U(1) symmetry is kept. A and s are identical and massive particles, such that the model is degenerate to xSM. Since the U(1) symmetry is preserved, the singlet does not mix with SM Higgs and become two stable particles. In this case, we have two DM candidates. Comparing with the xSM, the DM relic desity is equal to twice of the xSM case. * v_s=0 with b_1≠0. The U(1) symmetry is explicitly broken. The scalar S has no mixing with SM Higgs, such that s and A are both stable massive particles. Note that the a_1 term is mainly to avoid a potential domain wall problem for the case when v_s≠0 as the first case. Here we can set it to be zero since we do not have SSB and, thus, no domain wall problem in this case. * v_s ≠ 0 and a_1 = b_1= 0. The U(1) symmetry is spontaneously broken, yielding a massless Nambu-Goldstone boson, which we may take to be Im(S) and which becomes a possible warm DM candidate. However, such possible candidate has been ruled out suppose the warm DM candidate mass of the range 𝒪(1) GeV <cit.>. In the following studies, we will focus on the most general scenario where v_s ≠ 0 and a_1 ≠ 0,  b_1≠0. By using the minimization condition of the potential in SSB, we get μ^2 = 1/2(-v_s^2 δ_2 - v_0^2 λ) Σ_12 = -4√(2)a_1-d_2 v_s^3 -v_0^2 v_s δ_2/2 v_s, where Σ_12 is defined as Σ_12=b_1 + b_2. Hence we can write down the scalar masses m_A^2=-√(2)a_1/v_s-b_1, and ℳ_h^2 ≡( M^2_h M_hs M_sh M^2_s) =( 1/2λ v^2_0 δ_2/2 v_0 v_s δ_2/2 v_0 v_s 1/2d_2 v^2_s-√(2)a_1/v_s), which can be diagonalized by an orthogonal matrix O(θ): O(θ)^T ℳ_h^2 O(θ) = ( m^2_h_1 0 0 m^2_h_2),      O(θ) = ( cosθ -sinθ sinθ cosθ). Specifically, the fields are expressed in terms of mass eigenstates and the mixing angle as h =cosθ h_1 - sinθ h_2, s =sinθ h_1 + cosθ h_2. The diagonal matrix, O(θ)^T ℳ_h^2 O(θ), gives three equations that express λ, δ_2 and d_2 in terms of m_h_1, m_h_2, a_1, v_0, v_s and θ. δ_2 = sin 2θ (m^2_h1-m^2_h2) /v_0 v_s λ = 2(m^2_h_1cos^2θ + m^2_h_2sin^2θ)/v^2_0 d_2 = 2(√(2)a_1 + m^2_h_2v_s cos^2θ + m^2_h_1v_s sin^2θ)/v^3_s Meanwhile, the parameters b_1 and b_2 are related to the input parameters above and the DM mass, m^2_A. b_1 = -√(2)a_1-m^2_A v_s/v_s,    b_2=Σ_12-b_1 So far, we have two known parameter, v_0 and m_h_1, and five free parameters m^2_A, m^2_h_2, a_1, θ and v_s. Moreover, the coefficients of quartic terms should be bounded from below. We express the scalar fields as h=φsinα and s=φcosα. 
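The relations above can be cross-checked numerically: starting from the inputs (v_0, v_s, θ, m_h_1, m_h_2, a_1, m_A), one reconstructs λ, δ_2, d_2 and b_1, rebuilds the mass matrix, and verifies that O(θ)^T ℳ_h^2 O(θ) is diagonal with the input masses. A minimal Python sketch follows; the numerical values are arbitrary illustrative inputs chosen only to exercise the algebra, not benchmark points of this paper.

import numpy as np

# illustrative inputs (GeV units); not benchmark points of the analysis
v0, mh1 = 246.22, 125.0
vs, theta, mh2, a1, mA = 140.0, -0.2, 400.0, -60.0**3, 200.0

c, s = np.cos(theta), np.sin(theta)

# tree-level relations obtained from diagonalizing the mass matrix
lam  = 2.0 * (mh1**2 * c**2 + mh2**2 * s**2) / v0**2
del2 = 2.0 * s * c * (mh1**2 - mh2**2) / (v0 * vs)        # = sin(2 theta)(m1^2 - m2^2)/(v0 vs)
d2   = 2.0 * (np.sqrt(2) * a1 + mh2**2 * vs * c**2 + mh1**2 * vs * s**2) / vs**3
b1   = (-np.sqrt(2) * a1 - mA**2 * vs) / vs               # fixes the pseudoscalar mass

# rebuild the scalar mass-squared matrix and rotate it back
M2 = np.array([[0.5 * lam * v0**2,           0.5 * del2 * v0 * vs],
               [0.5 * del2 * v0 * vs, 0.5 * d2 * vs**2 - np.sqrt(2) * a1 / vs]])
O  = np.array([[c, -s], [s, c]])
print(np.round(O.T @ M2 @ O, 6))                         # -> diag(mh1^2, mh2^2)
print(np.isclose(-np.sqrt(2) * a1 / vs - b1, mA**2))     # m_A^2 relation

Such a consistency check is also useful when implementing the model in numerical scan tools, since the sign convention for θ is easy to get wrong.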
Turning to the shape of the potential, it is convenient to express the effective potential for general values of α and φ as <cit.> V_eff(φ, α, T)=Aφ^4 + B̅φ^2 + C̅ T^2 φ + Dφ + const., where A, B̅, C̅ and D are dimensionful couplings that depend on v_s, v_0 and T. The bar indicates that the corresponding quantity is obtained in the high-T approximation. Here, the tree-level quartic coupling is A = 1/16(λcos^4α + 2δ_2 cos^2αsin^2α + d_2sin^4α). To guarantee that the potential is bounded from below in every direction of the h-s plane, one must have λ > 0, d_2 > 0 and, for negative δ_2, δ_2 > -√(λ d_2). In addition, the requirement of positive eigenvalues of the mass-squared matrix in Eq. (<ref>) leads to λ(d_2 - 2√(2)a_1/v^3_s) > δ_2^2 for non-zero a_1 <cit.>. It is useful to list the field-dependent scalar masses that will be used in the calculation of the high-temperature Lagrangian. Before electroweak symmetry breaking, the field-dependent masses are m_G^±,0^2= ∂^2 V_0/∂G^±,0^2= 1/4(2μ^2+s^2 δ_2 + λ h^2 ), m_A^2=∂^2 V_0/∂A^2=1/4(-4b_1 + d_2 s^2 + 2Σ_12 + h^2 δ_2 ), ℳ_h^2=( 1/4(2μ^2+s^2δ_2 + 3 h^2 λ)   h sδ_2/2 ;  h sδ_2/2   1/4(3d_2 s^2 + 2Σ_12 + h^2 δ_2) ). Combining all the field-dependent terms and ignoring field-independent terms such as μ^2 T^2 and the b_1,2 pieces, we obtain the high-T approximation of the potential discussed in detail in Sec. <ref>. § CONSTRAINTS ON PARAMETERS AND BENCHMARKS We first discuss the constraints on the mixing angle θ in Eq. (<ref>), since it is an essential parameter of the cxSM for the dark matter candidate, the EWPT and collider phenomenology. The mixing angle θ is constrained by the electroweak precision observables (EWPO) and by global Higgs measurements at the LHC. Note that while this paper was being written, the CDF experiment reported a new W mass measurement m_W = 80.4335 ± 0.0094 GeV <cit.>, which is about 7σ away from the SM prediction. Given that there exists some tension between this result and other experimental results, e.g. the ATLAS measurement <cit.>, we prefer not to include an analysis of its implications in this paper and defer such an analysis to a dedicated future study. §.§ Electroweak Precision Observables (EWPO) The limits on the scalar mixing angle from precision electroweak measurements can be studied by assuming that the extended scalar sector mainly contributes to the gauge boson self-energy functions. Modifications of the oblique parameters S, T and U <cit.> are induced by the difference between the h_1 VV coupling and the SM coupling hVV, and by additional contributions arising from h_2 via mixing. Indeed, since the new BSM particle is a gauge singlet, no further contributions arise from the gauge sector except for those associated with the mixing angle sinθ. Therefore, the deviation of the EWPO quantities can be expressed as Δ𝒪 = 𝒪_cxSM-𝒪_SM = cos^2θ 𝒪(m_h_1) + sin^2θ 𝒪(m_h_2) - 𝒪(m_h_1) = sin^2θ[𝒪(m_h_2)-𝒪(m_h_1)], where m_h_1 and m_h_2 are the masses of the two mass eigenstates in Eq. (<ref>) and h_1 is the observed Higgs boson with m_h_1≈ 125 GeV. Hence the deviation of a given oblique parameter 𝒪 from its SM value, including Δ S, Δ T and Δ(U+S), depends on only two free parameters: θ and m_h_2. For completeness, we provide explicit expressions in terms of Passarino-Veltman functions in Appendix <ref>.
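Since every oblique-parameter shift scales as sin^2θ times a fixed mass-dependent difference, the χ^2 introduced in the next paragraph is straightforward to assemble once the loop functions are available. The Python sketch below is schematic: the callables dS, dT, dU stand for the differences 𝒪(m_h_2)-𝒪(m_h_1) computed from the Passarino-Veltman expressions in the appendix (left as user-supplied placeholders here), while the central values, uncertainties and correlation matrix are those quoted in the following paragraph; the χ^2 form is one natural reading of the construction given there.

import numpy as np

# experimental inputs: central shifts of (S, T, U) relative to the SM, uncertainties,
# and correlation matrix, as quoted in the text below
x_obs = np.array([0.04, 0.09, -0.02])
sigma = np.array([0.11, 0.14, 0.11])
rho   = np.array([[ 1.00,  0.92, -0.68],
                  [ 0.92,  1.00, -0.87],
                  [-0.68, -0.87,  1.00]])
cov_inv = np.linalg.inv(np.outer(sigma, sigma) * rho)   # (sigma_i rho_ij sigma_j)^(-1)

def chi2_oblique(sin2_theta, mh2, dS, dT, dU):
    # cxSM prediction: Delta O = sin^2(theta) * [O(m_h2) - O(m_h1)]
    pred = sin2_theta * np.array([dS(mh2), dT(mh2), dU(mh2)])
    r = pred - x_obs
    return r @ cov_inv @ r

# usage sketch with dummy loop-function differences (placeholders, not the real expressions)
dS = dT = dU = lambda mh2: 0.1
print(chi2_oblique(0.05, 600.0, dS, dT, dU))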
The best-fit values of S, T and U with respect to the SM prediction <cit.> are S-S_SM = 0.04 ± 0.11, T-T_SM = 0.09 ± 0.14, U-U_SM = -0.02 ± 0.11. To confront the parameter scan with these experimental constraints, a χ^2 function is constructed as χ^2 = (X-X̂)_i (σ^2)^-1_ij (X-X̂)_j, where the vector X_i=(S, T, U) and the shifts (X-X̂)_i=(Δ S, Δ T, Δ U) are derived from Eq. (<ref>) and compared with the corresponding central values of the measured shifts from the SM predictions in Eq. (<ref>). The quantity σ^2 is the error matrix, which can be expressed as σ^2_ij=σ_i ρ_ijσ_j. Here σ_i is the uncertainty of (X-X̂)_i in Eq. (<ref>), and ρ_ij is the correlation matrix <cit.>, ρ_ij= ( 1 0.92 -0.68 ; 0.92 1 -0.87 ; -0.68 -0.87 1 ). Fig. <ref> shows the χ^2 distribution of the 2-D parameter scan. The pink solid curve indicates the upper limit on the mixing angle sinθ at 95% C.L. as a function of m_h_2. From the plot, one sees that |sinθ| is excluded above 0.35 for m_h_2 ≤ 400 GeV and above 0.25 for m_h_2 ≥ 600 GeV. In what follows, we focus on mixing angles with |sinθ| below 0.35. §.§ Measurements of the Higgs boson couplings The mixing angle θ between h_1 and h_2 controls the couplings of the SM-like Higgs boson to the other SM particles and is therefore constrained by experimental measurements of the Higgs boson couplings. This section derives the 95% C.L. upper limit on sin^2θ by performing a global fit to the latest ATLAS measurements <cit.>. To characterize the impact of the cxSM on the properties of the 125 GeV Higgs-like boson, it is useful to consider the signal strength, defined as μ_pp→ h_1 → XX = [σ_pp→ h_1 BR(h_1→ XX)]/[σ^SM_pp→ h BR(h → XX)_SM], where σ_pp→ h_1 = cos^2θ×σ^SM_pp→ h at tree level. Using the decay-width relation between the SM-like Higgs and the SM Higgs, Γ_h_1 → XX = cos^2θ Γ_h → XX, the branching ratio of the SM-like Higgs boson decay can be expressed as BR(h_1 → XX) = BR(h → XX)_SM. This relation is valid in the parameter space relevant to the present study, where m_h_2 is greater than m_h_1 and m_A is greater than m_h_1/2. In this case, both Γ_h_1 → AA and Γ_h_1 → h_2 h_2 vanish, and therefore μ_pp→ h_1 → XX=cos^2θ. To quantify cxSM-induced deviations from the SM Higgs boson properties, we construct the χ^2 function for μ_i→ h_1 → f, where the subscript "i" stands for the production mode (e.g., gluon-gluon fusion) and f indicates the decay mode: χ^2 = ∑_i,f(μ^cxSM_i→ h_1 → f-μ^obs_i→ h_1 → f)^2/σ^2_μ_i → h → f, where all channels measured at the LHC so far are included. Requiring Δχ^2 ≤ 3.841, the 95% C.L. threshold for one degree of freedom, translates into the upper bound sin^2θ < 0.125, computed from the current global Higgs fit results summarised in Tab. <ref>. § SFOEWPT AND NUMERICAL RESULTS In this section, we consider the gauge-independent 𝒪(T^2) high temperature (high-T) approximation of the finite temperature effective potential. We start with the expansion V_eff(h,s,T)=V_0(h,s)+V^T=0_CW(h,s)+V_T≠ 0(h,s,T). V^T=0_CW is the zero-temperature Coleman-Weinberg effective potential with the general form V_CW= ∑_k (-1)^{2s_k} g_k/(64π^2) [M^2_k]^2 (log(M_k^2/μ^2)+c_k), where s_k is the spin of the k-th particle, g_k indicates its number of degrees of freedom, and c_k is equal to 3/2 for scalars and fermions and 5/6 for vector gauge bosons.
The quantity V_high-T is the finite-temperature contribution to the effective potential at leading order in the finite-temperature effective theory. It can be obtained from the conventional one-loop thermal potential V_T^1-loop = T^4/(2π^2)∑_k n_k J_B,F(m_k^2/T^2), with J_B(m^2_k/T^2) = -π^4/45 + (π^2/12)(m_k^2/T^2) - (π/6)(m_k^2/T^2)^{3/2} - (m_k^4/(32T^4)) log(m_k^2/(c_B T^2)), J_F(m^2_k/T^2) = -7π^4/360 - (π^2/24)(m_k^2/T^2) - (m_k^4/(32T^4)) log(m_k^2/(c_F T^2)), where log c_B=5.4076 and log c_F=2.6351. Field-dependent logarithms in V_high-T are cancelled by those in V_CW, leaving a factor of the form ln(T^2/μ^2). In principle, one can choose the renormalization scale μ∝ T, so that this logarithm is temperature independent. Moreover, the leading order of V_T≠ 0 in the high-temperature limit is field independent and is thus ignored; we therefore keep the second-order piece of V_T≠ 0, which is proportional to T^2. In this limit the Coleman-Weinberg potential is proportional to M^4_k, which is negligible in the high-T approximation, T ≫ M_k. In this paper, we use the high-T approximated potential without including the subleading Coleman-Weinberg potential, V^High-T(h,s,T) =V_0(h,s) +T^2/48(12m_t^2) + T^2/24(3m_G^2+m_h^2+m_s^2+m_A^2+ 6M_W^2+3M_Z^2) =V_0(h,s) + (1/2)(λ/8 + δ_2/24 + (3g^2_2 + g^2_1)/16 + y^2_t/4) h^2 T^2 + ((δ_2 + d_2)/48) s^2 T^2. Here m_G^2, m_s^2, m_A^2 and m_h^2 are the field-dependent masses of the fields that interact with the scalar fields h and s, as defined in Eq. (<ref>). Note that Eq. (<ref>) is already gauge independent thanks to the gauge-invariant thermal masses <cit.>; thus the critical temperature defined in the high-T approximation is also gauge independent. In the presence of the additional neutral scalar and the portal interaction, spontaneous symmetry breaking (SSB) can take place in several ways <cit.>: (a) a single-step transition to the present pure Higgs vacuum from the symmetric phase at T=T_EW; (b) the universe first lands in a phase with a non-zero v_s at T>T_EW, followed by a transition to the current Higgs vacuum at T_EW; (c) a one-step transition to a phase where both the SM Higgs and the real singlet obtain vevs. A first order EWPT can be induced at tree level in the high-T approximated Lagrangian under certain conditions, and the situation is classified according to the number of transition steps. We discuss these possibilities below. In so doing, we first observe that a first order EWPT for scenario (a) requires that thermal loops containing the singlet scalar sufficiently enhance the term in V_eff proportional to Th^3. We do not consider this possibility here; for a discussion, see, e.g., Ref. <cit.> and references therein. For the two-step phase transition, as shown in Fig. <ref>, the singlet scalar vev first moves from O^' to A, where <s> = v_S^A/√(2) and <h>=0; the SM Higgs then also obtains its vev in the second step from A to B, where <s> = v_S^B/√(2) and <h>=v_C/√(2). For the second step, we denote the critical temperature by T_C, and the criterion for a strong first order electroweak phase transition can be approximated by v_C/T_C ≳ 1, with <cit.> v_C ≃√( (2δ_2/λ) v^A_S(T_C)(v^A_S(T_C)-v^B_S(T_C)) ), T_C ≃√( (1/(2Σ_H))(-μ^2-δ_2 v^A_S(T_C)^2/2) ), where Σ_H=λ/8+δ_2/24+(3g^2_2+g^2_1)/16+y^2_t/4. In addition, δ_2 can be expressed as δ_2 = 2(m^2_h_1-m^2_h_2)sinθcosθ/(v_0 v_s). A positive δ_2 can generate a barrier between the two minima and therefore induce a first order EWPT, and a positive v̅^A_S-v̅^B_S is then required by Eq. (<ref>).
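A minimal numerical sketch of how the high-T quantities above can be evaluated for a given parameter point is shown below: it implements Σ_H and the approximate expressions for T_C and v_C, taking the singlet vevs in the two phases at T_C as inputs. In practice those vevs come from minimizing the thermal potential (e.g. with CosmoTransitions), so the numbers used here are placeholders for illustration only, as are the quoted gauge and Yukawa couplings.

import numpy as np

g1, g2, yt = 0.36, 0.65, 0.99        # illustrative EW-scale gauge and top Yukawa couplings
v0 = 246.22                          # GeV

def Sigma_H(lam, del2):
    # Sigma_H = lambda/8 + delta_2/24 + (3 g2^2 + g1^2)/16 + yt^2/4
    return lam / 8 + del2 / 24 + (3 * g2**2 + g1**2) / 16 + yt**2 / 4

def two_step_strength(lam, del2, vs0, vsA, vsB):
    """Approximate v_C and T_C for the second step of the two-step transition,
    following the high-T expressions above. vs0 is the T=0 singlet vev (fixing mu^2);
    vsA, vsB are the singlet vevs in the two phases at T_C (placeholders here)."""
    mu2 = -0.5 * (vs0**2 * del2 + v0**2 * lam)          # minimization condition at T = 0
    TC = np.sqrt((-mu2 - 0.5 * del2 * vsA**2) / (2 * Sigma_H(lam, del2)))
    vC = np.sqrt((2 * del2 / lam) * vsA * (vsA - vsB))
    return vC, TC

vC, TC = two_step_strength(lam=0.52, del2=1.1, vs0=60.0, vsA=120.0, vsB=60.0)  # placeholder inputs
print(f"v_C = {vC:.0f} GeV, T_C = {TC:.0f} GeV, v_C/T_C = {vC/TC:.2f}")        # > 1: SFOEWPT criterion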
For the purpose of collider phenomenology, we will focus on the heavy Higgs search at the HL-LHC in the following sections, so that a heavy scalar resonance with m^2_h_2 > m^2_h_1 is considered. Thus, as implied by Eq. (<ref>), the heavy-scalar requirement demands a negative mixing angle θ in order for δ_2>0. Moreover, as shown in Eq. (<ref>), a positive δ_2 is also bounded from above. For a one-step phase transition, wherein v_0 and v_S vary from zero to nonzero values at the same time, the situation is more involved. If we consider the high-T effective theory without the thermal loop-induced cubic term, as was done above, such a one-step transition cannot be first order, since v_C is always zero; this can be seen from Eq. (<ref>) with v^A_S replaced by zero. In principle, introducing the thermal cubic term can generate a first order phase transition <cit.>. With the foregoing considerations in mind, we focus in this paper on the two-step phase transition. The CosmoTransitions <cit.> package is used to numerically evaluate the EWPT quantities, e.g. T_c and the corresponding vevs, and then to locate the parameter space feasible for a strong first-order EWPT. § CONSTRAINTS ON DARK MATTER CANDIDATE Since the pseudoscalar A does not mix with the other scalars due to its CP-odd nature, it is stable and can be regarded as a dark matter candidate. However, the δ_2 term in the Lagrangian generates an interaction g_h_1AA· h_1 A A, which can contribute to the Higgs invisible decay if m_A is less than half of the Higgs mass. Given that no significant Higgs invisible decay is observed, either the coupling strength, which can be expressed as g_h_1 AA = (√(2) a_1 + m^2_h_1 v_s)/(2 v^2_s) sinθ, is highly suppressed, or m_A is close to or even heavier than m_h_1/2. To be specific, introducing γ and β via a_1=γ^3 m_h_1^3 and v_s=β m_h_1, the invisible decay width can be expressed as Γ_h_1 → AA = g^2_h_1 AA/(8π m_h_1) √(1-4m^2_A/m^2_h_1) = (m_h_1/8π)((√(2)γ^3+β)/(2β^2))^2 √(1-4m_A^2/m_h_1^2) sin^2θ ∼ ((√(2)γ^3+β)/(2β^2))^2 √(1-4m_A^2/m_h_1^2) (sinθ/0.1)^2 × 50 [MeV], where the numerical estimate in the last line corresponds to |sinθ| = 0.1. The current observed upper bound on the branching ratio of Higgs invisible decays at the LHC is about 13% for ATLAS <cit.> and 16% for CMS <cit.>, while the total decay width of the SM Higgs is only about 4.1 MeV <cit.>. Since the factor ((√(2)γ^3+β)/(2β^2))^2 in Eq. (<ref>) is of 𝒪(1) for β∼𝒪(1) and |γ| ∼𝒪(1), this indicates either a narrow window for the DM mass around m_h_1/2 or a delicate fine-tuned cancellation between a_1 and v_s. Taking the above considerations into account, and without loss of generality, we consider the dark matter particle A in the mass range 60 GeV≤ m_A ≤ 1 TeV. The dark matter relic density and the rescaled spin-independent cross section are also taken into account. To obtain the dark matter relic density, we implement the cxSM model interactions in <cit.> to produce the <cit.> model file, which is then fed to <cit.> for the calculation. In this paper, a general scan over the free parameters is performed with 0≤ v_s/GeV≤ 150.0, |sinθ| ≤ 0.35, -1000.0^3≤ a_1/GeV^3 ≤ 1000.0^3, 60.0≤ m_A/GeV≤ 1000.0, 300.0≤ m_h_2/GeV≤ 1000.0, and the resulting distribution of the DM relic density is shown in Fig. <ref>. The type of EWPT and its strength, determined as in Sec. <ref>, are used to classify the points: the blue points represent parameter points that yield a first order phase transition with v_C/T_C>1, while the orange points comprise all other cases.
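As a quick numerical cross-check of the invisible-width estimate above (the ~50 MeV normalization at |sinθ| = 0.1), one can evaluate Γ_h_1→AA and the resulting invisible branching ratio directly. The sketch below uses illustrative parameter values; the SM total width of 4.1 MeV is the figure quoted in the text.

import numpy as np

mh1, Gamma_SM = 125.0, 4.1e-3      # GeV; SM-like Higgs mass and quoted SM total width

def gamma_inv(gamma, beta, mA, sin_t):
    """Gamma(h1 -> AA) with a1 = gamma^3 mh1^3 and vs = beta mh1, as in the text."""
    if 2 * mA >= mh1:
        return 0.0
    pref = (np.sqrt(2) * gamma**3 + beta) / (2 * beta**2)
    return (mh1 / (8 * np.pi)) * pref**2 * np.sqrt(1 - 4 * mA**2 / mh1**2) * sin_t**2

G = gamma_inv(gamma=1.0, beta=1.0, mA=30.0, sin_t=0.1)     # illustrative point with O(1) factors
BR_inv = G / (G + (1 - 0.1**2) * Gamma_SM)                 # total width ~ cos^2(theta) Gamma_SM + Gamma_inv
print(f"Gamma_inv ~ {1e3*G:.0f} MeV, BR_inv ~ {BR_inv:.2f}")  # far above the ~13-16% experimental bounds

The output confirms the conclusion drawn above: for O(1) values of γ and β and an open decay channel, the invisible branching ratio is grossly excluded, leaving only the narrow window near m_h_1/2 or a fine-tuned cancellation.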
Current measurement of cold DM relic density given as Ω_DMh^2 = 0.1186 ± 0.0020 <cit.> is shown as black line. Most of the spots in our general scan are below this line, thus satisfy the DM relic density constraint. There is a minimum at m_A≃ 62.5 GeV as expected where the DM annihilation process mediated by h_1 is highly enhanced, and valleys between m_A≃ 150 GeV and m_A≃ 500 GeV for the increase of the annihilation process mediated by h_2 since the scanning region of m_h_2 is chosen to be from 300 GeV to 1 TeV. Fig. <ref> shows the Feynman diagram of the interaction between the dark matter particle and the proton by exchanging the SM Higgs. Since the SM Higgs is composed by (<ref>), the spin-independent cross section of DM-proton process can be written as σ_SI^[p] =m_p^4/2π v^2(m_p+M_A)^2( g_h_1 AAcosθ/m_h_1^2 - g_h_2 AAsinθ/m_h_2^2)^2 ×( f_u^[p] + f_d^[p] + f_s^[p] + 2/9 f_G^[p])^2, where the f_u^[p] ,  f_d^[p],  f_s^[p] and f_G^[p] are proton form factors <cit.> and the minus sign in the first braket is derived from the minus sign in Eq. (<ref>) with the couplings being g_h_2 AA =√(2)a_1+m_h_2^2 v_s/2 v_s^2cosθ, g_h_1 AA =√(2)a_1+m_h_1^2 v_s/2 v_s^2sinθ. In this work,  <cit.> is also used to calculate the spin-independent cross section. If the DM abundance is less than the observed DM abundance, the rescaled spin-independent cross section σ_SI(rescaled) could be obtained according to σ_SI(rescaled) = σ_SIΩ_cxSMh^2/Ω_DMh^2. The general scan with Eq. (<ref>) is also performed to σ_SI(rescaled) as shown in Fig. <ref>. The definition of color remains the same as that in the figure of dark matter relic density. Experimental constraints from the direct dark matter search experiment XENON1T <cit.> is shown as line, and the expected efficiencies of future experiment XENONnT <cit.> and PandaX-4T <cit.> are shown by the dashed line. Currently we can exclude the dark matter mass between 65 GeV and 120 GeV under the premise of SFOEWPT. Most of the SFOEWPT spots in our scanning space can be covered by XENONnT. Fig. <ref> shows the scaled cross section v.s. singlet-like Higgs mass with the m_A fixed to 62.5 GeV and m_h_2 Varying from 70 GeV to 1 TeV. A minimum is generated when m_h_2=m_h_1 as indicated in Eq. (<ref>). From Fig. <ref>, we can see that very few parameter points for SFOEWPT can survive from the direct dark matter search. On the contary, in the Fig. <ref> most parameter regions with m_A>62.5  GeV that realise SFOEWPT survive the current direct DM search and are able to be tested by XENONnT. Therefore, it is more valuable for DM direct detection to investigate the m_A region beyond 62.5 GeV. A similar study on the DM relic density is presented in the Ref. <cit.>, which, same as this paper, suggests that most of the parameter region satisfying DM relic density, and SFOEWPT conditions survives the Xenon-1T search and can be probed by Xenon-nT and PandaX-4T. Compared with Ref. <cit.>, this paper finds some parameter space that survives Xenon-nT search. We further studies the cases of m_A≃62.5 GeV and m_A>62.5 GeV, and reaches the SFOEWPT parameter region beyond the detection capability of XENONnT. § HEAVY SCALAR RESONANCE SEARCHES BOUNDS AT THE LHC The cxSM predicts that the singlet-like scalar boson h_2 can be produced at the LHC and decay to various standard model particles. Thus, h_2 can behave as a heavy spin-0 resonance in collider when m_h_2>m_h_1. In this section, we investigate the constraints on the cxSM parameter space from the direct heavy resonance search at the LHC. 
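For completeness, the DM-sector couplings and the relic-density rescaling introduced above amount to the following few lines of Python. The full σ_SI additionally requires the nucleon form factors and is computed with micrOMEGAs in our scan, so this sketch only captures the coupling and rescaling steps; the default m_h_1 = 125 GeV is illustrative.

import numpy as np

OMEGA_DM_H2 = 0.1186  # observed cold DM relic density quoted above

def g_h1_AA(a1, v_s, sin_theta, m_h1=125.0):
    return (np.sqrt(2.0) * a1 + m_h1**2 * v_s) / (2.0 * v_s**2) * sin_theta

def g_h2_AA(a1, v_s, cos_theta, m_h2):
    return (np.sqrt(2.0) * a1 + m_h2**2 * v_s) / (2.0 * v_s**2) * cos_theta

def rescaled_sigma_si(sigma_si, omega_cxsm_h2):
    # sigma_SI(rescaled) = sigma_SI * Omega_cxSM h^2 / Omega_DM h^2
    return sigma_si * omega_cxsm_h2 / OMEGA_DM_H2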
The production cross section times branching fraction of h_2 → WW <cit.>, ZZ <cit.>, hh <cit.>, ττ <cit.> and bb <cit.> are scanned in the parameter space Eq. (<ref>). These calculations rely on the mixing angle θ and the widths of additional decay. Given by Eq. (<ref>), the production cross section of h_2 can be expressed as σ_p p → h_2 = sin^2θ σ_p p → h for each production mode. The decay widths of the existing channels are also obtained by multiplying a factor sin^2θ on the standard model widths as Γ_h_2→ XY=sin^2θΓ_h→ XY^SM. For the additional decay channels, the h_2→ AA decay is considered because of the δ_2 term in Lagrangian, similar to the discussion of h_1 → AA in Sec. <ref>. The h_2 h_1 h_1 vertex also exists with the coupling being g_h_2 h_1 h_1 = sinθcosθ× [3a_1/√(2)v_ssinθ/v_s+(m_h_1^2+m_h_2^2/2)(sinθ/v_s-cosθ/v_0)], due to their mixing. Thus, we must also include the h_2→ h_1 h_1 channel. In addition, the three-body decay channel, h_2 → h_1 A A, is also taken into consideration because of the non-zero coupling of g_h_2 h_1 AA with g_h_2 h_1 AA =1/2v_0v_s^3(√(2)a_1 v_0 sinθcosθ+ m^2_h_2 v_s^2 cos^2θsin^2θ - m_h_1^2 v_s^2 cos^2θsin^2θ + m_h_1^2 v_s v_0 cosθsin^3θ + m_h_2^2 v_0 v_s cos^3θsinθ). Apart from the direct h_2→ h_1 A A decay, an interesting process where one or both of the Higgs boson from di-Higgs decay channel is off shell, leading to one or more pairs of heavy particles (WW, tt̅ etc or a pair of heavy dark matter particles) in the final state, e.g. h_2→ h_1 h_i^∗→ h_1 A A. One nominally expects these contributions to be suppressed due to the off-shell h_1 propagator and additional-particle phase space suppression. We find, however, that the contribution from the h_2→ h_1 h_i^∗→ h_1 AA channel can provide significant discovery potential. The differential cross section of the mediate three-body decay process is calculated according to the Appendix. <ref>, by integrating which we can obtain the width. With these additional decays, the branching ratio for a decay from h_2 to standard model particles can be written as BR(h_2 → XX)=sin^2θ Γ_h_2 → XX/sin^2θ Γ^SM_h + Γ^BSM_h_2, where Γ^BSM_h_2=Γ_h_2 → h_1 h_1+Γ_h_2 → AA + Γ_h_2 → h_1 AA+Γ_h_2 → h_1 tt̅. Finally, the overall cross section in cxSM for heavy resonance search can be simply written as σ_pp→ h_2× BR(h_2 → XX). Fig. <ref> and <ref> shows the experimental constraints from h_2 → WW and ZZ channels for those parameter points satisfying SFOEWPT, where both vector-boson fusion (VBF) and gluon-gluon fusion (ggF) production modes are considered. Fig. <ref> shows the constraint for the same points from ggF+VBF di-Higgs searches combining the results of bb̅bb̅, bb̅γγ and bb̅ττ̅ final states. The black curves in the figures are the experimental upper limit on the overall cross section, above which the parameter points are excluded. The other channels, including h_2 →ττ and h_2 → bb, are found to hardly have exclusion power in the scanned parameter space and thus not shown in the figures. Those spots with heavy h_2 that survive the diboson searches, h_2→ VV, are likely to have lower A mass. This is because BSM branching ratio h_2→ AA (h_2→ h_1 AA) becomes nonzero for m_h_2≥ 2 m_A (m_h_2≥ 2 m_A+m_h_1) and thus reduce the braching ratio of h_2 → VV, making it difficult for experiment to exclude this space via di-boson resonance searches. § PROSPECT OF HEAVY SCALAR SEARCH IN B-JETS+MET CHANNELS When considering the cxSM bb̅+MET signal, we consider a comprehensive set of processes (CSPs) that contribute to this channel. 
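The branching-ratio bookkeeping above can be summarized compactly as follows; here gamma_sm_xx and gamma_sm_total denote the partial and total widths of a would-be SM Higgs boson evaluated at m_h_2, all widths are assumed to be supplied in the same units, and the snippet is a sketch rather than the code used for the scan.

def br_h2_to_xx(sin_theta, gamma_sm_xx, gamma_sm_total,
                gamma_h1h1=0.0, gamma_AA=0.0, gamma_h1AA=0.0, gamma_h1tt=0.0):
    # BR(h2 -> XX) = sin^2(theta) Gamma_SM_XX / (sin^2(theta) Gamma_SM_total + Gamma_BSM)
    s2 = sin_theta**2
    gamma_bsm = gamma_h1h1 + gamma_AA + gamma_h1AA + gamma_h1tt
    return s2 * gamma_sm_xx / (s2 * gamma_sm_total + gamma_bsm)

def signal_cross_section(sin_theta, sigma_sm_h, br_xx):
    # sigma(pp -> h2) x BR(h2 -> XX), with sigma(pp -> h2) = sin^2(theta) * sigma_SM
    return sin_theta**2 * sigma_sm_h * br_xx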
Our search strategy is inspired by strategies used for mono-Higgs plus MET, then optimized to account for other important sub-processes, such as those in which an off-shell h_1 mediates bb̅ pair production. To carry out detailed simulations for the HL-LHC, we select a set of benchmark parameter points after applying all the constraints and requirements discussed in the previous sections. In subsection <ref>, we explore aspects of the underlying sub-processes and allowed parameter space, as it bears on the LHC signal. The selection criteria and the signal signature are shown in subsection <ref>. Finally, we find that the discovery potential with a significance of ≥ 1.96σ reach for the bb̅+MET channel is significant at the HL-LHC, and most parameter points will be covered in that case. §.§ The complete set of cxSM processes for b-jets plus MET In the cxSM, multiple processes contribute to the bb̅+MET final state, including the di-Higgs channels, heavy Higgs boson direct decay channels and mono-Higgs plus b-jets. The DM candidate can be produced from direct four-particle vertex from heavy Higgs boson h_2 or from the subsequent decay of an on-shell or off-shell h_1,2 boson. We consider all the processes with the coupling order satisfying QCD≤ 2 and QED≤ 4 in MadGraph <cit.>. The CSPs have more than one hundred diagrams. A brief overview of the main types is illustrated in Fig. <ref>, among which the cross section is dominated by the diagram-(a) and diagram-(b), in particular, the diagram-(b) with mediator substituted by an off-shell h_1 are found to be significant. Previous studies on the collider searches of the cxSM include: * The h_1→ A A case with m_A=62.5 GeV <cit.>, which satisfies the Higgs invisible decay constraint and obtains a relatively large parameter space. * The degenerate-scalar scenario with |m_h_2-m_h_1|≲𝒪(1) GeV <cit.>. Collider signatures in this scenario is SM-like, and therefore current experimental data cannot distinguish them from the SM predictions. However, the on-shell h_1→ A A decay with m_A= 62.5 GeV is not expected to significantly enhance the sensitivity of bb̅+MET search because the branching ratio is already highly bounded by the Higgs invisible decay constraint. Moreover, with m_A= 62.5 GeV, we find that the parameter space is tightly constrained by the current experimental requirements. Therefore, in this study, we investigate the most general case where m_A ≥ 62.5 GeV. However, due to the exclusion of all points with m_A in the range of [62.5 GeV, 120 GeV] by XENON1T, as mentioned in Sec. <ref>, we further restrict our analysis to m_A ≥ 120 GeV. To choose benchmark mass points for analysis, we impose a requirement that m_h_2 > m_h_1 + 2 × m_A. This condition ensures that the h_2 mediator in diagram-(a) and diagram-(b) can be on-shell, and thus enhances the cross section of CSPs signal. Therefore, the analysis will be conducted on the following ten mass points: Taking into account all the current constraints and requirements discussed in the previous sections, it is not possible to find a shared benchmark point for the remaining parameters (a_1, v_s, sinθ) that satisfies all the mass points. For instance, The SFOEWPT tends to choose a larger -a_1 when m_h_2 becomes heavier. The relationship between m_h_2 and a_1 is depicted in Fig. <ref>, from which it is evident that there is no single choice for a_1 that can be used for the mass range between m_h_2=400 GeV and m_h_2=1000 GeV. 
This a_1-m_h_2 correlation leads to an increase in the cross section of certain processes in CSPs. Specifically, the process pp→ h_1^*→ h_1AA with diagram-7(b) is found to be reinforced and even becomes the dominant process for heavy h_2 masses. Its cross section is proportional to g_h_1h_1AA, which can be expressed as g_h_1 h_1 AA =1/2v_0v_s^2(m_h_1^2 v_s^2 cos^5θ + m_h_1^2 v_s^2 cos^3θsin^2θ + √(2)a_1 v_0 sin^3θ + m_h_1^2 v_0 v_s cos^2θsin^2θ + m^2_h_1 v_0 v_s sin^5θ). From the formula, it can be observed that this coupling becomes larger with increasing values of -a_1 since the sinθ is negative due to the heavy scalar requirement as discussed in Sec. <ref>. The resulting correlation between m_h_2 and g_h_1h_1AA is shown in Fig. <ref>. To justify the other parameter ranges, specifically v_s, we must consider the collider constraints on the lower m_h_2 mass region. For a larger v_s ≫ 100 GeV, the p p → h_2 → h_1 h_1 becomes dominant in the low m_h_2 region. However, the major process obtains a large cross section ∼𝒪(10^-1)-𝒪(1) pb around 400 GeV depending on the mixing angle and thus are excluded by the correct LHC bound, see in Fig. <ref>. The range of the mixing angle, sinθ, is generally constrained by the EWPO given in Fig. <ref>. The correlations between m_h_2 and the other parameters, namely v_s, sinθ, and m_A, are also depicted in Fig. <ref>. Among these correlations, the decrease in v_s can be attributed to the enhancement of process h_2 → h_1 A A. Moreover, as the mass of h_2 increases, it is expected that the coupling angle sinθ approaches 0, but those points too close to 0 are rejected based on the SFOEWPT requirement. The detailed discussion regarding the correlation with m_A can be found in Sec.<ref>. §.§ Analysis and results In this subsection, we will describe the simulation procedures used to select the signals of b-jets plus MET at the HL-LHC. Monte Carlo samples are generated for both the CSPs signal and background events at a pp collider with a center-of-mass energy of 14 TeV. These samples are then normalized to the integrated luminosity of the HL-LHC, which is set to 3000 fb^-1. We performed a detailed simulation for the mass points listed in Table <ref>. The remaining three parameters corresponding to each mass point were chosen randomly within the allowed parameter space. However, these parameters are believed to affect the relative contributions of different diagrams in CSPs, thereby impacting the selection efficiency. It is important to note that the exclusion reach in the m_h_2-m_A plane obtained from our search is intended to be general. Hence, variables that could potentially provide discrimination power between the most dominant diagrams, such as the angular separation of the bb̅ system and missing transverse momentum, were not considered in this analysis. The signal Monte Carlo (MC) samples are generated at the leading order using MadGraph5_aMC@NLO <cit.> with UFO and parameter relationships implemented by the FeynRules <cit.>. The events are then processed through Pythia8 <cit.> for parton showering and hadronization. Finally, the simulated events are passed through Delphes3 <cit.> to account for the detector response. Associated background processes from top quark pair production (ttbar), single top quark production (single-top), Vh production, diboson production, and processes involving a vector boson in association with jets (V+jets) are generated using Pythia8 <cit.>. 
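For reference, the coupling g_h_1 h_1 AA given above can be evaluated directly. The sketch below assumes the convention v_0 ≈ 246 GeV and m_h_1 = 125 GeV and takes cosθ = √(1−sin^2θ) > 0, consistent with the negative sinθ discussed above.

import numpy as np

def g_h1h1AA(a1, v_s, sin_theta, m_h1=125.0, v0=246.0):
    # Coupling of Eq. above; cos(theta) is taken positive
    c, s = np.sqrt(1.0 - sin_theta**2), sin_theta
    num = (m_h1**2 * v_s**2 * c**5
           + m_h1**2 * v_s**2 * c**3 * s**2
           + np.sqrt(2.0) * a1 * v0 * s**3
           + m_h1**2 * v0 * v_s * c**2 * s**2
           + m_h1**2 * v0 * v_s * s**5)
    return num / (2.0 * v0 * v_s**2)

With sinθ < 0, the √2 a_1 v_0 sin^3θ term grows with -a_1, which is the behaviour underlying the correlation discussed above.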
The aim is to simulate backgrounds that have similar visible final states as our target signal and can contaminate into the signal region. Therefore, all background events are required to have at most one lepton and at least one bottom quark. Additionally, they must have at least one neutrino to satisfy the requirement of high missing transverse energy. Table <ref> provides a summary of the background generation process. The showering and simulation approach for background events follows the same procedure as for the signal. The generated Monte Carlo samples are analyzed using MadAnalysis5 <cit.>. During the object reconstruction stage, some basic requirements on transverse momentum and pseudorapidity are applied. Specifically, jets are required to have p_t > 25 GeV and |η| < 2.5, while electrons and muons are required to have p_t > 10 GeV and |η| < 2.4. These requirements help ensure the quality and reliability of the reconstructed objects in the analysis. Two general cuts are initially applied to distinguish the signal and background events for all mass points: * Cut-1 n_lepton=0. * Cut-2 n_b-jets=2. After applying these cuts, we present the distribution of the invariant mass of the bb̅ system in Fig. <ref>. In this figure, the signal events have been rescaled to match the remaining background events. To identify the bottom-quark pair from the SM-like Higgs boson decay, we implement a related cut: * Cut-3: 100 GeV < m_bb̅ < 140 GeV. Furthermore, we take into account the missing transverse energy to further distinguish signal events from the background. As depicted in Fig. <ref>, this variable is expected to be significantly large in our signal samples. To ensure that the statistical uncertainty of the generated background does not have a substantial impact, we apply a relatively loose cut: * Cut-4: MET > 350 GeV The purpose of this cut is to ensure that the signal events can be effectively separated from the background. After applying the selection criteria, the number of the signal events that can be detected at a 95% confidence level corresponds to a cross section close to 10^-2 pb. The exact exclusion cross sections are listed in Table <ref>. From the table, we observe that the selection efficiency is primarily dependent on the mass of the dark matter candidate A, as one can expect from the MET cut. To cover the entire 300 GeV < m_h_2 < 1000 GeV range for each m_A point, we employ the linear interpolation and extrapolation based on the limits obtained from our analysis. In particular for the case of m_A=430 GeV, we make the assumption that the limit remains constant throughout the entire range. Notice that the expected discovery ability is enhanced in m_h_2=400 GeV for the case of m_A=130 GeV, therefore, our approach to obtaining the upper limit in the low m_h_2 region can be considered conservative, the actual exclusion limit in that region might be even stronger than what is indicated by our study. Subsequently, we employ a bivariate spline approximation based on this rectangular mesh to obtain a wide range of upper limits on the m_h_2-m_A plane. We then scatter the parameter points from our general scanning space (Eq.<ref>) on the this two-dimensional plane, taking into account all the current experimental constraints. The resulting plot is shown in Fig. <ref>, where the color represents the comparison of the cross section with respect to the upper limits obtained from our study. 
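For reference, the event selection of Cuts 1-4 described above can be written compactly as a chain of dataframe filters. The column names used here (n_lepton, n_bjet, m_bb, met) are illustrative placeholders rather than the MadAnalysis5 observables, and all thresholds are in GeV.

import pandas as pd

def apply_selection(events: pd.DataFrame) -> pd.DataFrame:
    # Cuts 1-4 of the b-jets + MET analysis; thresholds in GeV
    sel = events[events["n_lepton"] == 0]                        # Cut-1
    sel = sel[sel["n_bjet"] == 2]                                # Cut-2
    sel = sel[(sel["m_bb"] > 100.0) & (sel["m_bb"] < 140.0)]     # Cut-3
    sel = sel[sel["met"] > 350.0]                                # Cut-4
    return sel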
In Fig. <ref>, parameter points colored orange indicate a deviation of 1.96σ, while those marked by red spots indicate a deviation of 5σ, allowing for the discovery or exclusion of those specific parameter points. Several observations are in order. Firstly, in Fig. <ref>, there is a distinct line with a positive slope at the upper boundary of the mass-point region, particularly noticeable for heavier h_2 values. This slope corresponds to the relationship m_h_2=2m_A. The region located above this slope is largely excluded by the results of the heavy scalar resonance searches, as discussed in Section <ref>. Secondly, the density of red spots is more pronounced in the region of heavier h_2 masses, suggesting a more promising discovery potential in the higher h_2 mass range. At first glance, this result may seem counter-intuitive. However, it can be attributed to the increasing cross section of the pp→ h_1^*→ h_1AA process, as discussed in Subsection <ref>. Finally, a significant portion of the parameter space with heavier h_2 masses can be effectively probed by the bb̅+MET search at the HL-LHC. This also indicates that there would already be notable discovery potential for some regions of the parameter space if this analysis were migrated to the current LHC. § CONCLUSION Through spontaneous and soft breaking of a global U(1) symmetry, the cxSM introduces two additional degrees of freedom, with one catalyzing a possible SFOEWPT and the other providing a viable DM candidate. Previous studies have demonstrated the viability of the cxSM for both DM and SFOEWPT and have elucidated the correlation between the singlet scalar-SM Higgs coupling and the occurrence of a SFOEWPT in the cxSM parameter space. In addition, there exists a coupling h_1 A A between the SM-like Higgs and a pair of pseudoscalars (the DM candidate). For a sufficiently light A, this coupling induces the invisible decay of the SM-like Higgs. To avoid an experimentally excluded excess of the Higgs invisible decay, one may either restrict m_A to a narrow window around m_h_1/2 or invoke a delicate fine-tuned cancellation between a_1 and v_s. Alternatively, one may take m_A>m_h_1/2 so that the Higgs invisible decay is kinematically forbidden. In both cases, a distinctive signal in pp collisions is a bb̅ pair plus MET, with various contributions mediated by on- and/or off-shell h_1,2 bosons. Searches for such signal processes have never been performed for the cxSM. Therefore, there exists strong motivation to study the HL-LHC reach for the cxSM EWPT-DM viable parameter space. In this work, we have performed a detailed analysis of this reach. Compared with the most relevant heavy resonance searches previously considered at the LHC, which include the di-Higgs channel (see Fig. <ref>) and the WW + ZZ channels (see Fig. <ref> and Fig. <ref>) and which are able to probe heavy resonance production cross sections of about 𝒪(10) pb, the present analysis in the bb̅+MET channel improves the sensitivity significantly, see Tab. <ref>. We find that a significant portion of the viable parameter space can be discovered or excluded by the bb̅+MET search. While we considered a complete set of processes with bb̅+MET final states, we designed the detection method based on the characteristics of the heavy scalar resonance signal events. We find that one of the dominant processes, p p → h^*_1 → h_1 A A, which is induced by the coupling g_h_1 h_1 A A, is reinforced significantly by the increasing -a_1 in the heavy m_h_2 region. 
The selection is more likely to detect heavier h_2 with larger |a_1|. We further find that one can probe the EWPT-DM viable cxSM for a heavy scalar mass up to ∼ 1 TeV. § ACKNOWLEDGEMENT M.J. Ramsey-Musolf and W. Zhang were supported in part by the National Natural Science Foundation of China under grant no. 11975150 and by the Ministry of Science and Technology of China under grant no. WQ20183100522. M. J. Ramsey-Musolf also gratefully acknowledges support under the Double First Class Plan of the Shanghai Jiao Tong University and sponsorship from Shanghai Tang Junyuan Education Foundation. Y. Cai received financial support from the China Scholarships Council program. L. Zhang's work was supported by the National Science Fund of China for Excellent Young Scholars under grant number 12122507. § OBLIQUE PARAMETER Following the notation by Peskin and Takeuchi <cit.>, the contribution to S, T and U from the new scalar can be expressed as <cit.> Δ S = 1/π|sinθ|^2{B_0(0,m_h_2,M_Z)-B_0(M_Z,m_h_2,M_Z)    + 1/M_Z^2[B_22(M_Z,m_h_2,M_Z)-B_22(0,m_h_2,M_Z)] }, Δ T = 1/4π s_w^2|sinθ|^2{ -B_0(0,m_h_2,M_W)+1/c_w^2B_0(0,m_h_2,M_Z)    + 1/M_W^2[B_22(0,m_h_2,M_W)-B_22(0,m_h_2,M_Z)] }, Δ (U+S) = 1/π|sinθ|^2{ B_0(0,m_h_2,M_W)-B_0(M_W,m_h_2,M_W)    + 1/M_W^2[-B_22(0,m_h_2,M_W)+B_22(M_W,m_h_2,M_W)] }, where B_0 and B_22 are Passarino-Veltman funtions <cit.>. § THREE-BODY DECAY PHASE SPACE We use the example of the "three-body" decay process, where the differential cross section for a process of a three-body decay is dΓ=(2π)^4/2m_h_2|ℳ|^2 dΦ_3, where we use the dΦ_n to denote the n-body phase space. Since the standard form of the phase space volume element with n final state particles can be decomposed into a number of multiplication of 2-body phase space, with the dΦ_3 is related with according to dΦ_3 = dΦ_2(m_AA,m_A,m_A) dΦ_2(m_h_2,m_AA,m_h_1)(2π)^2 dm_AA^2 = dΩ^* |p^*|/(2π)^6 4m_AA dΩ_3 |p_3|/(2π)^6 4M (2π)^2 dm_AA^2, where the Ω^* and Ω_3 are the solid angles of the off-shell SM-like Higgs and heavy resanance respectively. The integration parameter, m_AA, is the invariant mass of two-DM system. The integration range is [2m_A, m_h_2-m_h_1]. p^* (p_3) is the momentum of off-shell (on-shell) SM-like Higgs momentum. Thus the differential cross section can be expressed as dΓ = (2π)^5/16M^2|ℳ|^2 dm_AA dΩ^* dΩ_3 = λ^1/2(m_AA, m_A, m_A) λ^1/2(m_h_2,m_AA,m_h_1)/32π^3 m_h_2^2    × |g_211g_1AA/m_AA^2|^2  dm_AA. where we have used |ℳ|^2 = |g_211g_1AA/m_AA^2|^2 and λ^1/2(m_12, m_1, m_2) = √([m_12^2-(m_1^2+m_2^2)]^2-4m_1^2m_2^2)/2m_12. Based on these relations, we calculate both "2-body" and "3-body" branching ratios and scan over the general parameter space via BR(h_2 → h_1 A A)=Γ_h_2 → h_1 A A/sin^2θ Γ^SM_h + Γ_h_2 → A A + Γ_h_2 → h_1 A A for "3-body" case, and the "2-body" case has a similar form. § ADDITIONAL CONTENT FOR HL-LHC SEARCH The cross sections of parameter points surviving all of the current experimental constraints are shown in Fig. <ref> for m_A around 130GeV, 230GeV, 330GeV and 430GeV respectively. Notice that the cross section increases as m_h_2 increases. The reason is that the coupling g_11AA grows with m_h_2. Hence the cross section of the dominant process pp→ h_1^* → h_1 A A increases. The dashed line and the solid line in each sub-figure represent the 5 σ and 1.96 σ discovery significance in the HL-LHC via our analysis for the corresponding m_A values of 130 GeV, 230 GeV, 330 GeV, and 430 GeV. tocsectionReferences utphys
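As a practical note, the differential width above can be integrated numerically over m_AA ∈ [2m_A, m_h_2 − m_h_1] with a standard quadrature routine. The sketch below follows the notation of this appendix, with g_211 the h_2 h_1 h_1 coupling and g_1AA the h_1 A A coupling, and is intended only as an illustration of the integration, not as the analysis code.

import numpy as np
from scipy.integrate import quad

def lam_half(m12, m1, m2):
    # lambda^{1/2}(m12, m1, m2) as defined above
    return np.sqrt((m12**2 - (m1**2 + m2**2))**2 - 4.0 * m1**2 * m2**2) / (2.0 * m12)

def dGamma_dmAA(m_AA, m_h2, m_h1, m_A, g_211, g_1AA):
    # Differential width with the off-shell h1 propagator factor |g_211 g_1AA / m_AA^2|^2
    amp2 = (g_211 * g_1AA / m_AA**2)**2
    return lam_half(m_AA, m_A, m_A) * lam_half(m_h2, m_AA, m_h1) / (32.0 * np.pi**3 * m_h2**2) * amp2

def gamma_h2_to_h1AA(m_h2, m_h1, m_A, g_211, g_1AA):
    # Integrate over the invariant mass of the two-DM system
    width, _ = quad(dGamma_dmAA, 2.0 * m_A, m_h2 - m_h1,
                    args=(m_h2, m_h1, m_A, g_211, g_1AA))
    return width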
http://arxiv.org/abs/2307.01469v1
20230704041331
X-ray/H$\alpha$ scaling relationships in stellar flares
[ "Hiroki Kawai", "Yohko Tsuboi", "Wataru B. Iwakiri", "Yoshitomo Maeda", "Satoru Katsuda", "Ryo Sasaki", "Junya Kohara", "MAXI TEAM" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
http://arxiv.org/abs/2307.00476v1
20230702051206
Pricing European Options with Google AutoML, TensorFlow, and XGBoost
[ "Juan Esteban Berger" ]
q-fin.PR
[ "q-fin.PR", "cs.LG" ]
Query-Efficient Decision-based Black-Box Patch Attack Zhaoyu Chen, Bo Li, Shuang Wu, Shouhong Ding, Wenqiang Zhang Corresponding authors are Bo Li and Wenqiang Zhang. Zhaoyu Chen and Wenqiang Zhang are with Academy for Engineering and Technology, Fudan University, Shanghai, China, and also with Yiwu Research Institute of Fudan University, Yiwu, China. The emails of these authors are: zhaoyuchen20@fudan.edu.cn, njumagiclibo@gmail.com, wqzhang@fudan.edu.cn. =================================================================================================================================================================================================================================================================================================================================================================================================================================== Researchers have been using Neural Networks and other related machine-learning techniques to price options since the early 1990s. After three decades of improvements in machine learning techniques, computational processing power, cloud computing, and data availability, this paper is able to provide a comparison of using Google Cloud's AutoML Regressor, TensorFlow Neural Networks, and XGBoost Gradient Boosting Decision Trees for pricing European Options. All three types of models were able to outperform the Black Scholes Model in terms of mean absolute error. These results showcase the potential of using historical data from an option's underlying asset for pricing European options, especially when using machine learning algorithms that learn complex patterns that traditional parametric models do not take into account. § INTRODUCTION The XGBoost algorithm has been regarded as the gold standard for machine learning since its inception in 2014. Neural Networks have also showcased incredible abilities in learning extremely complicated patterns in data with large numbers of input variables. Most recently, however, machine learning practitioners have praised Google Cloud's AutoML models for their ease of use and incredible accuracy. This study hopes to discover if TensorFlow deep learning models, XGBoost gradient boosted decision trees, and Google Cloud's AutoML Regressor can outperform the Black Scholes model. § RELATED WORK In 2019, Stanford University students Yang and Ke <cit.> studied long-short term memory neural networks and feed-forward neural networks (referred to as multilayer perceptrons in their paper) for the purpose of pricing options. These students used data sourced from OptionMetrics' IvyDB US dataset <cit.>, the same data source used for training the models in this paper. Yang and Ke decided not to use the implied volatility listed in the IvyDB US dataset. They instead used the past 20 lags in the underlying asset's closing price to train their long short term memory neural networks with the hopes that their models will be able to price options without using implied volatility as an input. However, unlike in Yang and Ke's paper, no recurrent neural networks were used in this study, but instead, the past 20 daily closing prices for the underlying asset were used as individual features (along with an options' strike price, current underlying price, risk-free rate, dividend yield, and whether the option was a call or a put) to train the feed-forward neural networks used in this study. 
Finally, Yang and Ke assessed the performance of their models according to the mean squared error and mean absolute percentage error (among other performance metrics). The results of Yang and Ke's study show how both standard feed-forward neural networks (i.e., multilayer perceptrons) and long short term memory neural networks are able to outperform the Black-Scholes Model in both mean squared error and mean absolute percentage error. Furthermore, Neural Networks have been studied for the purposes of pricing options since the early 1990s. In 1993, Malliaris and Salchenberger <cit.> implemented Neural Networks that used an option's underlying price, strike price, time to expiration, implied volatility, risk-free rate, and past lags of both the option's price and the underlying asset's price to predict an option's current price. Even as early as 1993, the results of that study showed how Neural Networks could outperform the Black-Scholes model for both in-the-money and out-of-the-money options. Another interesting study was performed by le Roux and du Toit <cit.>. In this study, an option's underlying price, strike price, time to maturity, implied volatility, and risk-free rate are used to estimate the price of an option. This study showed that Neural Networks were able to emulate the Black-Scholes model with an accuracy of at least 99.5% with a confidence interval of 96%. These results are remarkable, considering the relatively simple neural network architectures and the lack of computing power in the early 1990s and 2000s. More modern studies have been performed focused on pricing options with Neural Networks like those by Mitra in 2012 <cit.>, and by Can and Fadda in 2014 <cit.>. The study by Mitra used an option's underlying asset's price, an option's strike price, the time to maturity, the historical volatility of the underlying asset's price, and the risk-free rate to predict an option's price. It showed that neural networks could be used to improve theoretical option pricing approaches since they can learn features that are hard to incorporate into the classical approaches. Furthermore, Can and Fadda used a slightly different approach to pricing options by using an option's underlying price divided by its strike price as one of the features along with the option's underlying price, the time to maturity, and the risk-free rate to estimate the value of the option's price divided by the strike price. Once again, Neural Networks are shown to outperform the Black-Scholes model in terms of Mean Absolute Error. Finally, a highly thorough literature review was performed in 2020 by Ruf and Wang <cit.>, where they summarized the methods and findings of using Neural Networks for Options Pricing and Hedging from the 1990s up until modern findings from 2019. § DATASET The data for this study was sourced from the OptionMetrics' IvyDB US dataset. OptionMetrics is a financial research and consulting firm that provides historical options data and analytics on global exchange-traded options. It is a subsidiary of Wharton Research Data Services. The IvyDB US dataset includes many tables with the historical market, options, and securities data ranging from 1990 to 2021 (as of this writing). The final dataset used to train the models consisted of 10,750,886 observations. The target variable was the midpoint price (which was calculated as the average between the bid and ask price for a given function. 
The feature variables were the option's strike price, implied volatility, the zero coupon rate, the index dividend yields, the option type (either call or put), the time to maturity in years, and the underlying assets' current price. The underlying asset's past 20 days' closing prices were also included as 20 additional features. The dataset was filtered so only European Options had indexes as underlying assets. Furthermore, only options with midpoint prices less than 100,000 were selected with the goal of eliminating extreme outliers. Furthermore, big data technologies had to be used for querying and cleaning the data since the original dataset was over 500GB large. To efficiently query the data, BigQuery was utilized, and the data was cleaned using Google SQL. Finally, the data was split into training, validation, and testing datasets with approximate splits of 98% of the data being used for training, 1% of the data being used for validation, and 1% being used for testing. § MODELS Six models were implemented with Python for pricing options. These models were the Black-Scholes Model, a three-Layer Feed-Forward Neural Network, a five-Layer feed-forward Neural Network, a gradient-boosted decision tree with a max depth of five, a Gradient Boosted Decision Tree with a Max Depth of ten, and a Google Cloud AutoML Regressor. The feed-forward neural networks were implemented with the Tensorflow Framework and trained on Google Colab's TPU's (which had eight devices available). The gradient-boosted decision tree models were implemented using the XGBoost Framework and trained on Google Colab's A100 GPU. The Google AutoML Model was trained in Google Cloud's Vertex AI platform. §.§ Black-Scholes Model The options used for this study were all European options and can be priced according to the Black-Scholes Model: C = Se^-qTN(d_1)-Ke^-rTN(d_2) P = Ke^-rTN(-d_2)-Se^-qTN(-d_1) where d_1 = [ln(S/K) + (r - q + 12σ^2)T] / σ√(T), d_2 = d_1 - σ√(T). In the Black-Scholes model, C is the price of a call option, P is the price of a put option, S is the current underlying security price, K is the strike price of the option, T is the time in years remaining to an options' expiration date, r is the continuously-compounded interest rate, q is the continuously-compounded annualized dividend yield, and sigma is the implied volatility. This is the only model in this study that used implied volatility as one of the inputs and the only model in this study that does not use any past lags of the underlying security's closing price as inputs. §.§ Feed-Forward Neural Network Models This study implemented two different Feed-Forward Neural Networks. The first one was a three-layer Feed-Forward Neural Network, and the second was a three-layer Feed-Forward Neural Network. All the neural network models were trained using the Keras framework with the TensorFlow backend. The models were trained on a dataset that has been preprocessed by separating the Bid-Ask Midpoint Price column from the rest of the dataset and splitting the data into training, validation, and test sets. Unlike the Black-Scholes Model, the implied volatility column was dropped, and the past 20 lags were added as individual feature variables with the hope that the Neural Networks would be able to learn the necessary features to predict the option's midpoint price. 
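For reference, the Black-Scholes prices defined above can be computed with the short helper function below. This listing is only a sketch using SciPy's normal CDF and is not the baseline script used in this study. [style=pythoncode, caption=Black-Scholes Reference Implementation (sketch), label=lst:black_scholes] import numpy as np from scipy.stats import norm

def black_scholes(S, K, T, r, q, sigma, option_type="call"):
    # European option price with continuous dividend yield q
    d1 = (np.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if option_type == "call":
        return S * np.exp(-q * T) * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * np.exp(-q * T) * norm.cdf(-d1)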
The other feature variables were the underlying securities' price, the option's strike price, the time to maturity, the risk-free rate, the underlying index's dividend yield, and a binary variable indicating whether an option is a call or a put. The training processes use the Adam optimizer with an adaptive learning rate that starts at 0.01 and decreases by a factor of 0.1 every 10 epochs that it doesn't see an increase in performance until it reaches a learning rate of 1 × 10^-6, and an early stopping callback if there isn't an improvement in performance in the last 150 epochs. The models are trained on Google Cloud's TPUs (Tensor Processing Units) with eight available devices for accelerated training. The three-layer Feed-Forward Neural Network's first hidden layer has 256 neurons with the rectified linear unit (ReLU) activation function. The second hidden layer has 128 neurons with ReLU activation function, and the output layer has a single neuron with the linear activation function. The code for the three-layer feed-forward neural network written is shown below: [style=pythoncode, caption=3 Layer Feed-Forward Neural Network, label=lst:model_2] model = Sequential([ Dense(256, input_dim=X_train.shape[1], activation='relu'), Dense(128, activation='relu'), Dense(1, activation='linear') ]) The five-layer Feed-Forward Neural Network's first hidden layer has 256 neurons with the rectified linear unit (ReLU) activation function. The second hidden layer has 128 neurons with ReLU activation function. The third hidden layer has 64 neurons with ReLU activation function. The fourth layer has 32 neurons with ReLU activation function, and the output layer has a single neuron with the linear activation function. The for five-layer feed-forward neural network written in Python can be seen below: [style=pythoncode, caption=5 Layer Feed-Forward Neural Network, label=lst:model_4] model = Sequential([ Dense(256, input_dim=X_train.shape[1], activation='relu'), Dense(128, activation='relu'), Dense(64, activation='relu'), Dense(32, activation='relu'), Dense(1, activation='linear') ]) §.§ Gradient Boosted Decision Tree Models The next two models were implemented with XGBoost, a gradient boosting algorithm, for predicting the Bid-Ask Midpoint Price of the options in the dataset. The models were trained on a dataset that has been preprocessed by separating the Bid-Ask Midpoint Price column from the rest of the dataset and splitting the data into training, validation, and test sets. Just like the Neural Network models, the implied volatility column was dropped, and the past 20 lags were added as individual feature variables with the hope that the Neural Networks would be able to learn the necessary features to predict the option's midpoint price. The other feature variables were the underlying securities' price, the option's strike price, the time to maturity, the risk-free rate, the underlying index's dividend yield, and a binary variable indicating whether an option is a call or a put. 
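For completeness, the neural-network optimization procedure described earlier (Adam with a learning rate starting at 0.01, reduced by a factor of 0.1 after 10 stagnant epochs down to 1e-6, and early stopping after 150 stagnant epochs) maps directly onto standard Keras callbacks. The listing below is a sketch of one possible configuration that reuses the model object from the earlier listings; the monitored metric, maximum epoch count, and batch size are illustrative assumptions rather than values reported in this paper. [style=pythoncode, caption=Neural Network Training Setup (sketch), label=lst:nn_training] from tensorflow.keras.optimizers import Adam from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

model.compile(optimizer=Adam(learning_rate=0.01), loss='mae')

callbacks = [
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=10, min_lr=1e-6),
    EarlyStopping(monitor='val_loss', patience=150,
                  restore_best_weights=True),
]

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=1000, batch_size=4096,
                    callbacks=callbacks)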
In order to use an adaptive learning rate, the custom learning rate function was implemented so that the learning rate would decrease as the boosting rounds would advance for the optimization steps to become gradually smaller as shown below: [style=pythoncode, caption=Customized Learning Rate Scheduler, label=lst:eta_decay] def eta_decay(iteration): max_iter = 100000 x = iteration + 1 eta_base = 0.5 eta_min = 0.2 eta_decay = eta_min + (eta_base - eta_min) * np.exp(-(x/8)**2 / max_iter) return eta_decay The XGBoost Framework was used to train two models, one with a maximum depth of five and the other with a maximum depth XGBoost Model with a maximum depth of ten. The code used to train the XGBoost Model with a maximum depth of five written in Python can be seen below: [style=pythoncode, caption=XGBoost Model with Max Depth of 5, label=lst:model_5] max_iter = 40000 eta_decay = np.array( [eta_decay(iteration) for iteration in range(max_iter)]) PARAMS = 'booster': 'gbtree', 'eval_metric': 'mae', 'max_depth': 5, 'tree_method': 'gpu_hist' evals_result = 'train': dtrain, 'validation': dval progress1 = dict() model = xgb.train( maximize=True, params=PARAMS, dtrain=dtrain, num_boost_round=max_iter, early_stopping_rounds=max_iter, evals=[ (dtrain, 'train'),(dtest, 'test')], evals_result=progress1, verbose_eval=1, callbacks=[ xgb.callback.LearningRateScheduler( lambda iteration: eta_decay[iteration])]) The code used to train the XGBoost Model with a maximum depth of five written in Python is exactly the same as the code used to train the XGBoost Model with a maximum depth of ten except for the fact that the max depth was set to ten as shown below: [style=pythoncode, caption = XGBoost Parameters for Max Depth of 10, label=lst:model_6] PARAMS = 'booster': 'gbtree', 'eval_metric': 'mae', 'max_depth': 10, 'tree_method': 'gpu_hist' §.§ Google Cloud AutoML Regressor The final model in this study was trained on Google Cloud's Vertex AI Platform. The model chosen was the AutoML Regressor. The model was given a budget of 72 node hours, but early stopping was implemented, and the model was only trained for two days and 27 minutes. The model's objective was tabular regression, and it was optimized for minimizing mean absolute error. The models in this study were trained on the A100 GPUs provided by Google Colab. The use of GPUs significantly accelerated the training process and allowed for experimentation with more complex models. The use of these GPUs allowed for faster model training and tuning, enabling the exploration of a larger range of model architectures and hyperparameters. § RESULTS The testing results for all the models are summarized in the table below. All of the models in this study were able to outperform the Black-Scholes model in terms of mean absolute error, which was the metric on which the trainable models were trained to minimize. The most accurate model was the XGBoost model with a max depth of ten. This model was ten times more accurate than the Black-Scholes model. Surprisingly, only two models were able to surpass the Black-Scholes model in terms of mean absolute percentage error and those were the XGBoost model with a max depth of ten and the Google AutoML Regressor. When comparing the two best-performing models, it is important to note that the AutoML Regressor took over two days to complete its training (with early-stopping enabled) while the XGBoost model with a max depth of ten was trained in a little over 30 minutes. 
Nevertheless, the AutoML regressor takes almost no expertise to train since all of the hyper-parameter tuning is done automatically. In order for the XGBoost model to beat the AutoML Regressor domain expertise is required as showcased by the use of custom learning-rate schedulers. It is important to note that the range of option prices used in this study range from 0.01 to 100,000. Extreme values, especially from out-of-the-money options, may lead to a skewed measure of mean absolute percentage error. Furthermore, all the trainable models were trained to optimize mean absolute error, not mean absolute percentage error. Given that the XGBoost models were able to outperform both TensorFlow and Google's AutoML Regressor, it makes sense why so many machine learning engineers and data scientists use this as their model of choice. All of the machine learning models were able to outperform the Black-Scholes model, but it is clear that the best performance in this study was achieved by the XGBoost models. The XGBoost with a max depth of ten had the lowest mean absolute error, mean absolute percentage error, and it was trained in a fraction of the time than its closest competitor (Google's Auto ML Regressor). It is also important to note that none of the machine learning models were given implied volatility as a feature like the Black-Scholes model. Instead, all the models learned all the necessary features on their own from the other feature variables and the past 20 lags of the underlying securities' closing prices. With high volumes of data and computing resources becoming highly available, using machine learning and deep learning methods for options pricing becomes a viable option for pricing securities. In further studies, it would be interesting to compare machine learning and deep learning methodologies with other option pricing methods such as Monte Carlo Simulations or the Binomial Asset Pricing Model in order to see if the Machine Learning models are able to outperform those pricing methodologies as well. Nevertheless, the results of this study seem to be consistent with the results of other related studies in that machine learning methods are able to outperform the Black-Scholes model, especially in the models created using XGBoost. § CONCLUSION Given that the XGBoost models were able to outperform both TensorFlow and Google's AutoML Regressor, it makes sense why so many machine learning engineers and data scientists use this as their model of choice. All of the machine learning models were able to outperform the Black-Scholes model, but it is clear that the best performance in this study was achieved by the XGBoost models. The XGBoost with a max depth of ten had the lowest mean absolute error, mean absolute percentage error, and it was trained in a fraction of the time than its closest competitor (Google's Auto ML Regressor). It is also important to note that none of the machine learning models were given implied volatility as a feature like the Black-Scholes model. Instead, all the models learned all the necessary features on their own from the other feature variables and the past 20 lags of the underlying securities' closing prices. With high volumes of data and computing resources becoming highly available, using machine learning and deep learning methods for options pricing becomes a viable option for pricing securities. 
In further studies, it would be interesting to compare machine learning and deep learning methodologies with other option pricing methods, such as Monte Carlo simulations or the binomial asset pricing model, in order to see if the machine learning models are able to outperform those pricing methodologies as well. Nevertheless, the results of this study seem to be consistent with the results of other related studies in that machine learning methods are able to outperform the Black-Scholes model, especially the models created using XGBoost. § ACKNOWLEDGMENTS This research project contains components for three graduate-level courses at the University of Notre Dame, which are ACMS 80695 - Master's Research Project taught by Prof. Guosheng Fu, CSE 60868 - Neural Networks taught by Prof. Adam Czajka, and ACMS 60890 - Statistical Foundations of Data Science taught by Prof. Xiufan Yu. Special thanks are given to all these professors for their teaching and support during the Spring Semester of 2023. The complete GitHub repository for the code in this project can be accessed through the following URL: <https://github.com/juan-esteban-berger/Options_Pricing_AutoML_TensorFlow_XGBoost/>. The most accurate model can be accessed through Hugging Face: <https://huggingface.co/juan-esteban-berger/XGBoost_European_Options_Pricing_MD_10>. § REFERENCES
ke2019option: Ke, Alexander and Yang, Andrew. Option Pricing with Deep Learning. CS230: Deep Learning, Fall 2019, Stanford University, CA, 2019.
malliaris1993beating: Malliaris, A.G. and Salchenberger, L.M. Beating the Best: A Neural Network Challenges the Black-Scholes Formula. In Proceedings of the 9th IEEE Conference on Artificial Intelligence for Applications, 1993, pp. 445-449.
leRoux2001: le Roux, L.J. and du Toit, G.S. Emulating the Black & Scholes Model with a Neural Network. Southern African Business Review, vol. 5, no. 1, 2001, pp. 54-57.
mitra2012option: Mitra, S.K. An Option Pricing Model that Combines Neural Network Approach and Black-Scholes Formula. Global Journal of Computer Science and Technology, vol. 12, no. 4, 2012, pp. 1-9.
can2014nonparametric: Can, M. and Fadda, S. A Nonparametric Approach to Pricing Options using Learning Networks. Southeast Europe Journal of Soft Computing, vol. 3, no. 1, 2014.
ruf2020neural: Ruf, Johannes and Wang, Weiguan. Neural Networks for Option Pricing and Hedging: A Literature Review. Journal of Computational Finance, vol. 24, no. 4, 2020, pp. 91-120. arXiv:1911.05620.
optionmetrics: Wharton Research Data Services. IvyDB US by OptionMetrics (optionm_all). <https://wrds.wharton.upenn.edu/>, 2022. Accessed: 2023-04-25.
http://arxiv.org/abs/2307.03216v1
20230706175130
Increased motility impedes clustering
[ "Chandraniva Guha Ray", "Indranil Mukherjee", "P. K. Mohanty" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
same
http://arxiv.org/abs/2307.02028v1
20230705052459
EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models
[ "Michael Wornow", "Rahul Thapa", "Ethan Steinberg", "Jason Fries", "Nigam Shah" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Topological classes of thermodynamics of the four-dimensional static accelerating black holes Di Wu August 1, 2023 ============================================================================================= While the general machine learning (ML) community has benefited from public datasets, tasks, and models, the progress of ML in healthcare has been hampered by a lack of such shared assets. The success of foundation models creates new challenges for healthcare ML by requiring access to shared pretrained models to validate performance benefits. We help address these challenges through three contributions. First, we publish a new dataset, EHRSHOT, containing de-identified structured data from the electronic health records (EHRs) of 6,712 patients from Stanford Medicine. Unlike MIMIC-III/IV and other popular EHR datasets, EHRSHOT is longitudinal and not restricted to ICU/ED patients. Second, we publish the weights of a 141M parameter clinical foundation model pretrained on the structured EHR data of 2.57M patients. We are one of the first to fully release such a model for coded EHR data; in contrast, most prior models released for clinical data (e.g. GatorTron, ClinicalBERT) only work with unstructured text and cannot process the rich, structured data within an EHR. We provide an end-to-end pipeline for the community to validate and build upon its performance. Third, we define 15 few-shot clinical prediction tasks, enabling evaluation of foundation models on benefits such as sample efficiency and task adaption. The code to reproduce our results, as well as the model and dataset (via a research data use agreement), https://github.com/som-shahlab/ehrshot-benchmarkare available at our Github repo. § INTRODUCTION Open datasets, code, and models have been essential in advancing machine learning (ML) over the past decade <cit.>. Though the benefits of open code and data are well known <cit.>, there is currently a dearth of publicly available datasets and pretrained models for electronic health records (EHRs), which makes conducting reproducible research challenging <cit.>. This has proven problematic in the era of foundation models (FMs), which hold tremendous promise for clinical applications <cit.>. The ability of a shared FM to generalize across health systems would be highly valuable, as most hospitals lack the computational resources to train such models <cit.>. Yet many of the purported benefits of clinical FMs, such as sample efficiency and task adaptability, remain difficult to evaluate due to reproducibility and data access issues <cit.>. Unfortunately, most existing EHR datasets (e.g., MIMIC-III/IV <cit.>, eICU <cit.>, AmsterdamUMCdb <cit.>, and HiRID <cit.>) narrowly focus on the intensive care unit (ICU), which provides a limited snapshot of a patient's overall health trajectory and limits what tasks can be evaluated <cit.>. Access to a patient's complete medical timeline, referred to as "longitudinal" data, offers a more realistic representation of the breadth of patient information available to a health system. Longitudinal EHR data, however, remains scarce. The few public datasets that exist, such as the CPRD <cit.> and UK BioBank <cit.>, lack consensus on shared evaluation tasks / data processing pipelines and require navigating a research protocol review process, which creates challenges when curating shared ML workflows <cit.>. 
While the limitations of prior benchmarks were less apparent when developing small-scale, task-specific models, their utility is limited for evaluating FMs on task adaption, few-shot learning, and other properties of large-scale, self-supervised models <cit.>. Clinical FMs open up new questions, and a dataset for evaluating such FMs should contain a diverse range of tasks in low-label settings with longitudinal data <cit.>. Most importantly, such a benchmark should also release the weights of its pretrained models so the community can reproduce and build upon its results. Unfortunately, few FMs trained on EHR data have had their model weights published <cit.>. Our work helps address both shortcomings – a lack of public EHR datasets and pretrained clinical FMs – as one of the first combined releases of a research dataset and FM trained on EHR data. We outline our three primary contributions towards more reproducible ML for healthcare below: * We release EHRSHOT, a longitudinal EHR benchmark for the few-shot evaluation of clinical FMs. EHRSHOT contains the full coded medical timelines of 6,712 patients from Stanford Medicine. Records include demographics, diagnoses, procedures, laboratory results, medications, and other structured data, for a total of 39.2 million clinical events across 893,773 encounters. EHRSHOT contains an average of 2.1x more clinical events and 92.4x more encounters per patient than MIMIC-IV <cit.> and, unlike the majority of existing benchmarks, includes patients not seen in the ICU or emergency department (ED). * We publish the weights of a 141M parameter foundation model (CLMBR) pretrained on the deidentified structured data of 2.57M patients' EHRs. CLMBR was trained in a self-supervised manner to autoregressively predict the next code in a patient's timeline given their previous codes <cit.>. We are among the first to publish the full weights of such a clinical FM <cit.> for the community to evaluate and build upon. * We define a new few-shot benchmark of 15 patient classification tasks. Several tasks have naturally low prevalence, creating a realistic setting for few-shot experimentation. While our pretrained model offers significant AUROC/AUPRC gains in few-shot settings over a traditional supervised baseline, we demonstrate that there remains significant room for improvement on many of our tasks. Our overall workflow is shown in Figure <ref>. We publish the full code to replicate our results here: https://github.com/som-shahlab/ehrshot-benchmarkhttps://github.com/som-shahlab/ehrshot-benchmark. We also publish the full weights of our pretrained FM, as well as the EHRSHOT dataset and task labels, under a non-commercial data usage agreement. Links to can be found at our https://github.com/som-shahlab/ehrshot-benchmarkGithub repo. § RELATED WORK One of the most popular EHR datasets made accessible to researchers is MIMIC-III, which contains roughly 40,000 patients seen in the intensive care unit (ICU) of Beth Israel Deaconess Medical Center in Boston, Massachusetts, between 2001 and 2012 <cit.>. Other public datasets include eICU <cit.>, HiRID <cit.>, AmsterdamUMCdb <cit.>, CPRD <cit.>, MIMIC-IV <cit.>, and the UK BioBank <cit.>. Many of the aforementioned datasets are narrowly scoped to a single department: the ICU <cit.>. This makes it impossible to capture a patient's full health trajectory to the extent that an academic medical center or health system would know of the patients it treats. 
Other datasets such as MIMIC-IV include data from multiple departments, but are still heavily anchored to the ICU (only patients admitted for an ICU/ED visit are included) <cit.>. In contrast, our work releases the full longitudinal EHR of patients across all departments of a major academic medical center, thus providing a more realistic setting for general prediction making. Prior work often relied on the creation of bespoke schemas to store their data. These custom schemas greatly increase the difficulty of transferring models across datasets and sites <cit.>. In contrast, the data preprocessing pipeline that we use is capable of ingesting both EHRSHOT as well as any dataset following the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM), an open community data standard for sharing EHRs used by over 100 health systems <cit.>. More details on our data preprocessing pipeline can be found in the Appendix in Section <ref>. Previously published EHR datasets typically only provide raw data. Thus, significant additional effort was devoted to building standardized preprocessing pipelines, patient splits, and task definitions on top of these datasets <cit.>. These add-on benchmarks, however, are still limited by the narrow scope of their underlying data, and many recycle the same core set of tasks (i.e. in-patient mortality, long length-of-stay, ICU transfer, and ICD code prediction) <cit.>. Additionally, these benchmarks are typically not created with the purpose of measuring a pretrained model's few-shot performance <cit.>. This limits their utility in assessing the key value propositions of foundation models, such as improved sample efficiency and adaptation to diverse tasks. On the modeling side, substantial prior work exists on training FMs for EHR data <cit.>. However, the vast majority of these FMs have never had their weights published <cit.>. This greatly hinders reproducibility and makes cross-model evaluations difficult. Worse, this lack of sharing undermines a primary advantage of FMs: transfer learning, i.e. the ability to use the pretrained weights of an existing FM to shortcut model development for other tasks <cit.>. EHRSHOT aims to fill several of these gaps in the literature by providing a longitudinal EHR benchmark specifically geared towards few-shot evaluation of pretrained FMs. EHRSHOT is built on top of a cross-site interoperable standard (OMOP-CDM), and leverages an open source data preprocessing pipeline to allow other researchers to reproduce our results end-to-end. Additionally, we release the weights of an FM pretrained on structured EHR data, one of the first to do so. We provide additional points of comparison in Table <ref>. P[1]>p#1 § DATASET We are releasing EHRSHOT (pronounced "earshot"), an EHR benchmark for few-shot evaluation of foundation models. EHRSHOT is a collection of 6,712 unique patients with canonical train/validation/test splits and corresponding labels for 15 classification tasks. We also provide canonical k-shot samples for each few-shot evaluation task. Unlike prior EHR benchmarks focused on task-specific supervised models <cit.> for specific episodes of care, e.g. admission to the ICU <cit.>, our benchmark is designed for evaluating pretrained FMs on a broad range of tasks using the depth of information that a health system would typically possess for its patients. EHRSHOT is provided as a set of CSV files. It is essentially a lightweight serialization of the OMOP-CDM format. 
Please see Section <ref> in the Appendix for additional details on the dataset format. The dataset https://github.com/som-shahlab/ehrshot-benchmarkis available at our Github repo. §.§ Data Source We sourced the data for our benchmark from the Stanford Medicine Research Data Repository (STARR) <cit.>, which contains EHR data from both Stanford Health Care (primarily adult care) and Lucile Packard Children's Hospital (primarily pediatric care). The source dataset is structured according to the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) <cit.> and comprises a total of 3.67M patient timelines from 1990 through February 8th, 2023 <cit.>. Of these patients, 2.57M (70%) are used for the training and 0.55M (15%) are used for validation of the foundation model that we release, the details of which we discuss in Section <ref>. All data that we work with is deidentified, and hence, our study did not require Institutional Review Board approval <cit.>. This source database contains demographics (e.g. age, sex, race), diagnoses, procedures, laboratory results, medication prescriptions, and other coded clinical observations, which we preserve. While the source database also contains clinical notes, we remove these in our released benchmark. We describe how we selected our patient cohort from this source dataset in the Appendix in Section <ref>. We apply a few additional transformations on top of those described in <cit.> to prevent data leakage and fix timestamp issues, which are detailed in Section <ref> in the Appendix. All of the code used to generate our dataset can be found here: https://github.com/som-shahlab/ehrshot-benchmarkhttps://github.com/som-shahlab/ehrshot-benchmark. §.§ Tasks We define 15 tasks as part of our benchmark, as listed in Table <ref>. We selected these tasks based on clinician input as well as alignment with prior benchmarks <cit.>. The tasks that we consider can be broadly grouped into the following 4 categories: (1) Operational outcomes, (2) Anticipating lab test values, (3) Assignment of new diagnoses, (4) Anticipating chest X-ray findings. All tasks are set up as classification tasks, with nine binary classification tasks as well as five 5-way multiclass tasks (lab test values) and one 14-way multilabel task (chest X-ray findings). The size of each task's subcohort, as well as the prevalence of positive labels, is detailed in Table <ref>. We define the precise prediction windows for each task in the Appendix Table <ref> and the definition of each task in Appendix Section <ref>. We also provide a high-level visualization of our 4 task categories in Figure <ref>. § BASELINE MODELS We measure the performance of two baseline models on our dataset: (1) a gradient boosting machine (GBM) that uses count-based featurizations of patients to make predictions, (2) an autoregressive language model ("CLMBR") that ingests medical codes as tokens and was pretrained on 2.57M Stanford patients <cit.>. We chose these two models as our baselines for several reasons. First, language modeling has achieved state-of-the-art results on clinical prediction tasks <cit.>, while count-based featurization remains a simple but competitive baseline <cit.>. Second, most prior FMs trained on structured EHR data have not had their model weights published, and were developed and tested exclusively on nonstandard data formats like MIMIC-III <cit.>. 
This makes it nearly impossible to conduct a fair comparison of prior models, which often requires re-implementation or significant modification to work across datasets <cit.>. This is one of the key challenges we are attempting to solve with EHRSHOT. We pre-train our own FM from scratch to have full control over its training, and publish its model weights so the community can reproduce and build upon our results. Count-based GBM. Count-based featurization is a well-established baseline for EHR tasks, valued for its simplicity and effectiveness <cit.>. The fundamental idea involves converting each patient's timeline into a count vector, where each element contains the number of occurrences of a specific medical concept prior to the prediction time of a task. These patient vectors are combined into a count matrix, which is high-dimensional and sparse. We use a technique called ontology expansion to increase the density of representation and improve the accuracy of code coverage by acknowledging the parent/child hierarchical relationships between medical concepts <cit.>. After generating our ontology-expanded count matrix, we train a gradient boosting machine (GBM) model on the EHRSHOT train split, and tune hyperparameters on the validation split. We use the LightGBM implementation <cit.> for our model. Clinical Language-Model-Based Representations (CLMBR). CLMBR is an autoregressive model designed to predict the next medical code in a patient's timeline given previous codes, thereby aiming to learn robust global patterns for clinical prediction tasks <cit.>. CLMBR employs causally masked local attention, ensuring forward-only flow of information which is vital for prediction tasks and is in contrast to BERT-based models which are bidirectional in nature <cit.>. CLMBR does not process clinical text. We utilize a transformer as our base model with 141M trainable parameters and a next code prediction objective, providing minute-level EHR resolution rather than the day-level aggregation of the original model formulation <cit.>. More details about our baseline models can be found in the Appendix in Section <ref>. § RESULTS We evaluate each model in a few-shot setting. We provide k positive and k negative examples from our training split as training data for each task, where k ∈{ 1, 2, 4, 8, 16, 32, 64, 128 }. We then sample k positive and k negative examples from our validation split to select the best hyperparameters per model per task. Finally, we evaluate the best performing model's AUROC and AUPRC on the held-out test split. For tasks where the total number of unique positive examples is t_p and t_p < k, we include all t_p positive examples in our training set and randomly resample t_p - k examples so that the total number of training examples seen by the model is k. For the count-based GBM model, these few-shot examples are the only training examples seen by the model. For the pretrained CLMBR models, we use these few-shot examples to fine-tune a logistic regression head appended to the top of the model for each task, while keeping the weights of the pretrained CLMBR model frozen. We do balanced sampling of k positive and k negative examples for each task during training. Training each CLMBR model took roughly 4 days on a single Nvidia A100. We used an on-premise cluster of A100s to run our training. The AUROC of each model across all 4 task categories is presented in Figure <ref>. 
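For concreteness, the few-shot protocol described above can be sketched in a few lines. The code below is our own illustration, not the benchmark's API: the feature matrices are assumed to be precomputed (e.g. frozen CLMBR representations or ontology-expanded count vectors), validation-based hyperparameter selection is omitted for brevity, and when a task has fewer than k unique positives all of them are kept and then resampled with replacement up to k, mirroring the procedure described above.

```python
# Sketch of the balanced k-shot evaluation loop (names are ours, not the
# benchmark's API); features are assumed to be precomputed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def sample_balanced(indices, k, rng):
    """Keep all unique examples, then resample with replacement up to k total."""
    if len(indices) >= k:
        return rng.choice(indices, size=k, replace=False)
    extra = rng.choice(indices, size=k - len(indices), replace=True)
    return np.concatenate([indices, extra])

def k_shot_auroc(X_train, y_train, X_test, y_test,
                 ks=(1, 2, 4, 8, 16, 32, 64, 128), seed=0):
    rng = np.random.default_rng(seed)
    scores = {}
    for k in ks:
        pos = sample_balanced(np.where(y_train == 1)[0], k, rng)
        neg = sample_balanced(np.where(y_train == 0)[0], k, rng)
        idx = np.concatenate([pos, neg])
        # Task-specific logistic regression head on frozen representations.
        head = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
        scores[k] = roc_auc_score(y_test, head.predict_proba(X_test)[:, 1])
    return scores
```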
In the Appendix, we break down each individual task's AUROC in Figure <ref>, and do the same for AUPRC in Figure <ref> and Figure <ref> in the Appendix. The bolded lines are the Macro-AUC for each model within a task category, averaged across all subtasks at each value of k. The pretrained foundation model CLMBR (orange) outperforms the count-based GBM (blue) on almost all tasks at k < 128. This demonstrates the benefits of pretraining in few-shot settings, as the model can leverage patterns learned across millions of patients to derive more accurate representations out-of-the-box than a model trained from scratch. However, the advantage of the pretrained model shrinks as k increases, a trend noted elsewhere <cit.>. This suggests that the advantage of pretraining comes primarily from improved initialization of patient representations, and that the largest gains are achieved in the most data poor regimes. As shown in Figure <ref>, the count-based GBM exhibits equal or better performance than CLMBR at higher values of k for several of the individual Assignment of New Diagnoses tasks, and a fairly large improvement over CLMBR when trained on the full dataset. There are several possible reasons for this. First, the CLMBR model's training objective is next code prediction, which makes it ill-suited for predictive tasks with long time horizons (1 year in this case). Second, if a simple tree-based model exists for a task (i.e. a few medical concepts tightly correlate with a diagnosis), then it may be more difficult for a pretrained model to coerce patient representations learned over millions of patients to the specific task than training a model from scratch with enough data to learn those distinctive signals. We believe that this reversal in model rankings demonstrates a key strength of EHRSHOT – namely, the diversity of its predictive tasks can help identify opportunities for improving pretraining and few-shot strategies. We release all of our model weights, evaluation tasks, and data processing code to fully reproduce our results. To the best of our knowledge, the release of our pretrained CLMBR model is one of the first examples of such a clinical FM having its pretrained weights made publicly available <cit.>. § DISCUSSION We believe that EHRSHOT represents a useful contribution to the ML community by enabling more reproducible healthcare ML research. The release of our pretrained CLMBR model's weights will allow the community to replicate and build upon our work. Our results identify opportunities for improving pretrained models in few-shot settings. Acquiring labeled EHR data is expensive and time-consuming. Additionally, certain rare conditions may only be present in a small cohort of patients out of millions within a health system <cit.>. Thus, model performance in low-label settings is of paramount importance in healthcare. As our results in Section <ref> demonstrate, pretrained FMs can yield large performance gains in few-shot settings. While we acknowledge that the tasks themselves may not be the most clinically meaningful, we believe that EHRSHOT offers a valuable contribution by providing a reproducible and rigorous point of comparison for different technical approaches to developing clinical FMs. Limitations. There are several limitations to this work. First, we only release structured data – i.e. we do not publish any of the clinical text or images associated with our patients. 
While many datasets for medical images exist <cit.>, publishing clinical text remains a challenge <cit.>. Second, we only consider one type of foundation model (CLMBR) for our experiments <cit.>. We look forward to seeing the additional foundation models that the community applies to our benchmark. Third, we release a very small cohort of patients (<1%) from our source EHR database, and specifically select these patients for the tasks that we define. While necessary in order to publish our EHR dataset, and while still broader than existing ICU-specific datasets, this cohort selection process limits the types of questions we can answer. Fourth, as we only were able to evaluate our pretrained FM on Stanford Medicine data, it is unclear how well our pretrained model will perform at other institutions. We anticipate there will be some drop in performance, but the extent is unclear. Societal Implications. In terms of societal implications, we believe that the release of this dataset can help spur positive innovations for improving clinical care with ML. However, we recognize that there are patient privacy concerns anytime EHR data is released. We believe we sufficiently mitigate this risk through the rigorous deidentification process on which our data is subjected <cit.>. Additionally, we gate access to the dataset through a research data use agreement. Another concern is that models trained on biased data will reflect those biases <cit.>. Thus, the pretrained FM that we release may propagate biases in care delivery or outcomes present in our source EHR database <cit.>. However, we hope that by encouraging the full release of models, we can help the community better identify and mitigate these issues <cit.>. § CONCLUSION We publish EHRSHOT, a benchmark containing the structured data of 6712 patients' full medical timelines specifically geared towards few-shot evaluation of foundation models for clinical data. Unlike most prior work, EHRSHOT contains longitudinal health data rather than a single department (e.g. ICU). We define a set of 15 tasks ranging from well-studied outcomes like 30-day readmission to lesser explored settings such as anticipating abnormal lab values. Finally, we release the weights of a foundation model pretrained on over 2.57M patient timelines and publish the code needed to replicate our results. We hope that this work represents a first step towards moving the field of ML for healthcare towards more reproducible and open model development. We thank the Stanford AIMI Center for their assistance in publishing this dataset. This work was supported in part by the Mark and Debra Leslie Endowment for AI in Healthcare and by Technology and Digital Solutions at Stanford Healthcare. MW is supported by an NSF Graduate Research Fellowship. § plain § SUPPLEMENTARY MATERIAL § AUTHOR RESPONSIBILITY STATEMENT The authors confirm that they bear all responsibility in case of violation of rights or licenses. § PUBLIC ACCESSIBILITY + LICENSES §.§ Dataset We release EHRSHOT under a research data use agreement. The dataset is available at the Stanford AIMI Center website, which is https://github.com/som-shahlab/ehrshot-benchmark/tree/mainlinked on our Github repo. Access is gated by a researcher data use agreement due to the sensitive nature of the dataset. We do not upload our dataset to another data repository due to these concerns. 
The Stanford AIMI Center is committed to the long-term preservation of the datasets it hosts, as well as their wider dissemination within the ML for healthcare community. The Stanford AIMI Center already hosts a number of datasets accepted into the NeurIPS Dataset Track <cit.> and thus has implemented procedures to comply with data release standards. In order to ensure we do not reveal Protected Health Information (PHI) in our dataset, we take several precautions. First, we only release deidentified data. The deidentification process has been previously described in <cit.>. Second, on top of this deidentification process, we also apply additional privacy-protecting transformations following the best practices of the MIMIC-III dataset <cit.>, which are detailed in Section <ref>. Third, we do not publish any clinical notes. Fourth, we release our dataset under a data usage agreement that requires researchers to register with their identity and gain approval before accessing the dataset. License: The license for the dataset is the standard Stanford University Dataset Research Use Agreement, and is reproduced below: By registering for downloads from the EHRSHOT Dataset, you are agreeing to this Research Use Agreement, as well as to the Terms of Use of the Stanford University School of Medicine website as posted and updated periodically at http://www.stanford.edu/site/terms/. Permission is granted to view and use the EHRSHOT Dataset without charge for personal, non-commercial research purposes only. Any commercial use, sale, or other monetization is prohibited. Other than the rights granted herein, the Stanford University School of Medicine (“School of Medicine”) retains all rights, title, and interest in the EHRSHOT Dataset. You may make a verbatim copy of the EHRSHOT Dataset for personal, non-commercial research use as permitted in this Research Use Agreement. If another user within your organization wishes to use the EHRSHOT Dataset, they must register as an individual user and comply with all the terms of this Research Use Agreement. YOU MAY NOT DISTRIBUTE, PUBLISH, OR REPRODUCE A COPY of any portion or all of the EHRSHOT Dataset to others without specific prior written permission from the School of Medicine. YOU MAY NOT SHARE THE DOWNLOAD LINK to the EHRSHOT Dataset to others. If another user within your organization wishes to use the EHRSHOT Dataset, they must register as an individual user and comply with all the terms of this Research Use Agreement. You must not modify, reverse engineer, decompile, or create derivative works from the EHRSHOT Dataset. You must not remove or alter any copyright or other proprietary notices in the EHRSHOT Dataset. The EHRSHOT Dataset has not been reviewed or approved by the Food and Drug Administration, and is for non-clinical, Research Use Only. In no event shall data or images generated through the use of the EHRSHOT Dataset be used or relied upon in the diagnosis or provision of patient care. THE EHRSHOT Dataset IS PROVIDED "AS IS," AND STANFORD UNIVERSITY AND ITS COLLABORATORS DO NOT MAKE ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS EHRSHOT Dataset. You will not make any attempt to re-identify any of the individual data subjects. Re-identification of individuals is strictly prohibited. Any re-identification of any individual data subject shall be immediately reported to the School of Medicine. 
Any violation of this Research Use Agreement or other impermissible use shall be grounds for immediate termination of use of this EHRSHOT Dataset. In the event that the School of Medicine determines that the recipient has violated this Research Use Agreement or other impermissible use has been made, the School of Medicine may direct that the undersigned data recipient immediately return all copies of the EHRSHOT Dataset and retain no copies thereof even if you did not cause the violation or impermissible use. In consideration for your agreement to the terms and conditions contained here, Stanford grants you permission to view and use the EHRSHOT Dataset for personal, non-commercial research. You may not otherwise copy, reproduce, retransmit, distribute, publish, commercially exploit or otherwise transfer any material. Limitation of Use: You may use EHRSHOT Dataset for legal purposes only. You agree to indemnify and hold Stanford harmless from any claims, losses or damages, including legal fees, arising out of or resulting from your use of the EHRSHOT Dataset or your violation or role in violation of these Terms. You agree to fully cooperate in Stanford’s defense against any such claims. These Terms shall be governed by and interpreted in accordance with the laws of California. §.§ Pretrained Foundation Model (CLMBR) We release CLMBR, a foundation model pre-trained on the structured EHR data of roughly 2.5 million patients at Stanford Medicine <cit.>. The model's weights can be found at https://github.com/som-shahlab/ehrshot-benchmark/tree/mainthe Github repo here. Access is gated by a researcher data use agreement due to the sensitive nature of the training dataset. A concern with the release of such a model is the lack of solid theoretical privacy assurances, thus creating the possibility of the model revealing medical data. To mitigate these concerns, we implement several additional precautions. First, the model is trained exclusively on deidentified data to eliminate the chance of any Protected Health Information (PHI) seeping into the model. Second, all unique text strings released as part of our CLMBR model's dictionary (e.g. terms such as "Yes" or "No" that serve as categorical variables) were manually reviewed to ensure they do not reveal any PHI. Third, we make our model available under a data usage agreement that requires researchers to register with their identity and gain approval before accessing the model. License: The license for the code for the model is here: https://github.com/som-shahlab/femr/blob/main/LICENSEhttps://github.com/som-shahlab/femr/blob/main/LICENSE. The license for the model weights is the same as the Stanford University Dataset Research Use Agreement license reproduced above. § DATASET DETAILS EHRSHOT contains a total of 39.2 million coded observations (e.g. diagnoses, procedures, medications, lab results, etc.) and 893,773 unique visits across 6,712 patients. §.§ Task Definitions Here, we detail the precise definitions for each of the 15 tasks for which we provide labels in our benchmark dataset. Operational Outcomes. These tasks are related to hospital operations. They are all binary classification tasks, and are defined as follows: * Long Length of Stay: Predict whether a patient's total length of stay during a visit to the hospital will be at least 7 days. The prediction time is at 11:59pm on the day of admission, and visits that last less than one day (i.e. discharge occurs on the same day of admission) are ignored. 
* 30-day Readmission: Predict whether a patient will be re-admitted to the hospital within 30 days after being discharged from a visit. The prediction time is at 11:59pm on the day of admission, and admissions where a readmission occurs on the same day as the corresponding discharge are ignored. * ICU Transfer: Predict whether a patient will be transferred to the ICU during a visit to the hospital. The prediction time is at 11:59pm on the day of admission, and ICU transfers that occur on the same day as admission are ignored. Anticipating Lab Test Results. These tasks are related to lab value prediction. They are all multiclass classification tasks. The prediction time is immediately before the lab result is recorded. They are defined as follows: * Thrombocytopenia: Predict whether a thrombocytopenia lab comes back as normal (>=150 10^9/L), mild (>=100 and <150 10^9/L), moderate (>=50 and <100 10^9/L), or severe (<50 10^9/L),. We consider all lab results coded as LOINC/LP393218-5, LOINC/LG32892-8, or LOINC/777-3. * Hyperkalemia: Predict whether a hyperkalemia lab comes back as normal (<=5.5 mmol/L), mild (>5.5 and <=6mmol/L), moderate (>6 and <=7 mmol/L), or severe (>7 mmol/L). We consider all lab results coded as LOINC/LG7931-1, LOINC/LP386618-5, LOINC/LG10990-6, LOINC/6298-4, or LOINC/2823-3. * Hypoglycemia: Predict whether a hypoglycemia lab comes back as normal (>=3.9 mmol/L), mild (>=3.5 and <3.9 mmol/L), moderate (>=3 and <3.5 mmol/L), or severe (<3 mmol/L). We consider all lab results coded as SNOMED/33747003, LOINC/LP416145-3, or LOINC/14749-6. * Hyponatremia: Predict whether a hyponatremia lab comes back as normal (>=135 mmol/L), mild (>=130 and <135 mmol/L), moderate (>=125 and <130 mmol/L), or severe (<125 mmol/L). We consider all lab results coded as LOINC/LG11363-5, LOINC/2951-2, or LOINC/2947-0. * Anemia: Predict whether an anemia lab comes back as normal (>=120 g/L), mild (>=110 and <120 g/L), moderate (>=70 and <110 g/L), or severe (<70 g/L). We consider all lab results coded as LOINC/LP392452-1. Please note that for the results of our baseline experiments in Section <ref>, we reframe these lab value tasks as binary classification tasks, where a label is "negative" if the result is normal and "positive" otherwise. Assignment of New Diagnoses. These tasks are related to predicting the first diagnosis of a disease. They are all binary classification tasks. The prediction time is at 11:59pm on the day of discharge from an inpatient visit, and we count any diagnosis that occurs within 365 days post-discharge as a positive outcome. We ignore all discharges in which the patient already has an existing diagnosis of a disease. The tasks are defined as follows: * Hypertension: Predict whether the patient will have her first diagnosis of essential hypertension within the next year. We define hypertension as an occurrence of the code SNOMED/59621000, as well as its children codes in our ontology. * Hyperlipidemia: Predict whether the patient will have her first diagnosis of hyperlipidemia within the next year. We define hyperlipidemia as an occurrence of the code SNOMED/55822004, as well as its children codes in our ontology. * Pancreatic Cancer: Predict whether the patient will have her first diagnosis of pancreatic cancer within the next year. We define pancreatic cancer as an occurrence of the code SNOMED/372003004, as well as its children codes in our ontology. * Celiac: Predict whether the patient will have her first diagnosis of celiac disease within the next year. 
We define celiac disease as an occurrence of the code SNOMED/396331005, as well as its children codes in our ontology. * Lupus: Predict whether the patient will have her first diagnosis of lupus within the next year. We define lupus as an occurrence of the code SNOMED/55464009, as well as its children codes in our ontology. * Acute MI: Predict whether the patient will have her first diagnosis of an acute myocardial infarction within the next year. We define myocardial infarction as an occurrence of the code SNOMED/57054005, as well as its children codes in our ontology. Anticipating Chest X-ray Findings. The chest X-ray findings task is a multilabel classification task to identify which of 14 possible findings were included in a chest X-ray report. The prediction time is 24 hours before the radiology report is recorded. The labels are derived by running the CheXpert NLP labeler on the unstructured text of the corresponding radiology report <cit.>. We do not release this unstructured text as part of our dataset due to patient privacy concerns. The possible findings are as follows: "No Finding", "Enlarged Cardiomediastinum", "Cardiomegaly", "Lung Lesion", "Lung Opacity", "Edema", "Consolidation", "Pneumonia", "Atelectasis", "Pneumothorax", "Pleural Effusion", "Pleural Other", "Fracture", "Support Devices". §.§ Dataset Format Our dataset is comprised of two main sets of tabular files: (A) Events files which contain all of the clinical events associated with every patient in our dataset, and (B) Labels files which contain the labels associated with all of our benchmark tasks for every patient in our dataset. (A) Events is as a set of CSV files containing every clinical event that happened to the patients in our dataset. Every row is a unique clinical event. Each CSV file shares the same column schema, which is as follows: * Patient ID - Integer - Unique identifier for patient * Start - Datetime - Start time of event * End - Datetime (optional) - End time of event * Code - String - Name of the clinical event (e.g. "SNOMED/3950001" or "ICD10/I25.110") * Value - Float/String (optional) - Either a numerical value associated with an event (e.g. a lab test result) or a string associated with a categorical variable (e.g. "Yes/No" questions) * Unit - String (optional) - Unit of measurement for Value * Visit ID - Integer (optional) - Unique identifier for the visit during which this event occurred * OMOP-CDM Table - String - Name of the source OMOP-CDM table where this event was recorded Every event is associated with the OMOP-CDM table in the source STARR database from which it was taken (OMOP-CDM Table) <cit.>. Researchers unfamiliar with the OMOP-CDM can simply ignore this column. (B) Labels is a set of CSV files containing the labels for every task for every patient. Every row is a unique label associated with a specific patient, task, and time point. Each CSV file shares the same column schema, which is as follows: * Patient ID - Integer - Unique identifier for patient * Prediction Time - Datetime - Time at which the prediction for this label is made * Value - Boolean / Integer - Value for this label. Boolean if task is binary classification. Integer if task is multiclass or multilabel classification. * Label Type - String - Type of task associated with this label. Can be "boolean" (binary classification), "numeric" (regression), "survival" (time-to-event), or "categorical" (multilabel or multiclass classification). 
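As a brief illustration of how the two file types fit together, each label row can be paired with the portion of its patient's timeline that precedes the prediction time, which is exactly the input a model is allowed to see. The sketch below uses the column names from the schemas above; the file names and paths are assumptions for illustration, not the released layout.

```python
# Sketch: pair each label with the events visible at its prediction time.
# File names are illustrative; see the repo for the released layout.
import pandas as pd

events = pd.read_csv("ehrshot/events.csv", parse_dates=["Start"])
labels = pd.read_csv("ehrshot/labels/readmission.csv", parse_dates=["Prediction Time"])

def history_before(patient_id, prediction_time):
    """Events for one patient up to and including the prediction time."""
    mask = (events["Patient ID"] == patient_id) & (events["Start"] <= prediction_time)
    return events.loc[mask].sort_values("Start")

row = labels.iloc[0]
history = history_before(row["Patient ID"], row["Prediction Time"])
print(f"Label value {row['Value']} with {len(history)} prior events")
```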
§.§ Data Preprocessing The source dataset we use, the Stanford STARR research database <cit.>, is an Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) <cit.> compliant transformation of data extracted from Stanford's production EHR system (Epic). We do not alter any of the transformations or deidentifiation steps in the ETL used to generate this OMOP-CDM extract described in <cit.>. Following the best practices of the MIMIC-III dataset, we apply several additional custom transformations to prevent data leakage and add an additional layer of patient privacy protections <cit.>. First, we jitter all dates within each patient timeline by the same random amount (to a random year between 2100 and 2200). Second, we remove all patients >= 89 years of age. Third, we remove all instances of free form text (i.e., notes and narratives). For the clinical events which take on categorical values specified as strings (e.g. a questionnaire which can be answered "Yes" or "No"), we select the top-100 most representative such categorical text strings, manually verify that they do not contain any PHI, and remove the rest of the text strings from our model release by replacing them with blank strings. This preserves roughly 65% of all categorical values in our dataset. Fourth, we adjust the timing of certain events to more realistically reflect the chronology of care delivery. Specifically, we move any events recorded before a patient's birth to after their time of birth; we set the start times of visits equal to the start time of the first event in each visit; we move billing codes recorded during a visit to the end of the visit; we move any event coded at midnight to 11:59pm of that day; we remove all duplicate codes that occur sequentially on the same day; and we remove all codes with `None` values that occur on the same day as an identical code with a non-`None` value associated with it. These transformations are all specified in code https://github.com/som-shahlab/ehrshot-benchmark/in our Github repo. §.§ Cohort Selection Process We selected a cohort of 6,712 patients for EHRSHOT from the larger STARR source dataset of 3.67M patients. Per the motivation of this project, we were primarily interested in few-shot evaluation of models across diverse tasks. Several of the tasks that would be of interest to a health system, however, have fairly low prevalence within the general patient population. Thus, we needed to construct our cohort in a way that preserved sufficient positive labels to enable downstream models to conduct few-shot learning. We aimed to have at least k = 128 positive and negative examples in each of the train/val/test splits for every task that we considered in order to allow for a broad range of few-shot learning scenarios, and at least k = 128 positive examples for each label within a multiclass or multilabel classification task. Where this was not possible (e.g. the Celiac task), we included as many positive labels in each split as possible. We began with our set of 15 tasks of interest. For each task, we labeled all patients within our source database per that task's definition. For tasks that have a low prevalence (which we consider as a 1:5 ratio of positive to negative labels), we subsample negative labels to bring the prevalence of positive labels up to that ratio. We then subsample further for few-shot evaluation, selecting 128 unique patients for each split who have at least one positive label for the task. 
We then sample sufficient negative labels to maintain the chosen prevalence. We repeat this process for all tasks to arrive at our final cohort of patients. For each successive task, we prioritize selecting patients that have already been sampled into our cohort to reduce the total number of patients added to our cohort (since some patients have positive labels for multiple tasks). § RESULTS DETAILS In order to fully reproduce our results, please follow the instructions at our Github repo here: https://github.com/som-shahlab/ehrshot-benchmarkhttps://github.com/som-shahlab/ehrshot-benchmark. §.§ Problem Formulation Our dataset and models can be formulated as follows. Our dataset 𝒟 = ({ (𝐗_p, 𝐘_p) }_p = 1^|𝒫| contains the full coded medical timeline (𝐗_p) and task-specific set of labels (𝐘_p) for each patient p ∈𝒫, for a total of |𝒫| patients. Each patient p is defined by a sequence of clinical events 𝐗_p = { x_p1, x_p2, ..., x_pn}, where x_pi denotes the ith code in the timeline of patient p. Note that a code x_pi can be any form of structured data taken from the patient's EHR, including a diagnosis, procedure, medication prescription, lab test, etc. We define 𝐗_p^(t) to be the patient timeline up to time t – i.e. if event x_pj occurs before or at t but x_p(j+1) occurs after t, then if 𝐗_p = { x_p1, ..., x_pj, x_p(j+1), ..., x_pn} we have that 𝐗_p^(t) = { x_p1,..., x_pj}. In addition to the timeline of each patient, our dataset also contains labels for each task and patient. We define benchmark tasks b ∈ℬ, where |ℬ| = 15 for our dataset. Each patient has a set of labels 𝐘_p = { y^(t_1)_p b_1, y^(t_2)_p b_1,..., y^(t_L)_p b_|ℬ|}, where L is the total number of labels for patient p, and the expression y^(t_j)_p b_i represents the label for patient p for task b_i at time point t_j. We are interested in making predictions of the following format: Given a patient p's entire medical history up to and including time point t (i.e. 𝐗_p^(t)), predict the value of y^(t)_p b for each corresponding benchmark task b ∈ℬ where such a label exists. §.§ Count-Based GBM We train a LightGBM model as one of our baseline models. In order to train such a model on a patient's timeline, we must first featurize the timeline into a vector. We follow best practices for competitive baseline models by using count-based featurization, in which a patient is transformed into a vector containing the counts of how many times each clinical event has occurred in that patient's timeline prior to the prediction time point <cit.>. Let 𝒞 be the set of all unique medical codes in our dataset. Let us consider making a prediction for patient p at time t. Then the count-based featurization for p at time t is given by the vector 𝐩^(t)∈ℕ^|𝒞|, where each element is defined as 𝐩^(t)_i = ∑_x_j ∈𝐗^(t)_p I(x_j = i), i.e. the count of medical code i recorded for patient p before the prediction time t. Stacking these patient vectors results in a count matrix 𝐌∈ℕ^|𝐘| × |𝒞|. As there are hundreds of thousands of unique codes, most of which occur infrequently among patients, this results in a very high-dimensional and sparse matrix. To help address the sparseness of 𝐌, we use a technique called ontology expansion <cit.>, in which we count each occurrence of a code once for the code itself, and once for every parent node of that code in the OMOP ontology up to the root node of our ontology. Consider the ICD10 code E10.1 (Type 1 diabetes mellitus). 
Any occurrence of this code in a patient's timeline should also give the patient "credit" for having the parent codes of E10.1 – E10 (Type 1 diabetes mellitus) and E08–E13 (Diabetes mellitus). This is because having E10.1 implies that the patient has E10 and E08–E13. We leverage existing OMOP ontology tools for ontology expansion and map codes to their ancestors. Then, when constructing our count matrix 𝐌, we count each occurrence of a code for both that code and all of its parent codes. We refer to this ontology-expanded version of our count matrix as 𝐌'. Once the ontology-expanded count matrix 𝐌' is generated, a LightGBM model is trained on this input to predict the target label for each task <cit.>. Hyperparameter tuning is performed on a validation set following the schedule described in Table <ref>.

§.§ CLMBR

For CLMBR, each unique medical code c ∈𝒞 is associated with a d-dimensional embedding e^c ∈ℝ^d. Each medical code x_pi = c in patient p's timeline is associated with both a code embedding e^c and a position embedding e^s, which is defined using rotary position embeddings <cit.>. Thus, the input to the model for x_pi is given by the concatenation of these vectors, i.e. e^c ‖ e^s. For our model, the code embeddings e^c are generated using a standard embedding layer with a vocabulary size of |𝒞| = 65,536. Although our dataset contains more unique codes, we keep only the 65,536 codes with the highest contribution to the overall entropy of the dataset; the remaining codes are discarded to keep the size of our model's dictionary tractable. Given this fixed dictionary, a classification task is defined to predict the next code in a patient's timeline given their preceding codes. The output of the transformer at each step i is a d-dimensional vector representation of the cumulative information up to and including event x_pi. We stack these representations for patient p into a matrix 𝐑_𝐩∈ℝ^|𝐗_𝐩| × d such that 𝐑_𝐩_i is the cumulative d-dimensional representation of all events up to and including event i for patient p. We then take the dot product of each row in this matrix with every code embedding e^j for all j ∈𝒞 in order to calculate a logit for each code j at each event i, yielding logit_pij = 𝐑_𝐩_i · e^j. The model is then trained end-to-end with the standard cross-entropy (negative log-likelihood) objective, using an indicator variable I_pij that marks whether the next event for patient p after event i has code j. The corresponding likelihood, L(I | logit), is computed as: L(I | logit) = ∏_p, i, j softmax(logit_pi)_j^I_pij, and training minimizes its negative logarithm.
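For readers who prefer code to notation, the objective above can be written in a few lines. The sketch below is a toy-scale illustration with made-up dimensions, not the released training code: it computes the next-code logits as dot products between the per-event transformer outputs 𝐑_𝐩 and the code embeddings, and evaluates the negative log-likelihood with indicator targets.

```python
# Toy-scale sketch of the CLMBR next-code objective (not the released code).
import numpy as np

rng = np.random.default_rng(0)
n_events, d, vocab = 6, 8, 20                     # made-up sizes for illustration
R = rng.normal(size=(n_events, d))                # transformer outputs R_p (one row per event)
E = rng.normal(size=(vocab, d))                   # code embeddings e^j
next_codes = rng.integers(vocab, size=n_events)   # index of the true next code per event

logits = R @ E.T                                  # logit_pij = R_p_i . e^j
shifted = logits - logits.max(axis=1, keepdims=True)          # numerical stability
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

# Negative log-likelihood: -sum_{i,j} I_pij * log softmax(logit_pi)_j
nll = -log_probs[np.arange(n_events), next_codes].sum()
print(f"negative log-likelihood for this patient: {nll:.3f}")
```

In practice this computation is batched across patients and implemented with the standard cross-entropy primitives of a deep learning framework.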